Richard Thaler


More information readily available to us, more than we ever dreamed we could possess, has not clearly improved our decision-making. Why? Perhaps, like Chauncey Gardiner, we like to watch, but what we really love is to see what we want to see. Or maybe we just can’t assimilate the endless reams of virtual data.

In an Ask Me Anything at Reddit, behavioral economist Richard Thaler, who’s just published Misbehaving, has an interesting idea: What about online decision engines that help with practical problems the way Expedia does with travel itineraries? Not something feckless like the former Ask Jeeves, but a machine wiser and deeper.

Such a nudge would raise all sorts of ethical questions. Should we be offloading decisions (or even a significant part of them) to algorithms? Are the people writing the code manipulating us? But it would be a fascinating experiment.

The exchange:

Question:

Do you think, with rapid advances in data collection, machine learning, ubiquity of technology that lowers barrier for precise calculation/ data interpretation etc, consumers/ humans will start to behave more like Econs? Do you think that would be OPTIMAL, i.e in our best interests? It seems a big ‘flaw’ in AI/ robotics right now is that they are not ‘human like,’ i.e. they are too much like Econs, they make no mistakes and always make optimal choices. Do you think it’s more optimal for human to become more like robots/ machine that make no ‘irrational’ errors? Do you think it would eventually become that way when technology makes it much lower efforts to actually evaluate rather than rely on intuitive heuristics?

Richard Thaler:

Two parts to this.

One is: I’ve long advocated using big data to help people make better decisions, an effort I call “smart disclosure.” I’ve written a couple of New York Times columns devoted to this topic. The idea is that by making better data available, we can create new businesses that I call “choice engines.”

Think of them like travel websites that would make, say, choosing a mortgage as easy as finding a plane ticket from New York to Chicago.

More generally, however, the goal is not to turn humans into Econs. Econs (*not economists) are jerks.

Econs don’t leave tips at restaurants they never intend to go back to. Don’t contribute to NPR. And don’t bother to vote.•


When Thomas Friedman devised his Golden Arches Theory of war, he somehow forgot that humans aren’t rational creatures. Our decisions are often perplexing, and the few among us who make sober, clear-headed calculations are frequently viewed as something other than human. What’s wrong with them?

In a Bloomberg View piece about Richard Thaler’s professional memoir, Misbehaving, Michael Lewis writes of an economist who had a simple-yet-significant epiphany: We make screwy choices, often to our own detriment.

An excerpt:

At any rate, in addition to calculating the market’s price for a human life, Thaler got distracted by how much fun he might have if he asked actual human beings how much they needed to be paid to run the risk of dying. He began with his own students, telling them to imagine that by attending his lecture, they had exposed themselves to a rare fatal disease. There was a 1 in 1,000 chance they had caught it. There was a single dose of the antidote: How much would they be willing to pay for it?

Then he asked them the same question, in a different way: How much would they demand to be paid to attend a lecture in which there is a 1 in 1,000 chance of contracting a rare fatal disease, for which there was no antidote?

The questions were practically identical, but the answers people gave to them were — and are — wildly different. People would say they would pay two grand for the antidote, for instance, but would need to be paid half a million dollars to expose themselves to the virus. “Economic theory is not alone in saying that the answers should be identical,” writes Thaler. “Logical consistency demands it. … To an economist, these findings are somewhere between puzzling and preposterous. I showed them to (his thesis adviser) and he told me to stop wasting my time and get back to work on my thesis.”

Instead, Thaler began to keep a list of things that people did that made a mockery of economic models of rational choice. There was the guy who planned to go to the football game, changed his mind when he saw it was snowing, and then, when he realized he had already bought the ticket, changed his mind again. There was the other guy who refused to pay $10 to have someone mow his lawn but wouldn’t accept $20 to mow his neighbor’s. There was the woman who drove 10 minutes to a store in order to save $10 on a $45 clock radio but wouldn’t drive the same amount of time to save $10 on a $495 television. There were the people Thaler invited over to dinner, to whom he offered, before dinner, a giant bowl of nuts. They ate so many nuts they had no appetite for the far more appealing meal. The next time they came to dinner Thaler didn’t offer nuts — and his guests were happier.
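The gap Lewis describes can be put in plain arithmetic: dividing each answer by the 1-in-1,000 risk it offsets gives the implied value of a statistical life, and the two framings imply wildly different values. A quick back-of-the-envelope sketch, using the dollar figures from the excerpt (the function name is mine):

```python
def implied_value_of_life(payment, risk):
    """Payment divided by the probability of death it offsets
    gives the implied value of a statistical life."""
    return payment / risk

risk = 1 / 1000

# Willing to pay $2,000 for the antidote: $2,000 / 0.001 = $2 million.
wtp = implied_value_of_life(2_000, risk)

# Demanding $500,000 to accept the exposure: $500,000 / 0.001 = $500 million.
wta = implied_value_of_life(500_000, risk)

# The same risk, framed two ways, yields valuations 250x apart.
print(wtp, wta, wta / wtp)
```

An Econ would give the same number both times; Thaler's students were off by a factor of 250.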


An Economist article about Richard Thaler’s new book, Misbehaving: The Making of Behavioral Economics, looks at how this sub-field of the dismal science still meets resistance when applied beyond the granular level, to the illogic of the broader economy rather than the folly of the individual. The opening:

CAB drivers have good days and bad days, depending on the weather or special events such as a convention. If they were rational, they would work hardest on the good days (to maximise their take) but give up early when fares are few and far between. In fact, they do the opposite. It seems they have a mental target for their desired daily income and they work long enough to reach it, even though that means working longer on slow days and going home early when fares are plentiful.
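The income-targeting rule the article describes is simple: drive until the daily target is hit, so hours worked fall as hourly fares rise. A toy sketch of that rule (the target and fare figures are hypothetical):

```python
def hours_to_hit_target(fare_per_hour, daily_target):
    """Income-targeting cabbie: work just long enough to reach the target."""
    return daily_target / fare_per_hour

DAILY_TARGET = 200             # hypothetical income target, in dollars
slow_day, busy_day = 20, 40    # hypothetical fares earned per hour

# The targeting rule yields more hours on slow days and fewer on busy ones,
# the opposite of what a take-maximising (rational) driver would choose.
print(hours_to_hit_target(slow_day, DAILY_TARGET))   # 10.0 hours
print(hours_to_hit_target(busy_day, DAILY_TARGET))   # 5.0 hours
```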

Human beings are not always logical. We treat windfall gains differently from our monthly salary. We value things that we already own more highly than equivalent things we could easily buy. Our responses to questions depend very much on how the issue is framed: we think surcharges on credit-card payments are unfair, but believe a discount for paying with cash is reasonable.

None of these foibles will be a surprise to, well, humans. But they are not allowed for in many macroeconomic models, which tend to assume people actually come from the planet Vulcan, all coolly maximising their utility at every stage. Over the past 30-40 years, in contrast, behavioural economists have explored the way that individuals actually make decisions, and have concluded that we are more Kirk than Spock.•


An 1895 chart of phrenology.

Over at Edge, economist Richard Thaler asked the science site’s contributors for responses to this question: “The flat earth and geocentric world are examples of wrong scientific beliefs that were held for long periods. Can you name your favorite example and for extra credit why it was believed to be true?”

I think philosopher Eduardo Salcedo-Albarán had the most intriguing and humane answer:

“Phrenology and lobotomy. Even though these were not scientific paradigms, they clearly illustrate how science affects people’s lives and morality. For those not engaged in scientific work, it is easy to forget that technology, and a great part of contemporary Western culture, results from science. However, people tend to interpret scientific principles and findings as strange matters that have nothing to do with everyday life, from gravity and evolution to physics and pharmacology.

Phrenology is defined as the ‘scientific’ relation between the skull’s shape and behavioral traits. It was applied to understand, for example, the reason for the genius of Professor Samuel F. B. Morse. However, it was also applied in prisons and asylums to explain and predict criminal behavior. In fact, it was also assumed that the skull’s shape explained an incapacity to act according to the law. If you were spending your life in an asylum or a prison in the 19th century because of a phrenological ‘proof’ or ‘argument,’ you could perfectly well understand how important science is in your life, even if you were not a scientist. Even more so if you were about to become a lobotomy patient in the past century.

In 1949, António Egas Moniz was awarded the Nobel Prize in Physiology or Medicine for discovering the therapeutic value of lobotomy, a surgical procedure that, in its transorbital version, consisted of introducing an ice pick through the eye socket to disconnect the prefrontal cortex. Thousands of lobotomies were performed between the 1940s and the early 1960s, with Rosemary Kennedy, sister of President John F. Kennedy, on the list of recipients; all of them with the scientific seal of a Nobel Prize. Today, half a century later, it seems unthinkable to apply such a ‘scientific’ therapy. I keep asking myself: what if a mistake like this one is adopted today as public-health policy?

Science affects people’s lives directly. A scientific mistake can send you to jail or break your brain into pieces. It also seems to shape the moral stances that we adopt. Today, it would be morally reprehensible to send someone to jail because of the shape of his head, or to perform a lobotomy. However, 50 or 100 years ago it was morally acceptable. This is why we should spend more time thinking of practical issues, like scientific principles, scientific models, and scientific predictions as a basis for public-health and policy decisions, rather than guessing about what is right or wrong according to god’s mind or the unsubstantiated beliefs presented by special interest groups.”
