Can it be rational to overweight very unlikely events?

In July 2016, Jakub Steiner, CERGE-EI Associate Professor, published a paper called “Perceiving Prospects Properly,” co-authored with Colin Stewart, in the American Economic Review. Tim Hyde subsequently wrote a short article introducing the main points of the paper, which is reproduced below.

Doc. Mgr. JAKUB STEINER, PhD.

Jakub Steiner is a Czech economist whose research focuses on game theory and economic theory. He studies behavior in strategic situations with the possibility of self-fulfilling prophecies, such as those that arise during currency attacks, bank runs, and revolutions.

Since 2012, Jakub Steiner has been an Associate Professor with tenure at CERGE-EI (under US permanent charter) and a member of the Executive and Supervisory Committee of CERGE-EI.


Can it be rational to overweight very unlikely events? [1]
Why this behavioral bias might actually be a canny evolutionary strategy

Using tools gleaned from behavioral economics to help people make better decisions is all the rage these days. This work is built on the theory that behavioral biases – like the sunk cost fallacy, status quo bias, or the tendency to give outsize attention to very unlikely events – are cognitive mistakes that are holding us back. If we can somehow suppress these biases, people will make better choices about saving for retirement, purchasing insurance, and avoiding self-destructive behaviors like smoking or drugs.

But how confident are we that these behavioral biases are a bad thing? Eons of natural selection have failed to weed them out of the human gene pool, so they might serve some sort of purpose after all. A new paper appearing in the American Economic Review argues that at least one apparent behavioral “mistake” could make a surprising amount of evolutionary sense.

In “Perceiving Prospects Properly,” authors Jakub Steiner and Colin Stewart focus on the tendency of people to overweight small probabilities in their minds, making long-odds gambles more appealing and small risks of catastrophic loss scarier. A famous early study of behavioral biases by Daniel Kahneman and Amos Tversky showed that test subjects did this when asked about various hypothetical lotteries. Subsequent evidence from studies in different settings around the world shows that people routinely inflate small probabilities when answering these types of questions.
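To make “overweighting” concrete, here is a small sketch of the probability weighting function that Tversky and Kahneman later fitted to experimental data in their 1992 follow-up work. This is the empirical curve from the lab literature, not the function in Steiner and Stewart’s model:

```python
# Tversky-Kahneman (1992) probability weighting function for gains:
#   w(p) = p^g / (p^g + (1 - p)^g)^(1/g)
# with g ~= 0.61 as estimated from lab subjects' choices.
def weight(p, gamma=0.61):
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

for p in [0.001, 0.01, 0.10, 0.50, 0.90, 0.99]:
    print(f"true p = {p:5.3f}  ->  decision weight = {weight(p):.3f}")
```

Running this shows the pattern the lab studies found: a 1% chance carries a decision weight of roughly 5%, while a 99% chance is shaded down to about 91% – small probabilities are inflated and near-certainties are discounted.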

Evidence from outside the lab shows that people play the lottery and bet on longshots at the racetrack more often than they would if they didn’t overweight small probabilities. This is distinct from the problem of people overestimating the chances of rare but graphic risks like shark attacks or catching Ebola. Even when people know for a fact what the odds are – as in a state lottery or the laboratory experiments discussed above – those unlikely events tend to get more attention.

To explain this apparent quirk of human decision making, the authors adopt the emerging economic view of evolution as a kind of principal-agent problem: nature is the principal, and the individual decision maker is the agent acting on its behalf. Through the selective pressure of evolution, nature gives humankind the tools it needs to survive in a harsh and uncertain world, but must optimize those tools subject to physical constraints on neural processing.

The authors note recent developments in neuroeconomic research emphasizing the diffuse decision process in the human brain. It seems that when you are offered a gamble, like the choice between a 1% chance of winning $1,000 and a sure payout of $5, different parts of your brain react to different parts of the proposal.

One brain region might concern itself with figuring out just how unlikely a 1% chance actually feels, while a separate part of the brain might be considering just how nice it would be to have $1,000 in winnings versus a meager $5 payment. The brain science is far from settled, but the authors’ theory is that overweighting of small probabilities might be a good thing if the different parts of our brains are prone to communication errors during this decision process.
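For the gamble above, the arithmetic makes the effect easy to see. The numbers below are illustrative: the ~5% perceived weight for a 1% chance is borrowed from the weighting sketch earlier, not taken from the paper:

```python
# The gamble from the text: a 1% chance of $1,000 versus a sure $5.
p, prize, sure_thing = 0.01, 1000, 5

true_ev = p * prize                   # 0.01 * 1000 = $10
print(f"true expected value:      ${true_ev:.2f}  (sure option: ${sure_thing})")

# If a 1% chance *feels* like roughly 5% (see the weighting sketch above),
# the gamble seems about five times more valuable than it really is.
perceived_ev = 0.05 * prize           # 0.05 * 1000 = $50
print(f"perceived expected value: ${perceived_ev:.2f}")
```

Even a risk-neutral calculator would take this particular gamble ($10 beats $5), but an overweighter would take it far more eagerly – and would keep taking similar gambles even when the prize shrinks enough to make them bad bets.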

The authors imagine an evolutionary setting for early humans where most gambles aren’t worth taking because the rewards aren’t that high – it’s usually better to stay with the rest of the tribe rather than chase a wild boar over the hill, for example.

But every so often, people will make mental errors calculating the risks and decide to take a gamble when they really shouldn’t. In a world with a lot of hazards and few major rewards, gambles that look good to humans are more likely to be the result of errors than truly good options – and Nature will end up with a lot of overconfident humans eaten by wild boars.
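This “looks good because of an error” logic is a pure selection effect, and a few lines of simulation make it visible. The distributions below are arbitrary assumptions chosen only to match the story of a world where most gambles are bad and assessments are noisy:

```python
import random

random.seed(0)
N = 100_000
looked_good, actually_good = 0, 0

for _ in range(N):
    true_value = random.gauss(-1.0, 1.0)             # most gambles are bad
    perceived = true_value + random.gauss(0.0, 1.5)  # assessed with error
    if perceived > 0:            # the gamble looks worth taking
        looked_good += 1
        if true_value > 0:
            actually_good += 1

print(f"gambles that looked good: {looked_good}")
print(f"of those, actually good:  {actually_good} "
      f"({100 * actually_good / looked_good:.0f}%)")
```

In this toy world, most of the gambles that look good turn out to be bad: conditioning on “this looks attractive” selects disproportionately for assessment errors.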

In this kind of unforgiving and uncertain world, where the human brain is routinely making errors in assessing gambles, Nature wants to discourage risk-taking. But instilling a blanket preference for never taking risks or never straying too far from home could be counterproductive if better risks start emerging over time. Put another way, people whose genes prevent them from ever taking risks will be outcompeted and eventually outnumbered by people who recognize a smart risk when they see one.

In the authors’ model, people are actually better off if the part of the brain that receives information about the likelihood of an outcome skews that information before sharing it with another part of the brain that has to weigh the final decision. The first part of the brain will send a signal that a 1% chance is more like a 10% chance, or a 99% chance is more like a 90% chance.

This self-deception is useful because the message is naturally transmitted through the brain with error. Anticipating this, the first part of the brain distorts its signal so that even when the signal arrives slightly higher or lower than it was sent, the second part of the brain is less likely to conclude that any one outcome is nearly impossible or all but assured.
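The logic resembles robust signal coding: compress the extremes before a noisy channel so that noise cannot push the received message into “impossible” or “certain” territory. The sketch below is a toy illustration of that idea; the linear squeeze and the noise level are invented for the example and are not the authors’ functional forms:

```python
import random

random.seed(1)
NOISE = 0.05      # assumed transmission error between brain regions
TRIALS = 10_000

def distort(p):
    # Assumed toy encoding: squeeze [0, 1] into [0.1, 0.9], so a 1% chance
    # is sent as ~11% and a 99% chance as ~89%, echoing the distortion
    # described in the text.
    return 0.1 + 0.8 * p

def received(signal):
    # Add channel noise and clamp to the valid probability range.
    return min(1.0, max(0.0, signal + random.gauss(0, NOISE)))

for p in (0.01, 0.99):
    raw = sum(received(p) in (0.0, 1.0) for _ in range(TRIALS))
    enc = sum(received(distort(p)) in (0.0, 1.0) for _ in range(TRIALS))
    print(f"true p = {p:.2f}: raw signal read as impossible/certain "
          f"{100 * raw / TRIALS:.0f}% of the time; distorted signal "
          f"{100 * enc / TRIALS:.0f}%")
```

Without the distortion, noise pushes a 1% signal below zero (read as “impossible”) or a 99% signal above one (read as “certain”) in a large share of transmissions; with it, those extreme readings almost never occur.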

Assuming that most gambles are actually bad gambles and that opportunities for huge payoffs are few and far between, the authors show that this convoluted process tends to make gambles look less appealing. Since a gamble that looks good after an erroneous signal is still more likely than not to be a bad option, this caution helps humans avoid getting themselves into trouble.

In the modern era, people don’t have to worry as much about wild boars, but there are new sorts of gambles like state lotteries where there is a small chance of an enormous prize (a scenario that the authors argue was much more unusual in prehistoric times).

This trick of overweighting small probabilities can backfire in this context, and some people have figured out how to profit from this apparent wrinkle in human decision making. When it comes to a gamble like the Powerball lottery, it’s astoundingly hard to win but extremely easy to imagine how fun winning would be.

With all of these novel man-made temptations, overweighting small probabilities may not be an evolutionary advantage anymore. But the authors argue that these findings should give us pause when we are trying to correct other behavioral biases we have discovered – they might be there for a reason.


“Perceiving Prospects Properly” appears in the July 2016 issue of the American Economic Review.


[1] Hyde, Tim. “Can it be rational to overweight very unlikely events? Why this behavioral bias might actually be a canny evolutionary strategy.” American Economic Association [online], accessed 2016-08-02. https://www.aeaweb.org/research/can-it-be-rational-overweight-unlikely-events
