We 1st met these 2 scholars in the book "Misbehaving" by Richard Thaler, with whom they worked for a while. I blogged about that book on November 15, 2015. I would recommend reading that post first, as in this post I give short shrift to the topics already covered in the earlier post. The 2 works together give a big picture of behavioral economics, with "Misbehaving" coming in from the economics side, and "Thinking, Fast and Slow" coming in from the psychological side.
The book is 512 pages long. It has an Introduction, a Conclusion, and 38 chapters in 5 parts. It also contains reprints of 2 of their groundbreaking papers:
- "Judgment Under Uncertainty: Heuristics and Biases"
- "Choices, Values, and Frames"
- Part I. Two Systems - 9 chapters
- Part II. Heuristics and Biases - 9 chapters
- Part III. Overconfidence - 6 chapters
- Part IV. Choices - 10 chapters
- Part V. Two Selves - 4 (short) chapters
Kahneman states his purpose in the Introduction:
improve the ability to identify and understand errors of judgment and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them.
Some previews of what we will be getting into:
My main aim here is to present a view of how the mind works that draws on recent developments in cognitive and social psychology. One of the more important developments is that we now understand the marvels as well as the flaws of intuitive thought.
This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.
This next statement seems a little glib, maybe, but it makes sense.
our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.
the unfortunate tendency to treat problems in isolation, and with framing effects, where decisions are shaped by inconsequential features of choice problems.
The premise of this book is that it is easier to recognize other people's mistakes than our own.
Part I is spent exploring the 2 types of thinking named in the title. "Thinking, Fast" he refers to as "System 1", "Thinking, Slow" he refers to as "System 2".
System 1 is our fabulous subconscious intuition, which generally operates in a fraction of a second. But, to operate that quickly, it makes simplifying assumptions and uses heuristics which apparently had evolutionary value, but which are subject to numerous sources of error.
- System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
- System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
System 2 is our conscious mind, with its powers of reasoning, calculation, and concentration. It also is prone to errors, perhaps the worst of which is that it is lazy, and doesn't question System 1's snap judgements often enough.
One way you can tell when System 2 is engaged is by measuring the dilation of the pupils. When you concentrate, your pupils dilate. Who knew?
the psychologist Eckhard Hess described the pupil of the eye as a window to the soul.
[Hess's article] ends with two striking pictures of the same good-looking woman, who somehow appears much more attractive in one than in the other. There is only one difference: the pupils of the eyes appear dilated in the attractive picture and constricted in the other.
System 2 has limited capabilities: apparently you can't use System 2 and walk at the same time:
While walking comfortably with a friend, ask him to compute 23 × 78 in his head, and to do so immediately. He will almost certainly stop in his tracks.
Another thing about the type of activities performed by System 2 is that using will power engages it and burns mental energy.
both self-control and cognitive effort are forms of mental work. Several psychological studies have shown that people who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to the temptation.
The phenomenon has been named ego depletion.
One of System 1's skills is association, as in free association. Association is subject to the priming effect - if you have been talking about eating, you will pick out words for food rather than others; if you have been shown words relating to old age, you will complete a task more slowly, like an old person. The 2nd instance is known as the ideomotor effect.
Reciprocal priming effects also exist. Plaster a smile on your face, you will be happier. Nod your head while listening to a message, you will tend to agree with it; shake your head, and the opposite occurs.
You can see why the common admonition to “act calm and kind regardless of how you feel” is very good advice: you are likely to be rewarded by actually feeling calm and kind.
2 more interesting examples of priming. The 1st example involves money, which is very important in our culture, but perhaps not so much so in other societies.
money primes individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others.
The 2nd example is that after being asked to do something bad, people are primed to want to clean themselves.
Feeling that one’s soul is stained appears to trigger a desire to cleanse one’s body, an impulse that has been dubbed the “Lady Macbeth effect.”
Kahneman talks about cognitive ease vs cognitive strain. System 1 is usually very happy to take the easy path. Some of the ways to create cognitive ease make you almost embarrassed for our minds' foolishness. Given 2 written answers, people who don't know the correct one will choose an answer in bold font over one in normal font.
Manipulating System 1 by creating cognitive ease can create truth illusions. The role that "repeated experience" plays in creating cognitive ease is known as the exposure effect.
the mere exposure effect is actually stronger for stimuli that the individual never consciously sees.
The familiar makes us comfortable; the unusual makes us wary - the root of the urge to conservatism.
Survival prospects are poor for an animal that is not suspicious of novelty.
In summary:
good mood, intuition, creativity, gullibility, and increased reliance on System 1 form a cluster. At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together.
Further exploring System 1, Kahneman posits that it makes heavy use of norms.
We have norms for a vast number of categories, and these norms provide the background for the immediate detection of anomalies such as pregnant men and tattooed aristocrats.
System 1 also likes to believe that things happen for a reason. It creates "the perception of intentional causality". It is also "a machine for jumping to conclusions".
When uncertain, System 1 bets on an answer, and the bets are guided by experience. The rules of the betting are intelligent: recent events and the current context have the most weight in determining an interpretation.
System 2 is biased to go with the flow, to believe and confirm, rather than to unbelieve. And, "when System 2 is otherwise engaged, we will believe almost anything". Interesting work by psychologist Daniel Gilbert:
Gilbert proposed that understanding a statement must begin with an attempt to believe it: you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it.
Another flaw of System 1 is confirmation bias.
The confirmatory bias of System 1 favors uncritical acceptance of suggestions and exaggeration of the likelihood of extreme and improbable events.
Then there is the halo effect.
The tendency to like (or dislike) everything about a person — including things you have not observed — is known as the halo effect.
One very useful example of a technique to "decorrelate error" deals with conducting a meeting to discuss a topic. Kahneman recommends having everyone write down their thoughts on the subject before the meeting begins.
The procedure I adopted to tame the halo effect conforms to a general principle: decorrelate error!
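The statistics behind "decorrelate error!" can be sketched with a quick simulation (my own illustration, not from the book): averaging 10 judges' independent errors shrinks the group's error, but if an early, assertive speaker gives everyone a shared bias, averaging stops helping.

```python
import random
import statistics

random.seed(42)

TRUE_VALUE = 100.0
JUDGES = 10
TRIALS = 20_000

def committee_estimate(shared_bias_sd, private_noise_sd):
    """Average of the judges' estimates for one meeting."""
    shared = random.gauss(0, shared_bias_sd)  # bias all judges absorb, e.g. from an early speaker
    return statistics.mean(TRUE_VALUE + shared + random.gauss(0, private_noise_sd)
                           for _ in range(JUDGES))

# Independent judges: every error is private, so errors tend to cancel out.
indep_err = statistics.mean(abs(committee_estimate(0, 10) - TRUE_VALUE)
                            for _ in range(TRIALS))
# Correlated judges: same total error per judge, but most of it is shared.
corr_err = statistics.mean(abs(committee_estimate(7, 7) - TRUE_VALUE)
                           for _ in range(TRIALS))

print(f"mean error, independent judges: {indep_err:.2f}")
print(f"mean error, correlated judges:  {corr_err:.2f}")
```

Having everyone write their opinion down before the discussion starts is a way of approximating the independent case.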
The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them.
Finally, Kahneman introduces a concept that he returns to many times in the book: WYSIATI.
Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking, and comes up so often in this book, that I will use a cumbersome abbreviation for it: WYSIATI, which stands for what you see is all there is.
In other words, System 1 will totally react based on the only information available, even when that information has problems. This leads to several biases which are discussed in more detail later in the book: overconfidence, framing effects, and base-rate neglect.
In the chapter "How Judgments Happen", we learn that many basic assessments have been hardwired by evolution into System 1. One example of a judgement heuristic is how we make snap judgements of people based on their faces.
In about 70% of the races for senator, congressman, and governor, the election winner was the candidate whose face had earned a higher rating of competence.
the effect of facial competence on voting is about three times larger for information-poor and TV-prone voters than for others who are better informed and watch less television.
One place where System 1 really falls down is in doing integration or addition.
Because System 1 represents categories by a prototype or a set of typical exemplars, it deals well with averages but poorly with sums. The size of the category, the number of instances it contains, tends to be ignored in judgments of what I will call sum-like variables.
System 1 also has a way of making WYSIATI worse:
we often compute much more than we want or need. I call this excess computation the mental shotgun.
Yet another flaw of System 1:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. I call the operation of answering one question in place of another substitution.
Hah, here's a fun fact. "Heuristic" comes from the same root as "eureka". Heuristics are what substitute the easy questions for the hard ones. Examples given are the mood heuristic for happiness, and the affect heuristic.
In Part II we look at more heuristics and learn how they become biases. For the rest of the book, one thing that becomes painfully apparent is that the human mind, particularly System 1, is incredibly bad at probability and statistics.
System 1 is inept when faced with “merely statistical” facts, which change the probability of outcomes but do not cause them to happen.
One example of how bad our intuitive statistical judgment can be comes from scientists themselves. Kahneman and Tversky found it was not at all unusual for scientists to use sample sizes for studies that were way too small. Intuitively, they seemed big enough, but, if you did the math, you found that random chance could easily overwhelm the statistical conclusions that were being made.
Researchers who pick too small a sample leave themselves at the mercy of sampling luck.
Kahneman refers to this as the law of small numbers.
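The book's hospital question makes the point concrete. The simulation below (my own sketch, with made-up birth counts) tallies how often a small hospital and a large hospital each record a day on which more than 60% of the babies born are boys:

```python
import random

random.seed(0)

def extreme_days(n_births, days=20_000):
    """Fraction of days on which more than 60% of n_births babies are boys."""
    hits = 0
    for _ in range(days):
        boys = sum(random.random() < 0.5 for _ in range(n_births))
        if boys > 0.6 * n_births:
            hits += 1
    return hits / days

small = extreme_days(15)   # small hospital: ~15 births a day
large = extreme_days(45)   # large hospital: ~45 births a day
print(f"small hospital: {small:.3f}   large hospital: {large:.3f}")
```

Same coin, same 50/50 odds, but the small sample wanders much further from 50% - exactly the sampling luck that undersized studies are exposed to.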
[I have had personal experience with how little intuition we have about probability and randomness. 25 years ago, I wrote a complex system to assign candidates to examiners for an oral examination. My customer insisted that the assignment should be random. But random assignment produced clumps of good and bad candidates that would throw the examiners off. I finally convinced them that what they really wanted was balanced assignment, which did indeed greatly improve examiner performance.]
Random processes produce many sequences that convince people that the process is not random after all.
Kahneman expands on the law of small numbers.
We are far too willing to reject the belief that much of what we see in life is random.
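You can feel this for yourself with coin flips (a quick sketch of mine, not from the book): genuinely random sequences are far streakier than intuition expects.

```python
import random

random.seed(1)

def longest_streak(flips):
    """Length of the longest run of identical outcomes."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

TRIALS = 10_000
streaks = [longest_streak([random.choice("HT") for _ in range(20)])
           for _ in range(TRIALS)]
frac = sum(s >= 4 for s in streaks) / TRIALS
print(f"20-flip sequences containing a streak of 4 or more: {frac:.0%}")
```

Most 20-flip sequences of a fair coin contain a streak of 4 or more, yet shown such a streak, people tend to conclude the coin is not fair.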
- The exaggerated faith in small samples is only one example of a more general illusion — we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
- Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations. Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong.
Another almost embarrassing bias is the anchoring effect.
It occurs when people consider a particular value for an unknown quantity before estimating that quantity.
It is a flavor of a priming effect.
If you are asked whether Gandhi was more than 114 years old when he died you will end up with a much higher estimate of his age at death than you would if the anchoring question referred to death at 35.
Kahneman for the 1st time gives advice on how to combat this bias: get System 2 activated by actively pursuing arguments against the anchor value.
Next we have the availability heuristic.
We defined the availability heuristic as the process of judging frequency by “the ease with which instances come to mind.”
The availability heuristic, like other heuristics of judgment, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors.
We now learn more about the affect heuristic.
people make judgments and decisions by consulting their emotions: Do I like it? Do I hate it? How strongly do I feel about it?
The availability heuristic combines with the affect heuristic to produce an availability cascade.
the importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.
I think we here in the US are now experiencing a potentially disastrous availability cascade: the presidential campaign of Donald Trump. The 1000s of hours of press coverage he has received have everyone's minds primed for more Trump news.
Another place where our ineptitude with probability shows is in our assessment of risks, referred to as probability neglect.
a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight — nothing in between.
A particular flavor of our poor statistical intuition is base-rate neglect. In an experiment in the early 70s, Kahneman and Tversky found that, if asked what field an individual with a stereotypical geek description was likely to be studying, subjects would base their answer solely on the description, and ignore the statistics of what majors are most common. The "base-rate" percentage that should be the starting point for an estimate of likelihood was "neglected" completely. The representativeness heuristic was used instead - the geek description was just too good a match for System 1 to ignore. If the subjects were asked to frown, which engages System 2, they "did show some sensitivity to the base rates."
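Bayes' rule shows what honoring the base rate looks like. The numbers below are entirely hypothetical (the book gives no figures for this experiment): even if the geek description fits computer science students 8 times as often as everyone else, a small base rate keeps the odds modest.

```python
# All three numbers are hypothetical, chosen only for illustration.
p_cs = 0.03          # base rate: share of grad students in computer science
p_desc_cs = 0.40     # chance a CS student fits the geek description
p_desc_other = 0.05  # chance any other grad student fits it

# Bayes' rule: P(CS | description)
posterior = (p_desc_cs * p_cs) / (p_desc_cs * p_cs + p_desc_other * (1 - p_cs))
print(f"P(CS | description) = {posterior:.2f}")  # ~0.20 despite the 8x likelihood ratio
```

The description shifts the odds, but the 3% base rate still leaves "probably not a CS student" as the right answer - the opposite of what representativeness whispers.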
Another way to combat this bias, and 2 points to remember:
instructing people to “think like a statistician” enhanced the use of base-rate information, while the instruction to “think like a clinician” had the opposite effect.
[1.] base rates matter, even in the presence of evidence about the case at hand
[2.] intuitive impressions of the diagnosticity of evidence are often exaggerated
The fact that one has to come up with ways to try to engage System 2 is an example of the laziness of System 2.
I think I had heard of this one before, which goes beyond bias to just plain wrong thinking: the conjunction fallacy
which people commit when they judge a conjunction of two events (here, bank teller and feminist) to be more probable than one of the events (bank teller) in a direct comparison.
So if event A by itself has some probability of being true, the probability of event A and a 2nd event B both being true has to be less than or equal to the probability of event A by itself. But if event B helps tell a story, or could be causally related to event A, or otherwise makes System 1 happy, System 1 completely ignores logic.
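The logic the subjects violated can be checked on any data set (my own sketch, not from the book): however strongly the two traits are correlated, the conjunction can never be more frequent than either trait alone.

```python
import random

random.seed(2)

# Synthetic population; feminism is deliberately made more common among
# tellers to show that even strong correlation cannot rescue the fallacy.
population = []
for _ in range(100_000):
    teller = random.random() < 0.05
    feminist = random.random() < (0.80 if teller else 0.30)
    population.append((teller, feminist))

tellers = sum(t for t, f in population)
feminist_tellers = sum(t and f for t, f in population)
print(f"bank tellers: {tellers}, feminist bank tellers: {feminist_tellers}")
```

Every feminist bank teller is, by definition, already counted among the bank tellers, so the conjunction count can only be smaller or equal.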
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting.
I found this next statistical bias to be fascinating. The statistical concept of regression to the mean almost seems a little spooky. It simply says that, for values that cluster around an average, an unusually high value will likely be followed by a lower one, and vice versa. System 1 completely ignores this law, preferring its many heuristics instead. Ha ha, I believe this anecdote about trials.
the statistician David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case. Why is it so hard? The main reason for the difficulty is a recurrent theme of this book: our mind is strongly biased toward causal explanations and does not deal well with “mere statistics.” When our attention is called to an event, associative memory will look for its cause — more precisely, activation will automatically spread to any cause that is already stored in memory. Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.
Knowing how our minds mishandle estimates by ignoring regression to the mean gives System 2 a tool to do a better job. But
Following our intuitions is more natural, and somehow more pleasant, than acting against them.
So, despite all of the faults of System 1, it still does some pretty amazing work.
Furthermore, you should know that correcting your intuitions may complicate your life. A characteristic of unbiased predictions is that they permit the prediction of rare or extreme events only when the information is very good.
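Regression to the mean is easy to demonstrate with a simulation (my own sketch): give each simulated person a stable skill plus one-off luck on each of two tests, and watch the top scorers on test 1 fall back toward the average on test 2.

```python
import random
import statistics

random.seed(3)

N = 20_000
skill = [random.gauss(0, 1) for _ in range(N)]      # stable ability
test1 = [s + random.gauss(0, 1) for s in skill]     # ability + luck
test2 = [s + random.gauss(0, 1) for s in skill]     # same ability, fresh luck

# Select the top 10% of performers on test 1...
cutoff = sorted(test1, reverse=True)[N // 10]
top = [i for i in range(N) if test1[i] > cutoff]

# ...and see how that same group does on test 2.
mean1 = statistics.mean(test1[i] for i in top)
mean2 = statistics.mean(test2[i] for i in top)
print(f"top group, test 1: {mean1:.2f}   same group, test 2: {mean2:.2f}")
```

The group is still above average on test 2 (the skill was real) but much less extreme (the luck was not). A causal story is tempting, but nothing caused the decline.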
Part III is titled "Overconfidence". We put much more faith in our intuitions via System 1 than we should. But we would probably find life terrifying without some overconfidence. However, there are several places in our society where this overconfidence is passed off as skill or wisdom.
1st we meet, via Nassim Taleb and "The Black Swan", the narrative fallacy
to describe how flawed stories of the past shape our views of the world and our expectations for the future.
We are always looking for the narrative, for the story that makes sense of what is happening. Sometimes this leads us astray. Sometimes there really is no narrative.
The ultimate test of an explanation is whether it would have made the event predictable in advance.
Our minds try to keep the story going in the past, as well as the present and the future. Again, we all overestimate how good we are at this.
The human mind does not deal well with nonevents.
Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated this “I-knew-it-all-along” effect, or hindsight bias.
Everyone's 20/20 hindsight leads to outcome bias.
Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.
Kahneman does not seem to be a fan of business books.
The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is. The illusion that one has understood the past feeds the further illusion that one can predict and control the future. These illusions are comforting. They reduce the anxiety that we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence. We all have a need for the reassuring message that actions have appropriate consequences, and that success will reward wisdom and courage. Many business books are tailor-made to satisfy this need.
Consumers have a hunger for a clear message about the determinants of success and failure in business, and they need stories that offer a sense of understanding, however illusory.
And even if you had perfect foreknowledge that a CEO has brilliant vision and extraordinary competence, you still would be unable to predict how the company will perform with much better accuracy than the flip of a coin.
Kahneman recounts how he came up with a rating methodology for army trainees. He felt good about it, it seemed logically consistent - and had pretty much 0 predictive value. But when the next group of trainees came through and he applied his methodology again, he felt just as good about it as before, even though he knew that statistically it had been proven worthless. He named this the illusion of validity.
He moves on to look at the predictive powers of stock market workers. As much as he is not a fan of business book authors, he is even less a fan of stock brokers.
my questions about the stock market have hardened into a larger puzzle: a major industry appears to be built largely on an illusion of skill.
on average, the most active traders had the poorest results, while the investors who traded the least earned the highest returns.
men acted on their useless ideas significantly more often than women, and that as a result women achieved better investment results than men.
the key question is whether the information about the firm is already incorporated in the price of its stock. Traders apparently lack the skill to answer this crucial question, but they appear to be ignorant of their ignorance.
Next up for skewering is the pundit class, be they political or economic. Here are the results of a study of 284 pundits, asked to pick 1 of 3 possibilities for several near-future political events.
The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes.
Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident.
We now meet Paul Meehl.
Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule.
I am totally onboard with these conclusions. I have been saying for decades how I would trust an AI to diagnose an illness more than a human physician. And I believe that studies in medical areas have shown that far better results are obtained via working from a checklist rather than relying on a physician's intuition. The TV show "House" always cracked me up. Every episode the brilliant clinician House went through 5-10 wrong diagnoses before finally stumbling onto the right one.
The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented.
In every case, the accuracy of experts was matched or exceeded by a simple algorithm.
Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better.
Even when using formulas, simpler is better. From the work of Robyn Dawes:
The dominant statistical practice in the social sciences is to assign weights to the different predictors by following an algorithm, called multiple regression, that is now built into conventional software. The logic of multiple regression is unassailable: it finds the optimal formula for putting together a weighted combination of the predictors. However, Dawes observed that the complex statistical algorithm adds little or no value.
formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
But, most people still prefer experts to algorithms. I think this will change, as we realize that our new robot overlords only want the best for us.
The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional intensity is readily translated into a moral preference.
I like that, after showing us the shortcomings of intuition, Kahneman gives us techniques for harnessing our intuition, by combining it with the hard data.
intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective information and disciplined scoring of separate traits.
So, can we ever trust the intuitions of a skilled expert? Kahneman says yes, depending on the skill.
The answer comes from the two basic conditions for acquiring a skill:
- an environment that is sufficiently regular to be predictable
- an opportunity to learn these regularities through prolonged practice
When both these conditions are satisfied, intuitions are likely to be skilled.
Whether professionals have a chance to develop intuitive expertise depends essentially on the quality and speed of feedback, as well as on sufficient opportunity to practice.
But we should also note the converse of this answer:
intuition cannot be trusted in the absence of stable regularities in the environment.
Kahneman recounts another personal anecdote introducing the outside view vs the inside view. The outside or objective view of, say, a project's success can differ significantly from the inside or subjective view of those involved.
Amos and I coined the term planning fallacy to describe plans and forecasts that
- are unrealistically close to best-case scenarios
- could be improved by consulting the statistics of similar cases
"Everybody else failed at this, but we're going to make it work! This time for sure!" But quite often we don't even bother to look at what everybody else has done.
people who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs.
The treatment for the planning fallacy is called reference class forecasting.
The optimistic bias, which Kahneman says "may well be the most significant of the cognitive biases", is part of what creates the planning fallacy. Chapter 24 is titled "The Engine of Capitalism" - probably everyone who creates a startup or small business does so with help from the optimistic bias.
The optimistic risk taking of entrepreneurs surely contributes to the economic dynamism of a capitalistic society, even if most risk takers end up disappointed.
So where do entrepreneurs go wrong? What cognitive biases undermine their efforts?
- We focus on our goal, anchor on our plan, and neglect relevant base rates, exposing ourselves to the planning fallacy.
- We focus on what we want to do and can do, neglecting the plans and skills of others.
- Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are therefore prone to an illusion of control.
- We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.
Also contributing is the above-average effect:
"90% of drivers believe they are better than average"
people tend to be overly optimistic about their relative standing on any activity in which they do moderately well.
Here's a good line about one of those skilled experts we've talked about which I hadn't heard before:
President Truman famously asked for a “one-armed economist” who would take a clear stand; he was sick and tired of economists who kept saying, “On the other hand…”
Some more examples of optimistic bias:
inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid.
“clinicians who were ‘completely certain’ of the diagnosis antemortem were wrong 40% of the time.”
Kahneman again provides us with a tool to use against these biases. In this case, it is the premortem, from Kahneman's "adversarial collaborator" Gary Klein:
“Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”
Part IV begins with being reintroduced to Richard Thaler's Humans vs Econs. The definition of an Econ, from Bruno Frey:
“The agent of economic theory is rational, selfish, and his tastes do not change.”
Meanwhile we all know that Humans are nothing like Econs.
To a psychologist, it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.
In this part we start to get overlap with Thaler's "Misbehaving" book blogged here. I will move quickly through the overlapping material.
The rational agent model is founded on expected utility theory. Kahneman and Tversky expanded on expected utility theory with their prospect theory, which occupied most of Chapter 5 of Thaler's "Misbehaving". The roots of these theories go back to Daniel Bernoulli in the 18th century.
Kahneman refreshes prospect theory with the ideas of this book.
it’s clear now that there are three cognitive features at the heart of prospect theory. They play an essential role in the evaluation of financial outcomes and are common to many automatic processes of perception, judgment, and emotion. They should be seen as operating characteristics of System 1.
- Evaluation is relative to a neutral reference point, which is sometimes referred to as an “adaptation level.” ...
- A principle of diminishing sensitivity applies to both sensory dimensions and the evaluation of changes of wealth. ...
- The third principle is loss aversion. ... The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5.
We revisit the endowment effect, which was chapter 2 of "Misbehaving". I characterized it there as "a bird in the hand is worth 2 in the bush".
We learn more about loss aversion. In "Misbehaving" this is called "risk aversion". This is one of those things that "survival of the fittest" evolution explains perfectly.
The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animal’s odds of living long enough to reproduce. The automatic operations of System 1 reflect this evolutionary history. No comparably rapid mechanism for recognizing good news has been detected. Of course, we and our animal cousins are quickly alerted to signs of opportunities to mate or to feed, and advertisers design billboards accordingly. Still, threats are privileged above opportunities, as they should be.
Kahneman agrees that this is the root of the impulse to conservatism.
Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.

Loss aversion shows up in law and business as findings that people are most incensed when companies "break informal contracts with workers or customers". I know I have personally been annoyed when a software product drops support for a feature I liked.
Loss aversion interacts with our poor probability judgement skills, and seems to be overcome only when we are desperate. We seem to be pretty good at understanding "nothing", and also "all", as in "all or nothing". But add a small possibility to "nothing", and we get the possibility effect, "which causes highly unlikely outcomes to be weighted disproportionately more than they 'deserve'". The same thing happens at the "all" end via the certainty effect.
Outcomes that are almost certain are given less weight than their probability justifies.

So the expectation principle, which goes back to Bernoulli, "by which values are weighted by their probability, is poor psychology."
Quantifying these effects, Kahneman and Tversky "carried out a study in which we measured the decision weights that explained people's preferences for gambles with modest monetary stakes." The resulting decision weights (Table 4 in the book) overweight low probabilities and underweight high ones.
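The shape of those decision weights can be reproduced with the probability weighting function from Tversky and Kahneman's 1992 paper. The functional form and γ ≈ 0.61 (their estimate for gains) are assumptions imported from that paper, not given here, but they generate the possibility and certainty effects just described:

```python
# Probability weighting: small probabilities are overweighted (possibility
# effect), near-certain ones underweighted (certainty effect).
# gamma = 0.61 is the 1992 Tversky-Kahneman estimate for gains (assumed).

def decision_weight(p, gamma=0.61):
    """Subjective decision weight for an objective probability p."""
    num = p ** gamma
    return num / (num + (1 - p) ** gamma) ** (1 / gamma)

for p in (0.01, 0.10, 0.50, 0.90, 0.99):
    print(f"p = {p:.0%}  ->  weight = {decision_weight(p):.1%}")
```

With these assumed parameters, a 1% chance gets a weight of roughly 5.5%, while a 99% chance gets a weight of only about 91%, consistent with the pattern in the book's Table 4.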
Next, we get the fourfold pattern, which crosses gains vs losses with high vs low probability:

| | Gains | Losses |
| --- | --- | --- |
| High probability (certainty effect) | Risk averse | Risk seeking |
| Low probability (possibility effect) | Risk seeking | Risk averse |
The top left cell was discussed by Bernoulli:
people are averse to risk when they consider prospects with a substantial chance to achieve a large gain. They are willing to accept less than the expected value of a gamble to lock in a sure gain.

The top right cell they found surprising, and a source of new insights.
... the bottom left cell explains why lotteries are popular.
... The bottom right cell is where insurance is bought.
we were just as risk seeking in the domain of losses as we were risk averse in the domain of gains.

Despite its evolutionary value, loss aversion now works against us.
Many unfortunate human situations unfold in the top right cell. This is where people who face very bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss. Risk taking of this kind often turns manageable failures into disasters.
[this is] how terrorism works and why it is so effective: it induces an availability cascade.

Another cognitive bias is denominator neglect. Given 1 winning chance in 10 or 8 in 100, most people will pick 8 in 100 - more chances to win! They ignore the denominator, which gives the 2nd choice an 8% probability of success vs 10% for the 1st choice.
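A quick sanity check of the arithmetic that System 1 skips over:

```python
# Denominator neglect: "8 chances in 100" feels better than "1 in 10"
# because System 1 counts winning chances and ignores the denominators.

choices = {"1 in 10": 1 / 10, "8 in 100": 8 / 100}
for label, prob in choices.items():
    print(f"{label}: {prob:.0%}")

# The intuitively appealing option is actually the worse bet:
assert choices["1 in 10"] > choices["8 in 100"]
```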
Here's another one that is almost hard to believe, but it has been borne out by studies. People will be more impacted by "10 out of 100" than by "10%". We can add percentages to probability and statistics as things that our mind, particularly System 1, does not handle well.
The discrepancy between probability and decision weight seen in Table 4 gives us 2 more cognitive biases, overestimation and overweighting:
- People overestimate the probabilities of unlikely events.
- People overweight unlikely events in their decisions.
The successful execution of a plan is specific and easy to imagine when one tries to forecast the outcome of a project. In contrast, the alternative of failure is diffuse, because there are innumerable ways for things to go wrong. Entrepreneurs and the investors who evaluate their prospects are prone both to overestimate their chances and to overweight their estimates.

The discussion of rare events concludes with a scary sentence, particularly when you think about how successful we will be in addressing the climate crisis.
Ralph Hertwig and Ido Erev note that “chances of rare events (such as the burst of housing bubbles) receive less impact than they deserve according to their objective probabilities.”
Obsessive concerns ..., vivid images ..., concrete representations ..., and explicit reminders ... all contribute to overweighting. And when there is no overweighting, there will be neglect. When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.

We are given more evidence of the mind's lack of skill at integrating or summing in a discussion of narrow framing (making lots of small decisions) vs broad framing (coming up with a comprehensive plan). Econs would use broad framing most of the time, but real Humans will almost always use narrow framing.
Kahneman gives us another tool to use against this bias: risk policies.
Decision makers who are prone to narrow framing construct a preference every time they face a risky choice. They would do better by having a risk policy that they routinely apply whenever a relevant problem arises.

Next we have a discussion of mental accounts - these were discussed in Chapter 11 of "Misbehaving".
mental accounts are a form of narrow framing; they keep things under control and manageable by a finite mind.

Our mental accounts, where we track the value and cost of various aspects of our life, are of course prone to cognitive biases.
finance research has documented a massive preference for selling winners rather than losers — a bias that has been given an opaque label: the disposition effect ... - an instance of narrow framing.
So you sell a winner, you feel good about yourself - I'm a winner! But when you sell a loser, you are admitting a failure and taking a realized loss right then and there. Rather than doing that, most people will keep the loser - an instance of the sunk-cost fallacy, discussed in Chapter 8 of "Misbehaving". Speaking of a corporate manager who refuses to give up on a losing project, Kahneman states:
In the presence of sunk costs, the manager’s incentives are misaligned with the objectives of the firm and its shareholders, a familiar type of what is known as the agency problem.

On a different note, Kahneman discusses the emotion of regret:
The fear of regret is a factor in many of the decisions that people make ...

Kahneman references the claims of Daniel Gilbert on regret - more advice for us in dealing with our biases.
you should not put too much weight on regret; even if you have some, it will hurt less than you now think.

2 more biases. The 1st deals with "sins of commission" vs "sins of omission". Logically, if the (bad) outcome is the same, what difference does it make if it came about through action rather than inaction? Maybe it gets back to the "first do no harm" principle of medicine, which is yet another flavor of loss aversion.
people expect to have stronger emotional reactions (including regret) to an outcome that is produced by action than to the same outcome when it is produced by inaction.

The 2nd bias is the "taboo tradeoff against accepting any increase in risk". Particularly where children are involved, we don't do a good job evaluating how to address risk in the face of limited resources. We refuse any increase in risk.
Preference reversals are discussed in Chapter 6 of "Misbehaving". People will reverse their choices illogically depending on how the options are framed. This frequently shows up when options are weighed separately rather than together. Christopher Hsee addressed this with his evaluability hypothesis. Kahneman reminds us again that
rationality is generally served by broader and more comprehensive frames.
Broader frames and inclusive accounts generally lead to more rational decisions.
Part V is titled "Two Selves". We are introduced to another dichotomy to go along with System 1/System 2 and Econs/Humans: experienced utility and the experiencing self vs decision utility and the remembering self.
The experiencing self is the one that answers the question: “Does it hurt now?” The remembering self is the one that answers the question: “How was it, on the whole?”

Kahneman describes an experiment that produced results that seem to totally fly in the face of common sense. Subjects underwent 2 slightly painful experiences: having a hand placed in cold water for 60 seconds; and having a hand placed in cold water for 60 seconds, followed by slightly warming the water for 30 seconds. When subjects were then asked which experience they would choose for a repeat performance, they chose the 2nd! They chose more pain rather than less! This shows the dominance of the remembering self, and is yet another instance of our minds' inability to integrate or sum: "System 1 represents sets by averages, norms, and prototypes, not by sums." We get 2 more components of our mental makeup:
Confusing experience with the memory of it is a compelling cognitive illusion
What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self.
Interestingly, these 2 components also work the same for life as a whole.
- Peak-end rule: The global retrospective rating was well predicted by the average of the level of pain reported at the worst moment of the experience and at its end.
- Duration neglect: The duration of the procedure had no effect whatsoever on the ratings of total pain.
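The two rules together can be sketched against the cold-water experiment described above. The pain ratings below are invented for illustration (the book reports subjects' ratings, not these values):

```python
# Peak-end rule with duration neglect: remembered pain ~ average of the
# worst moment and the final moment; total duration is ignored.

def remembered_pain(ratings):
    """Retrospective rating per the peak-end rule."""
    return (max(ratings) + ratings[-1]) / 2

short_trial = [8, 8, 8]        # 60s of cold water (illustrative ratings)
long_trial = [8, 8, 8, 5, 5]   # same 60s plus 30s of slightly warmer water

# The long trial contains strictly more total pain, yet is remembered as
# less painful - so subjects prefer to repeat it.
print(remembered_pain(short_trial), remembered_pain(long_trial))
```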
A story is about significant events and memorable moments, not about time passing. Duration neglect is normal in a story, and the ending often defines its character.

I'm not sure I agree with Kahneman's last conclusion on this subject. These 2 selves could maybe be characterized as the Zen self vs the non-Zen self. Maybe a Zen master who can really live totally in the here and now contradicts this conclusion. But, I have no knowledge that such a Zen master exists.
In intuitive evaluation of entire lives as well as brief episodes, peaks and ends matter but duration does not.
Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.

Talking about one's experience of life as a whole, Kahneman introduces us to Mihaly Csikszentmihalyi and his concept of flow - a state that some artists experience in their creative moments and that many other people achieve when enthralled by a film, a book, or a crossword puzzle. Hah, I knew I recognized that name; I mentioned having read his book "Creativity" in my 3rd blog post, May 6, 2003. Kahneman mentions the studies on overall well-being vs monetary status.
The satiation level beyond which experienced well-being no longer increases was a household income of about $75,000 in high-cost areas (it could be less in areas where the cost of living is lower). The average increase of experienced well-being associated with incomes beyond that level was precisely zero.

My wife doesn't read this blog, so I think it's safe to include this here.
the decision to get married reflects, for many people, a massive error of affective forecasting.

Given what we have learned about our minds' inability to sum over time, it is not surprising that we are not very good at determining what will make us happy, past, present, or future. When people are asked how life is going now, the mood heuristic is ready to jump in and report whatever their current mood is.
Daniel Gilbert and Timothy Wilson introduced the word miswanting to describe bad choices that arise from errors of affective forecasting.
We now get our final cognitive bias:
the focusing illusion, which can be described in a single sentence:

Nothing in life is as important as you think it is when you are thinking about it.

The essence of the focusing illusion is WYSIATI, giving too much weight to the climate [of CA vs the midwest, in an experiment], too little to all the other determinants of well-being.

The mistake that people make in the focusing illusion involves attention to selected moments and neglect of what happens at other times. The mind is good with stories, but it does not appear to be well designed for the processing of time.

One final excerpt that I thought was interesting, on how we acclimate to novelty:

over time, with few exceptions, attention is withdrawn from a new situation as it becomes more familiar. The main exceptions are chronic pain, constant exposure to loud noise, and severe depression.
This was a very enjoyable and informative book. System 1, with its heuristic-based intuitions providing us with the snap judgements that help keep us alive, is a marvel of evolution. Of course, every part of life is a "marvel of evolution". But, we now have a laundry list of things System 1 is NOT good at: percentages, probability, statistics, integration, summation, broad framing.
With regard to economics, we again get the message that "Misbehaving" delivered: that Econs are a complete and utter fiction. I think in "Misbehaving" a statement is made to the effect of "Yeah, there are no Econs, but without them we couldn't have developed economics". So Econs are a simplifying assumption. I continue to have the nagging thought that it is too big a simplification, such that I question the entire framework of economics.
One of the "killer apps" for economics is predicting boom and bust cycles. I was around for the dotcom bubble in the late 90s, the housing bubble of the mid 00s, and the Web 2.0 (social media) bubble, which is still active. In all 3 of these, I observed the same characteristics. Here are examples of people's (including my) thinking during these bubbles:
- This is a sure thing.
- Everybody else is making big $$$.
- I'm going to miss out.
- I better pile on.
That is herd mentality in action. Another place where I think herd mentality could be the secret sauce is predicting demand. Fashion, fads, and "keeping up with the Joneses", expertly manipulated by the marketing machine, all incorporate herd behavior.
It's funny, my hero (before he became such a Hillary shill) Paul Krugman says he wanted to work on psychohistory like Hari Seldon in Asimov's "Foundation" novels, and economics is what is closest. But psychohistory was all about predicting herd behavior. So where is it really, in modern economics? I guess I'll have to check out that paper.
A final personal note: I find it interesting that I used to do a lot more reading in cognitive science, where I found cognitive illusions particularly fascinating. I start studying (slowly) economics, and find, in behavioral economics, the integration of those concepts. In another life behavioral economics could be a good field for me.