Wednesday, June 15, 2016

Jubilee

I really like what John Oliver recently did.


John Oliver's show set up a debt-collection company and purchased $15M of medical debt for $60K - 0.4¢ buys $1 of debt, less than half a penny on the dollar. They then transferred the debt to RIP Medical Debt for forgiveness. I donated $100 to RIP Medical Debt - at 250x leverage, that will forgive $25K in debt. Nice!

This struck me as close to being a jubilee! Don't know why, I love that word and concept. Every 50 years, you have a special year of remission of sins and universal pardon. In the Book of Leviticus, a Jubilee year (Hebrew: יובל‎ yūḇāl) is mentioned to occur every fiftieth year, during which slaves and prisoners would be freed, debts would be forgiven and the mercies of God would be particularly manifest.
Ah yes, manifest mercies. Nice! I tweet #jubilee whenever I get a chance - definitely not trending, oops.

So is the concept of jubilee part of a post-scarcity utopia (PSU)? No, it's not. It is a palliative for our current system's financialization of the working class, as described by Paul Mason in his recent book "Postcapitalism", blogged here. Late in a business cycle, capitalism is mostly extractive, as opposed to early in the cycle, when it is mostly creative. A capitalist who might once have used capital to build a factory to produce something, creating jobs, now opens a bank or, better yet, a payday loan company, and through initiation fees, late fees, all kinds of fees, and interest charged to their users, can get a better ROI on their $ than from the factory route. Sad!

Interestingly, the Hebrew version of jubilee was originally 7 x 7 = 49 years; did that get rounded up to 50?!?

Discussing jubilee with The Google recently, I came across a reference to the Rolling Jubilee, an offshoot of Occupy Wall Street. Here's an article on them. They are focusing on student loan debt.

Then my daughter Erica, who worked with Occupy, pointed me at her tweet:

Aw, sad that @LastWeekTonight didn't acknowledge @StrikeDebt's influence and help on the debt buying episode.
Strike Debt, another Occupy offshoot working on debt forgiveness (or debt resistance), apparently walked the Last Week Tonight staff through the whole process of creating the debt-collection company, etc. But at the last minute, HBO decided that it would be ... too partisan and/or divisive to give the props to Occupy? Who exactly would be offended by mentioning Occupy? This seems like a totally unnecessary and chickenshit decision by HBO.

RIP Medical Debt focused on medical debt because there would be no question of "does the person deserve the help?" You know, worrying about that question wastes so many $$$. Studies show that social programs have extremely low fraud rates, 1-2%. But more people would take issue with, say, forgiving gambling debts. Rolling Jubilee's focus on student loans probably strikes most people as laudable. But credit card debt, other?

All these issues would be so much easier to address if we got rid of all the moral judgements. Yes, the bible sez "By the sweat of your brow you will eat your food until you return to the ground". Like a lot of things in the bible, maybe a good idea 4000 years ago, completely inapplicable now. What was the point of all that automation otherwise?

The US is the only country in the developed world where anyone has medical debt, so with a decent healthcare system, there would be no medical debt. The same is true for student loan debt. So get rid of those 2, work on everything else. Don't worry about what kind of debt it is - don't needlessly complicate the system.

What are we talking here, total? US consumer debt is $12T. Nice, here's a great source of data! Only $455 billion is seriously delinquent. Divided by 250, that's $1.8B - chump change for Bill Gates & his $B buddies.
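The arithmetic is worth a quick sanity check; here's a minimal sketch in Python (using only the figures quoted above):

```python
# Jubilee arithmetic, using the figures quoted in this post.
face_value = 15_000_000      # $15M of medical debt from the Last Week Tonight buy
purchase_price = 60_000      # ...bought for $60K

leverage = face_value / purchase_price               # => 250x
print(f"{purchase_price / face_value * 100:.1f} cents buys $1 of debt")  # 0.4
print(f"A $100 donation forgives ${100 * leverage:,.0f}")                # $25,000

seriously_delinquent = 455e9                         # seriously delinquent US consumer debt
print(f"Forgiving all of it: ${seriously_delinquent / leverage / 1e9:.2f}B")  # ~$1.82B
```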

One downside of this is that, as debt gets bought up, supply and demand will make its cost rise. Well, maybe by then, Elizabeth Warren and the Consumer Finance Protection Bureau will have reined in the worst of the payday loan company abuses.

So, a jubilee is a palliative. I don't care, I still love a jubilee!

Tuesday, June 14, 2016

Post-Scarcity Utopia

What do I mean by post-scarcity utopia (PSU)? It is tagged in this blog as "economy of plenty". The recent post on The Plan referenced universal basic income as one part of a PSU. What other components are there?
  • Universal health care, with $0 copay, deductible, anything.
  • Universal free education, to whatever level is attainable by the student, with particular emphasis on early childhood education. This should include trades and crafts.
  • Universal access to capital for all forms of entrepreneurs. Kickstarter et al somewhat do this now. But in a PSU, the system would assure that ideas with enough popular support always got funding.
With regard to defining PSU, I think of the definition of the Good Life, based on Basic Goods, from "How Much Is Enough? Money and the Good Life", by Robert and Edward Skidelsky (British, an economist and a philosopher), blogged here. They define the following 4 requirements for a Basic Good: it must be universal; final; sui generis (unique); and indispensable. They list 7 Basic Goods: health; security; respect; personality; harmony with nature; friendship; and leisure. See the post for more info.

I was wondering if novelty was a Basic Good. Maybe it's not universal? No, we are all curious monkeys. Also, where does being a Citizen Scientist fit? Both of these probably span personality, harmony with nature, and leisure.

Going back to The Plan. The Fed is doing basically nothing - we are in a liquidity trap, monetary policy can do nothing. The worst do-nothing Congress in 100 years is not going to use fiscal stimulus. So the 2 tools of Keynesian economics are off the table. Time for something new!

Keynes talked of the end of capitalism. Capitalism would have done its job. We would have all the capital we need to accomplish world prosperity. I am wondering the following:

Does "we have all the capital we need" mean that we can create enough (helicopter) money to fix the world's inequality problems without kicking off inflation? That the current $250T of capital in the world will dwarf the money we add?
If you use this approach, no tax reform! Let the poor catch up a little with new money, while seeing if The Fed can meet its 2% inflation goal.

Thursday, June 09, 2016

The Plan

After I retired close to 4 years ago, I decided to study (slowly) economics. I probably should have taken an online course like "Introduction to Macroeconomics", rather than reading Adam Smith's "The Wealth of Nations", Keynes's "General Theory", Piketty's "Capital in the 21st Century", etc. I downloaded the MIT OCW open-source courseware & textbook and never did anything with it. I bought a used hardcopy of Samuelson, ditto. I think I have a basic understanding of most of the concepts, but no skills with any of the tools of economists.

The reason I decided to study economics was to figure out, how can we live in a post-scarcity utopia? How can we live in an Iain M. Banks Culture novel? Goddam it, I want my grandchildren (currently 2.7), and everybody's grandchildren, to live in a bright shiny future, not the past, and not Hunger Games!

My 40 years as a software developer/architect/manager/executive/geek left me with the certainty, deep down in my gut, that money === software. So, we just have to figure out how to tweak the system parameters to get us where we want to be. Yes, the more compulsive/sociopathic of us want to rack up orders-of-magnitude larger scores than most of us, but, no worries, just pay your taxes, it's all good!

I have not found where or how to do economic modeling; it's still a WIP.

Meanwhile, nevertheless, here is The Plan to transition the world to a post-scarcity utopia, postcapitalism, a Star Trek economy. Ha ha, or more accurately, to Universal Basic Income (UBI), generally considered a good 1st step.

  1. The Fed has been way below the 2% inflation target for years. The 2008 recovery programs, including QE, increased the money supply by 3x. Conservative economists have been howling "Hyperinflation tomorrow!" ever since - but meanwhile, back in reality, deflation is the real problem.
  2. We are currently in a demand subrecession (not quite a recession). Of course, according to conservative economists, demand recessions are impossible because of Say's Law. My opinion is that the root cause is the gouging of middle-class wages and the middle class in general, beginning with Reagan/Thatcher in 1980.
  3. So, let's increase demand via helicopter money. Helicopter money has lately been appearing in the economics blogs at least weekly. The Fed increases the money supply again, but this time, instead of giving the money to banks, who largely just sat on it, we give it to each and every US citizen, regardless of current wealth.
  4. When Bernie proposed free college for all, people griped, "I don't want to be paying for {random oligarch}'s child's college". It's a dumb argument. Applying the program to the 1% increases the cost of the program by 1%. Creating a bureaucracy to means test and administer eligibility concerns increases the cost by who knows how many %?
  5. The helicopter money payment is $1-3K/month. Say we start at $2K/month. The Fed then monitors the inflation rate, with a target of 2% +/- 0.1%, i.e. 1.9-2.1%. If inflation hits 2.1%, then decrease the helicopter money by 1/4, 1/2, 3/4 per month, whatever algorithm is deemed best (see the sketch after this list).
  6. If inflation stays below 1.9%, increase the helicopter money by an appropriate algorithm.
  7. As that stabilizes, add a 2nd system goal besides 2% inflation: 3% +/- 0.1% growth. That would mean 1% net real growth after the 2% inflation. Maybe, as we get a couple decades down the road, we could look at tweaking that upwards. But meanwhile, let's control growth, to try to let the planet cope with the climate crisis, hopefully with our help.
  8. So if inflation is < 2.1% and GDP growth is < 2.9% the helicopter money goes up, etc.
  9. Assuming the system stabilizes, you then rename it from "helicopter money" to "Universal Basic Income (UBI)". (So, UBI == open-ended helicopter money?)
  10. OH NO HOW DO WE PAY FOR IT??? Well, if you feel compelled to pay for it, then, let's return to that decade most loved by conservatives - the 50s - and restore top income tax rates of 92%, and have corporations paying 30% of the nation's taxes instead of 6%.
  11. But, the totally more enlightened & elegant solution is, don't pay for it! Just print the money! Call it seigniorage if you need to, but you don't have to call it anything. The $ is the de facto currency of the world (for maybe another decade or so); if we charge a 2% or 0.2% or 0.02% seigniorage fee on every $ we create, who in the world is going to complain?
  12. Didn't the latest budget plan (no idea where it is in the morass that is our do-nothing Congress) include the Fed creating money from nothing to fund infrastructure? So the precedent may have already been set.
  13. Please correct me if I am wrong: I believe the only problem anyone talks about with helicopter money is inflation. So if we build a system with a helicopter money / inflation feedback loop, how can inflation hurt us?
  14. Problems. The greatest tragedy in economic thinking of the 20th century was when Milton Friedman's prediction of stagflation following the Arab Oil Embargo of 1973-74 gave his theories credence, leading to Reagan/Thatcher, supply-side economics, austerity, and the rest of the clueless conservative economic crap.
  15. The system sustained a huge 1-time shock: oil prices went up 3x due to the embargo. The inflation snake swallowed a watermelon and did not deal with it well.
  16. So I guess the point re problems is, if some crazy weird crap happens, all predictions/bets are off. But when is that not true?
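Since The Plan is basically a feedback controller, here is a minimal sketch of the loop from steps 5-8 (Python; the $250 adjustment step and the once-per-tick cadence are my own placeholder assumptions - the exact algorithm is deliberately left open above):

```python
def adjust_payment(payment, inflation, growth=None, step=250):
    """One tick of The Plan's helicopter money / inflation feedback loop.

    payment   -- current helicopter money, $/month per citizen
    inflation -- latest annual inflation rate (0.021 == 2.1%)
    growth    -- latest annual GDP growth, or None before step 7 kicks in
    step      -- $ adjustment per tick (a placeholder assumption)
    """
    if inflation > 0.021:
        # Above the 2% +/- 0.1% target band: throttle back (step 5).
        payment = max(0, payment - step)
    elif inflation < 0.019 and (growth is None or growth < 0.029):
        # Inflation low, growth under the 3% +/- 0.1% cap: open the taps (steps 6-8).
        payment += step
    return payment

payment = 2000  # start at $2K/month, per step 5
for inflation, growth in [(0.012, 0.018), (0.017, 0.024), (0.022, 0.031)]:
    payment = adjust_payment(payment, inflation, growth)
    print(f"inflation {inflation:.1%}, growth {growth:.1%} -> ${payment}/month")
```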
Note, we are retasking the Fed from its normal jobs of controlling inflation & producing full employment. They have raised interest rates 1 time in the last 8 years! So their current tasks seem to be no-ops. I think this is a sign of being in the liquidity trap. And, as our dysfunctional Congress will never deploy the 2nd tool of Keynesian economics, fiscal stimulus, let's give the Fed something to do to get us out of secular stagnation!

Where to implement The Plan? There are 3 requirements for a country to implement The Plan:

  1. Must be in a demand (sub)recession.
  2. Must be in the liquidity trap.
  3. Must have its own currency.
The country that screams for this is Japan. They have been in the liquidity trap for over 2 decades. Abenomics appears to be a failure. Try The Plan! My concern, tho, is that I'm not sure that the liquidity trap is Japan's real problem. Their horrible demographics, coupled with their aversion to immigrants, might be their real problem. The question is, can Japan's condition be characterized as a demand (sub)recession? Do they meet requirement #1?

Meanwhile, the EuroZone is pretty much screwed. Until they add EuroBonds and EuroTaxes to the Euro and create a complete Euro-based economic system, I don't see how they can do much. Meanwhile, Greece in particular continues to get screwed.

But, the EU countries that are not in the EuroZone (not on the euro) potentially meet the requirements. 9 countries (Bulgaria, Croatia, Czech Republic, Denmark, Hungary, Poland, Romania, Sweden, and the United Kingdom) are EU members but do not use the euro. The more recent EU entrants like Croatia have been totally smart and kept their own currencies (kunas!). They've joined the EU but not the EuroZone. At this point, joining the EuroZone is like inviting Germany to have its way with you - see, "Greece". :-(

Switzerland is not in the EU or the EEA (European Economic Area) & still has its own currency. Switzerland voted down UBI 77-23 on June 5. Maybe their economic slump is not as deep as others'? A country as prosperous and egalitarian as Switzerland is not the best 1st place for UBI.

Australia also meets the requirements.

The US meets the requirements. How can we make this happen?

Any country that meets the 3 requirements above and is not the US - owner of the world's default currency - has an additional implementation detail added to its system: the helicopter money/inflation feedback loop must also include the valuation of its currency against the standard currencies - $, euro, yen, (yuan).

I would love to see The Plan tried in African, Asian, and South American countries. But, I have no idea where they are on the above 3+ requirements. Basic income trials in a few African countries have been very successful.

Wow, easy-peasy, yes?

On a personal note, my wife & I visited Croatia 2x, while our youngest daughter & her husband were living and teaching in Zagreb for 3 years. My grandson was born there. We visited 4-5 other places in Croatia, it is a beautiful country! And that comes from someone who lives in the most beautiful state in the US, KY!

And, I will note that, in what may be a glimmer of professionalism, I sent this post off to my friend since college, Charles G. St. Pierre, aka Greg, who blogs as Another Accidental Economist, for review. Thanks, Greg, for your help.

Thursday, June 02, 2016

Thinking, Fast And Slow

"Thinking, Fast and Slow" by Daniel Kahneman came out in 2011. It summarizes the work he and his longtime, now deceased collaborator Amos Tversky have done on cognitive biases and decision theory. The book is dedicated "In memory of Amos Tversky".

We 1st met these 2 scholars in the book "Misbehaving" by Richard Thaler, with whom they worked for a while. I blogged about that book November 15, 2015. I would recommend reading that post first, as in this post I give short shrift to the topics already covered in the earlier post. The 2 works together give a big picture of behavioral economics, with "Misbehaving" coming in from the economics side, and "Thinking, Fast And Slow" coming in from the psychological side.

The book is 512 pages long. It has an Introduction, a Conclusion, and 38 chapters in 5 parts. It also contains reprints of 2 of their groundbreaking papers:

  1. "Judgement Under Uncertainty: Heuristics and Biases"
  2. "Choices, Values, and Frames"
The 5 parts are:
  • Part I. Two Systems - 9 chapters
  • Part II. Heuristics and Biases - 9 chapters
  • Part III. Overconfidence - 6 chapters
  • Part IV. Choices - 10 chapters
  • Part V. Two Selves - 4 (short) chapters
The book is easy to read. I liked how every chapter ends with a section "Speaking Of {the title/topic of the chapter}", which contains 4-6 simple declarative statements that summarize the ideas of the chapter. There are also many descriptions of simple experiments and exercises that quickly show you what he is talking about.

Kahneman states his purpose in the Introduction:

improve the ability to identify and understand errors of judgment and choice, in others and eventually in ourselves, by providing a richer and more precise language to discuss them.

...

My main aim here is to present a view of how the mind works that draws on recent developments in cognitive and social psychology. One of the more important developments is that we now understand the marvels as well as the flaws of intuitive thought.

Some previews of what we will be getting into:
This is the essence of intuitive heuristics: when faced with a difficult question, we often answer an easier one instead, usually without noticing the substitution.

...

our excessive confidence in what we believe we know, and our apparent inability to acknowledge the full extent of our ignorance and the uncertainty of the world we live in. We are prone to overestimate how much we understand about the world and to underestimate the role of chance in events. Overconfidence is fed by the illusory certainty of hindsight.

...

the unfortunate tendency to treat problems in isolation, and with framing effects, where decisions are shaped by inconsequential features of choice problems.

This next statement seems a little glib, maybe, but it makes sense.
The premise of this book is that it is easier to recognize other people's mistakes than our own.


Part I is spent exploring the 2 types of thinking named in the title. "Thinking, Fast" he refers to as "System 1", "Thinking, Slow" he refers to as "System 2".

  • System 1 operates automatically and quickly, with little or no effort and no sense of voluntary control.
  • System 2 allocates attention to the effortful mental activities that demand it, including complex computations. The operations of System 2 are often associated with the subjective experience of agency, choice, and concentration.
System 1 is our fabulous subconscious intuition, which generally operates in a fraction of a second. But, to operate that quickly, it makes simplifying assumptions and uses heuristics which apparently had evolutionary value, but which are subject to numerous sources of error.

System 2 is our conscious mind, with its powers of reasoning, calculation, and concentration. It also is prone to errors, perhaps the worst of which is that it is lazy, and doesn't question System 1's snap judgements often enough.

One way you can tell when System 2 is engaged is by measuring the dilation of the pupils. When you concentrate, your pupils dilate. Who knew?

the psychologist Eckhard Hess described the pupil of the eye as a window to the soul.

...

[Hess's article] ends with two striking pictures of the same good-looking woman, who somehow appears much more attractive in one than in the other. There is only one difference: the pupils of the eyes appear dilated in the attractive picture and constricted in the other.

System 2 has limited capabilities: apparently you can't use System 2 and walk at the same time:
While walking comfortably with a friend, ask him to compute 23 × 78 in his head, and to do so immediately. He will almost certainly stop in his tracks.
Another thing about the type of activities performed by System 2 is that using willpower engages it and burns mental energy.
both self-control and cognitive effort are forms of mental work. Several psychological studies have shown that people who are simultaneously challenged by a demanding cognitive task and by a temptation are more likely to yield to the temptation.

...

The phenomenon has been named ego depletion.

One of System 1's skills is association, as in free association. Association is subject to the priming effect - if you have been talking about eating, you will pick out words for food rather than others; if you have been shown words relating to old age, you will complete a task more slowly, like an old person. The 2nd instance is known as the ideomotor effect.

Reciprocal priming effects also exist. Plaster a smile on your face, you will be happier. Nod your head while listening to a message, you will tend to agree with it; shake your head, and the opposite occurs.

You can see why the common admonition to “act calm and kind regardless of how you feel” is very good advice: you are likely to be rewarded by actually feeling calm and kind.
2 more interesting examples of priming. The 1st example involves money, which is very important in our culture, but perhaps not so much so in other societies.
money primes individualism: a reluctance to be involved with others, to depend on others, or to accept demands from others.
The 2nd example is that after being asked to do something bad, people are primed to want to clean themselves.
Feeling that one’s soul is stained appears to trigger a desire to cleanse one’s body, an impulse that has been dubbed the “Lady Macbeth effect.”
Kahneman talks about cognitive ease vs cognitive strain. System 1 is usually very happy to take the easy path. Some of the ways to create cognitive ease make you almost embarrassed for our minds' foolishness. Given 2 written answers, people who don't know the correct one will choose an answer in bold font over one in normal font.

Manipulating System 1 by creating cognitive ease can create truth illusions. The role that "repeated experience" plays in creating cognitive ease is known as the exposure effect.

the mere exposure effect is actually stronger for stimuli that the individual never consciously sees.
The familiar makes us comfortable; the unusual makes us wary - the root of the urge to conservatism.
Survival prospects are poor for an animal that is not suspicious of novelty.
In summary:
good mood, intuition, creativity, gullibility, and increased reliance on System 1 form a cluster. At the other pole, sadness, vigilance, suspicion, an analytic approach, and increased effort also go together.
Further exploring System 1, Kahneman posits that it makes heavy use of norms.
We have norms for a vast number of categories, and these norms provide the background for the immediate detection of anomalies such as pregnant men and tattooed aristocrats.
System 1 also likes to believe that things happen for a reason. It creates "the perception of intentional causality". It is also "a machine for jumping to conclusions".
When uncertain, System 1 bets on an answer, and the bets are guided by experience. The rules of the betting are intelligent: recent events and the current context have the most weight in determining an interpretation.
System 1 is gullible and biased to go with the flow, to believe and confirm; System 2 is in charge of doubting and unbelieving, but it is lazy. And "when System 2 is otherwise engaged, we will believe almost anything". Interesting work by psychologist Daniel Gilbert:
Gilbert proposed that understanding a statement must begin with an attempt to believe it: you must first know what the idea would mean if it were true. Only then can you decide whether or not to unbelieve it.
Another flaw of System 1 is confirmation bias.
The confirmatory bias of System 1 favors uncritical acceptance of suggestions and exaggeration of the likelihood of extreme and improbable events.
Then there is the halo effect.
The tendency to like (or dislike) everything about a person — including things you have not observed — is known as the halo effect.

...

The procedure I adopted to tame the halo effect conforms to a general principle: decorrelate error!

One very useful example of a technique to "decorrelate error" deals with conducting a meeting to discuss a topic. Kahneman recommends having everyone write down their thoughts on the subject before the meeting begins.
The standard practice of open discussion gives too much weight to the opinions of those who speak early and assertively, causing others to line up behind them.
Finally, Kahneman introduces a concept that he returns to many times in the book: WYSIATI.
Jumping to conclusions on the basis of limited evidence is so important to an understanding of intuitive thinking, and comes up so often in this book, that I will use a cumbersome abbreviation for it: WYSIATI, which stands for what you see is all there is.
In other words, System 1 will totally react based on the only information available, even when that information has problems. This leads to several biases which are discussed in more detail later in the book: overconfidence, framing effects, and base-rate neglect.

In the chapter "How Judgments Happen", we learn that many basic assessments have been hardwired by evolution into System 1. One example of a judgement heuristic is how we make snap judgements of people based on their faces.

In about 70% of the races for senator, congressman, and governor, the election winner was the candidate whose face had earned a higher rating of competence.

...

the effect of facial competence on voting is about three times larger for information-poor and TV-prone voters than for others who are better informed and watch less television.

One place where System 1 really falls down is in doing integration or addition.
Because System 1 represents categories by a prototype or a set of typical exemplars, it deals well with averages but poorly with sums. The size of the category, the number of instances it contains, tends to be ignored in judgments of what I will call sum-like variables.
System 1 also has a way of making WYSIATI worse:
we often compute much more than we want or need. I call this excess computation the mental shotgun.
Yet another flaw of System 1:
If a satisfactory answer to a hard question is not found quickly, System 1 will find a related question that is easier and will answer it. I call the operation of answering one question in place of another substitution.
Hah, here's a fun fact. "Heuristic" comes from the same root as "eureka". Heuristics are what substitute the easy questions for the hard ones. Examples given are the mood heuristic for happiness, and the affect heuristic.


In Part II we look at more heuristics and learn how they become biases. For the rest of the book, one thing that becomes painfully apparent is that the human mind, particularly System 1, is incredibly bad at probability and statistics.

System 1 is inept when faced with “merely statistical” facts, which change the probability of outcomes but do not cause them to happen.
One example of how bad our intuitive judgements of statistics are comes from scientists themselves. Kahneman and Tversky found it was not at all unusual for scientists to use sample sizes for studies that were way too small. Intuitively, they seemed big enough, but, if you did the math, you found that random chance could easily overwhelm the statistical conclusions that were being made.
Researchers who pick too small a sample leave themselves at the mercy of sampling luck.
Kahneman refers to this as the law of small numbers.
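The law of small numbers is easy to see in a quick simulation (Python; the sample sizes and the 40-60% band are my own illustrative choices):

```python
import random

random.seed(1)

def extreme_rate(n, trials=10_000):
    """Fraction of size-n samples of a fair coin landing outside 40-60% heads."""
    extreme = 0
    for _ in range(trials):
        heads = sum(random.random() < 0.5 for _ in range(n))
        if heads / n <= 0.4 or heads / n >= 0.6:
            extreme += 1
    return extreme / trials

# The true rate is always 50%, yet small samples routinely look lopsided.
for n in (10, 50, 500):
    print(f"n={n:3d}: {extreme_rate(n):5.1%} of samples fall outside 40-60% heads")
# Roughly 75% for n=10, ~20% for n=50, ~0% for n=500.
```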

[I have had personal experience with how little intuition we have about probability and randomness. 25 years ago, I wrote a complex system to assign candidates to examiners for an oral examination. My customer insisted that the assignment should be random. But random assignment produced clumps of good and bad candidates that would throw the examiners off. I finally convinced them that what they really wanted was balanced assignment, which did indeed greatly improve examiner performance.]

Random processes produce many sequences that convince people that the process is not random after all.

...

We are far too willing to reject the belief that much of what we see in life is random.

Kahneman expands on the law of small numbers.
  • The exaggerated faith in small samples is only one example of a more general illusion — we pay more attention to the content of messages than to information about their reliability, and as a result end up with a view of the world around us that is simpler and more coherent than the data justify. Jumping to conclusions is a safer sport in the world of our imagination than it is in reality.
  • Statistics produce many observations that appear to beg for causal explanations but do not lend themselves to such explanations. Many facts of the world are due to chance, including accidents of sampling. Causal explanations of chance events are inevitably wrong.
Another almost embarrassing bias is the anchoring effect.
It occurs when people consider a particular value for an unknown quantity before estimating that quantity.
It is a flavor of a priming effect.
If you are asked whether Gandhi was more than 114 years old when he died you will end up with a much higher estimate of his age at death than you would if the anchoring question referred to death at 35.
Kahneman for the 1st time gives advice on how to combat this bias: get System 2 activated by actively pursuing arguments against the anchor value.

Next we have the availability heuristic.

We defined the availability heuristic as the process of judging frequency by “the ease with which instances come to mind.”

...

The availability heuristic, like other heuristics of judgment, substitutes one question for another: you wish to estimate the size of a category or the frequency of an event, but you report an impression of the ease with which instances come to mind. Substitution of questions inevitably produces systematic errors.

We now learn more about the affect heuristic.
people make judgments and decisions by consulting their emotions: Do I like it? Do I hate it? How strongly do I feel about it?
The availability heuristic combines with the affect heuristic to produce an availability cascade.
the importance of an idea is often judged by the fluency (and emotional charge) with which that idea comes to mind.
I think we here in the US are now experiencing a potentially disastrous availability cascade: the presidential campaign of Donald Trump. The 1000s of hours of press coverage he has received have everyone's minds primed for more Trump news.

Another place where our ineptitude with probability shows is in our assessment of risks, referred to as probability neglect.

a basic limitation in the ability of our mind to deal with small risks: we either ignore them altogether or give them far too much weight — nothing in between.
A particular flavor of our poor statistical intuition is base-rate neglect. In an experiment in the early 70s, Kahneman and Tversky found that, if asked what field an individual with a stereotypical geek description was likely to be studying, subjects would base their answer solely on the description, and ignore the statistics of what majors are most common. The "base-rate" percentage that should be the starting point for an estimate of likelihood was "neglected" completely. The representativeness heuristic was used instead - the geek description was just too good a match for System 1 to ignore. If the subjects were asked to frown, which engages System 2, they "did show some sensitivity to the base rates."
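Base-rate neglect is easiest to see with Bayes' rule. Here's a hypothetical sketch (Python; all the numbers are invented for illustration, not Kahneman and Tversky's actual data):

```python
# Bayes' rule for 'given this geeky description, what is the student's major?'
# All numbers are invented assumptions for illustration.
p_cs   = 0.03   # base rate: 3% of students major in computer science
p_hum  = 0.20   # base rate: 20% major in the humanities
p_desc_given_cs  = 0.80   # the description fits most CS majors...
p_desc_given_hum = 0.05   # ...and few humanities majors

# Comparing just these two majors, the posterior probabilities:
num_cs, num_hum = p_cs * p_desc_given_cs, p_hum * p_desc_given_hum
print(f"P(CS | description)         ~ {num_cs / (num_cs + num_hum):.0%}")
print(f"P(humanities | description) ~ {num_hum / (num_cs + num_hum):.0%}")
# System 1 says 'obviously CS'. The math says ~71% vs ~29%: the 'diagnostic'
# description is heavily diluted by the 7x base-rate gap it neglects.
```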

Another way to combat this bias, and 2 points to remember:

instructing people to “think like a statistician” enhanced the use of base-rate information, while the instruction to “think like a clinician” had the opposite effect.

...

[1.] base rates matter, even in the presence of evidence about the case at hand

...

[2.] intuitive impressions of the diagnosticity of evidence are often exaggerated

The fact that one has to come up with ways to try to engage System 2 is an example of the laziness of System 2.

I think I had heard of this one before. It goes beyond bias to just plain wrong thinking: the conjunction fallacy

which people commit when they judge a conjunction of two events (here, bank teller and feminist) to be more probable than one of the events (bank teller) in a direct comparison.
However likely event A is by itself, the probability of event A and a 2nd event B both being true has to be less than or equal to the probability of event A alone. But if event B helps tell a story, or could be causally related to event A, or otherwise makes System 1 happy, System 1 completely ignores logic.
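In symbols, the one-line justification:

```latex
P(A \cap B) = P(A)\,P(B \mid A) \le P(A), \quad \text{since } P(B \mid A) \le 1
```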
The uncritical substitution of plausibility for probability has pernicious effects on judgments when scenarios are used as tools of forecasting.
I found this next statistical bias to be fascinating. The statistical concept of regression to the mean almost seems a little spooky. It simply says that, for values that cluster around an average, a high value will likely be followed by a lower value, and vice versa. System 1 completely ignores this law, preferring its many heuristics instead. Ha ha, I believe this anecdote about trials.
the statistician David Freedman used to say that if the topic of regression comes up in a criminal or civil trial, the side that must explain regression to the jury will lose the case. Why is it so hard? The main reason for the difficulty is a recurrent theme of this book: our mind is strongly biased toward causal explanations and does not deal well with “mere statistics.” When our attention is called to an event, associative memory will look for its cause — more precisely, activation will automatically spread to any cause that is already stored in memory. Causal explanations will be evoked when regression is detected, but they will be wrong because the truth is that regression to the mean has an explanation but does not have a cause.
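Regression to the mean is also easy to simulate; here's a sketch (Python; the skill-plus-luck model and its parameters are my own illustrative assumptions):

```python
import random

random.seed(7)

# Score = stable skill + one-shot luck. Day 2 keeps the skill, rerolls the luck.
skill = [random.gauss(100, 10) for _ in range(100_000)]
day1 = [s + random.gauss(0, 10) for s in skill]
day2 = [s + random.gauss(0, 10) for s in skill]

# Follow the day-1 top decile: they regress with no cause whatsoever.
cutoff = sorted(day1)[int(0.9 * len(day1))]
top = [i for i, d in enumerate(day1) if d >= cutoff]
avg1 = sum(day1[i] for i in top) / len(top)
avg2 = sum(day2[i] for i in top) / len(top)
print(f"Day-1 top decile: {avg1:.1f} on day 1, {avg2:.1f} on day 2")
# Roughly 125 then 112: half of their day-1 edge over the mean of 100 was luck,
# so half of it evaporates on day 2. Nothing happened; that's the point.
```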
Knowing how our minds mishandle estimates by ignoring regression to the mean gives System 2 a tool to do a better job. But
Following our intuitions is more natural, and somehow more pleasant, than acting against them.

...

Furthermore, you should know that correcting your intuitions may complicate your life. A characteristic of unbiased predictions is that they permit the prediction of rare or extreme events only when the information is very good.

So, despite all of the faults of System 1, it still does some pretty amazing work.


Part III is titled "Overconfidence". We put much more faith in our intuitions via System 1 than we should. But we would probably find life terrifying without some overconfidence. However, there are several places in our society where this overconfidence is passed off as skill or wisdom.

1st we meet, via Nassim Taleb and "The Black Swan", the narrative fallacy

to describe how flawed stories of the past shape our views of the world and our expectations for the future.
We are always looking for the narrative, for the story that makes sense of what is happening. Sometimes this leads us astray. Sometimes there really is no narrative.
The ultimate test of an explanation is whether it would have made the event predictable in advance.

...

The human mind does not deal well with nonevents.

...

Paradoxically, it is easier to construct a coherent story when you know little, when there are fewer pieces to fit into the puzzle.

Our minds try to keep the story going in the past, as well as the present and the future. Again, we all overestimate how good we are at this.
Your inability to reconstruct past beliefs will inevitably cause you to underestimate the extent to which you were surprised by past events. Baruch Fischhoff first demonstrated this “I-knew-it-all-along” effect, or hindsight bias
Everyone's 20/20 hindsight leads to outcome bias.
Actions that seemed prudent in foresight can look irresponsibly negligent in hindsight.
Kahneman does not seem to be a fan of business books.
The sense-making machinery of System 1 makes us see the world as more tidy, simple, predictable, and coherent than it really is. The illusion that one has understood the past feeds the further illusion that one can predict and control the future. These illusions are comforting. They reduce the anxiety that we would experience if we allowed ourselves to fully acknowledge the uncertainties of existence. We all have a need for the reassuring message that actions have appropriate consequences, and that success will reward wisdom and courage. Many business books are tailor-made to satisfy this need.

...

Consumers have a hunger for a clear message about the determinants of success and failure in business, and they need stories that offer a sense of understanding, however illusory.

...

And even if you had perfect foreknowledge that a CEO has brilliant vision and extraordinary competence, you still would be unable to predict how the company will perform with much better accuracy than the flip of a coin.

Kahneman recounts how he came up with a rating methodology for army trainees. He felt good about it, it seemed logically consistent - and had pretty much 0 predictive value. But when the next group of trainees came through and he applied his methodology again, he felt just as good about it as before, even though he knew that statistically it had been proven worthless. He named this the illusion of validity.

He moves on to look at the predictive powers of stock market workers. As much as he is not a fan of business book authors, he is even less a fan of stock brokers.

my questions about the stock market have hardened into a larger puzzle: a major industry appears to be built largely on an illusion of skill.

...

on average, the most active traders had the poorest results, while the investors who traded the least earned the highest returns.

...

men acted on their useless ideas significantly more often than women, and that as a result women achieved better investment results than men.

...

the key question is whether the information about the firm is already incorporated in the price of its stock. Traders apparently lack the skill to answer this crucial question, but they appear to be ignorant of their ignorance.

Next up for skewering is the pundit class, be they political or economic. Here are the results of a study of 284 pundits, asked to pick 1 of 3 possibilities for several near-future political events.
The experts performed worse than they would have if they had simply assigned equal probabilities to each of the three potential outcomes.

...

Those who know more forecast very slightly better than those who know less. But those with the most knowledge are often less reliable. The reason is that the person who acquires more knowledge develops an enhanced illusion of her skill and becomes unrealistically overconfident.

We now meet Paul Meehl.
Meehl reviewed the results of 20 studies that had analyzed whether clinical predictions based on the subjective impressions of trained professionals were more accurate than statistical predictions made by combining a few scores or ratings according to a rule.

...

The number of studies reporting comparisons of clinical and statistical predictions has increased to roughly two hundred, but the score in the contest between algorithms and humans has not changed. About 60% of the studies have shown significantly better accuracy for the algorithms. The other comparisons scored a draw in accuracy, but a tie is tantamount to a win for the statistical rules, which are normally much less expensive to use than expert judgment. No exception has been convincingly documented.

...

In every case, the accuracy of experts was matched or exceeded by a simple algorithm.

I am totally onboard with these conclusions. I have been saying for decades how I would trust an AI to diagnose an illness more than a human physician. And I believe that studies in medical areas have shown that far better results are obtained via working from a checklist rather than relying on a physician's intuition. The TV show "House" always cracked me up. Every episode the brilliant clinician House went through 5-10 wrong diagnoses before finally stumbling onto the right one.

Why are experts inferior to algorithms? One reason, which Meehl suspected, is that experts try to be clever, think outside the box, and consider complex combinations of features in making their predictions. Complexity may work in the odd case, but more often than not it reduces validity. Simple combinations of features are better.
Even when using formulas, simpler is better. From the work of Robyn Dawes:
The dominant statistical practice in the social sciences is to assign weights to the different predictors by following an algorithm, called multiple regression, that is now built into conventional software. The logic of multiple regression is unassailable: it finds the optimal formula for putting together a weighted combination of the predictors. However, Dawes observed that the complex statistical algorithm adds little or no value.

...

formulas that assign equal weights to all the predictors are often superior, because they are not affected by accidents of sampling.
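Dawes's finding is easy to reproduce in a toy simulation (Python/numpy; the data-generating process and all its parameters are my own invention): fit regression weights on a small training sample, then compare them with blunt equal weights on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
TRUE_W = np.array([0.5, 0.4, 0.3, 0.2])   # every predictor genuinely helps a bit

def make(n):
    """Simulated cases: 4 standardized predictors plus a noisy outcome."""
    X = rng.standard_normal((n, len(TRUE_W)))
    return X, X @ TRUE_W + 1.5 * rng.standard_normal(n)

def one_run(n_train=30, n_test=2000):
    Xtr, ytr = make(n_train)                            # small sample, like many studies
    Xte, yte = make(n_test)                             # fresh out-of-sample cases
    w_ols, *_ = np.linalg.lstsq(Xtr, ytr, rcond=None)   # 'optimal' in-sample weights
    score = lambda w: np.corrcoef(Xte @ w, yte)[0, 1]
    return score(w_ols), score(np.ones(len(TRUE_W)))

ols, equal = map(np.mean, zip(*[one_run() for _ in range(500)]))
print(f"multiple regression, mean out-of-sample r = {ols:.3f}")
print(f"equal weights,       mean out-of-sample r = {equal:.3f}")
# Equal weights typically win: the regression's 'optimal' weights are
# partly fit to accidents of the 30-case sample.
```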

But, most people still prefer experts to algorithms. I think this will change, as we realize that our new robot overlords only want the best for us.
The story of a child dying because an algorithm made a mistake is more poignant than the story of the same tragedy occurring as a result of human error, and the difference in emotional intensity is readily translated into a moral preference.
I like that, after showing us the shortcomings of intuition, Kahneman gives us techniques for harnessing our intuition, by combining it with the hard data.
intuition adds value even in the justly derided selection interview, but only after a disciplined collection of objective information and disciplined scoring of separate traits.
So, can we ever trust the intuitions of a skilled expert? Kahneman says yes, depending on the skill.
The answer comes from the two basic conditions for acquiring a skill:
  • an environment that is sufficiently regular to be predictable
  • an opportunity to learn these regularities through prolonged practice
When both these conditions are satisfied, intuitions are likely to be skilled.
But we should also note the converse of this answer:
intuition cannot be trusted in the absence of stable regularities in the environment.

...

Whether professionals have a chance to develop intuitive expertise depends essentially on the quality and speed of feedback, as well as on sufficient opportunity to practice.

Kahneman recounts another personal anecdote introducing the outside view vs the inside view. The outside or objective view of, say, a project's success can differ significantly from the inside or subjective view of those involved.
Amos and I coined the term planning fallacy to describe plans and forecasts that
  • are unrealistically close to best-case scenarios
  • could be improved by consulting the statistics of similar cases
"Everybody else failed at this, but we're going to make it work! This time for sure!" But quite often we don't even bother to look at what everybody else has done.
people who have information about an individual case rarely feel the need to know the statistics of the class to which the case belongs.
The treatment for the planning fallacy is called reference class forecasting.

The optimistic bias, which Kahneman says "may well be the most significant of the cognitive biases", is part of what creates the planning fallacy. Chapter 24 is titled "The Engine of Capitalism" - probably everyone who creates a startup or small business does so with help from the optimistic bias.

The optimistic risk taking of entrepreneurs surely contributes to the economic dynamism of a capitalistic society, even if most risk takers end up disappointed.
So where do entrepreneurs go wrong? What cognitive biases undermine their efforts?
  • We focus on our goal, anchor on our plan, and neglect relevant base rates, exposing ourselves to the planning fallacy.
  • We focus on what we want to do and can do, neglecting the plans and skills of others.
  • Both in explaining the past and in predicting the future, we focus on the causal role of skill and neglect the role of luck. We are therefore prone to an illusion of control.
  • We focus on what we know and neglect what we do not know, which makes us overly confident in our beliefs.
Also contributing is the above-average effect:
"90% of drivers believe they are better than average"

...

people tend to be overly optimistic about their relative standing on any activity in which they do moderately well.

Here's a good line, which I hadn't heard before, about one of those skilled experts we've talked about:
President Truman famously asked for a “one-armed economist” who would take a clear stand; he was sick and tired of economists who kept saying, “On the other hand…”
Some more examples of optimistic bias:
inadequate appreciation of the uncertainty of the environment inevitably leads economic agents to take risks they should avoid.

...

“clinicians who were ‘completely certain’ of the diagnosis antemortem were wrong 40% of the time.”

Kahneman again provides us with a tool to use against these biases. In this case, it is the premortem, from Kahneman's "adversarial collaborator" Gary Klein:
“Imagine that we are a year into the future. We implemented the plan as it now exists. The outcome was a disaster. Please take 5 to 10 minutes to write a brief history of that disaster.”


Part IV begins by reintroducing Richard Thaler's Humans vs Econs. The definition of an Econ, from Bruno Frey:

“The agent of economic theory is rational, selfish, and his tastes do not change.”
Meanwhile we all know that Humans are nothing like Econs.
To a psychologist, it is self-evident that people are neither fully rational nor completely selfish, and that their tastes are anything but stable.
In this part we start to get overlap with Thaler's "Misbehaving" book blogged here. I will move quickly through the overlapping material.

The rational agent model is founded on expected utility theory. Kahneman and Tversky expanded on expected utility theory with their prospect theory, which occupied most of Chapter 5 of Thaler's "Misbehaving". The roots of these theories go back to Daniel Bernoulli in the 18th century.

Kahneman refreshes prospect theory with the ideas of this book.

it’s clear now that there are three cognitive features at the heart of prospect theory. They play an essential role in the evaluation of financial outcomes and are common to many automatic processes of perception, judgment, and emotion. They should be seen as operating characteristics of System 1.
  • Evaluation is relative to a neutral reference point, which is sometimes referred to as an “adaptation level.” ...
  • A principle of diminishing sensitivity applies to both sensory dimensions and the evaluation of changes of wealth. ...
  • The third principle is loss aversion. ... The "loss aversion ratio" has been estimated in several experiments and is usually in the range of 1.5 to 2.5.
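To put numbers on these three features, here is a minimal sketch of the prospect-theory value function (Python; the α = 0.88 and λ = 2.25 parameter values are Tversky & Kahneman's 1992 estimates, not from this book, with λ sitting inside the 1.5-2.5 loss-aversion range just quoted):

```python
def value(x, alpha=0.88, lam=2.25):
    """Prospect-theory value of a gain or loss x, relative to the reference point.

    alpha < 1 gives diminishing sensitivity; lam > 1 gives loss aversion.
    (Parameter values are Tversky & Kahneman's 1992 estimates.)
    """
    return x ** alpha if x >= 0 else -lam * (-x) ** alpha

for x in (100, -100, 1000, -1000):
    print(f"v({x:5d}) = {value(x):8.1f}")
# A $100 loss stings ~2.25x as much as a $100 gain pleases, and a $1000 gain
# feels like well under 10x a $100 gain - diminishing sensitivity at work.
```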
We revisit the endowment effect, which was chapter 2 of "Misbehaving". I characterized it there as "a bird in the hand is worth 2 in the bush".

We learn more about loss aversion. In "Misbehaving" this is called "risk aversion". This is one of those things that evolution as "survival of the fittest" explains perfectly.

The brains of humans and other animals contain a mechanism that is designed to give priority to bad news. By shaving a few hundredths of a second from the time needed to detect a predator, this circuit improves the animal’s odds of living long enough to reproduce. The automatic operations of System 1 reflect this evolutionary history. No comparably rapid mechanism for recognizing good news has been detected. Of course, we and our animal cousins are quickly alerted to signs of opportunities to mate or to feed, and advertisers design billboards accordingly. Still, threats are privileged above opportunities, as they should be.
Kahneman agrees that this is the root of the impulse to conservatism.
Loss aversion is a powerful conservative force that favors minimal changes from the status quo in the lives of both institutions and individuals.
Loss aversion shows up in law and business as findings that people are most incensed when companies "break informal contracts with workers or customers". I know I have personally been annoyed when a software product drops support for a feature I liked.

Loss aversion interacts with our poor probability judgement skill, and seems to only be overcome when we are desperate. We seem to be pretty good at understanding "nothing", and also "all", as in "all or nothing". But add a small possibility to "nothing", and we get the possibility effect, "which causes highly unlikely outcomes to be weighted disproportionately more than they “deserve.”". The same thing happens at the "all" end via the certainty effect.

Outcomes that are almost certain are given less weight than their probability justifies.
So the expectation principle, which goes back to Bernoulli, "by which values are weighted by their probability, is poor psychology."

Quantifying these effects, Kahneman and Tversky "carried out a study in which we measured the decision weights that explained people's preferences for gambles with modest monetary stakes." Here are their numbers:

Next, we get the fourfold pattern, shown below.

The top left cell was discussed by Bernoulli:

people are averse to risk when they consider prospects with a substantial chance to achieve a large gain. They are willing to accept less than the expected value of a gamble to lock in a sure gain.

... the bottom left cell explains why lotteries are popular.

... The bottom right cell is where insurance is bought.

The top right cell they found surprising, and a source of new insights.
we were just as risk seeking in the domain of losses as we were risk averse in the domain of gains.

...

Many unfortunate human situations unfold in the top right cell. This is where people who face very bad options take desperate gambles, accepting a high probability of making things worse in exchange for a small hope of avoiding a large loss. Risk taking of this kind often turns manageable failures into disasters.

Despite its evolutionary value, loss aversion now works against us.
[this is] how terrorism works and why it is so effective: it induces an availability cascade.
Another cognitive bias is referred to as denominator neglect. Given 1 winning chance in 10 or 8 in 100, most people will pick 8 in 100 - more chances to win! They ignore the denominator, which gives the 2nd choice an 8% probability of success vs 10% for the 1st choice.

Here's another one that is almost hard to believe, but it has been borne out by studies. People will be more impacted by "10 out of 100" than "10%". We can add percentages to probability and statistics as things that our mind, particularly System 1, does not handle well.

The discrepancy between probability and decision weight shown above in Table 4 is characterized as

  • People overestimate the probabilities of unlikely events.
  • People overweight unlikely events in their decisions.
This gives us 2 more cognitive biases, overestimation and overweighting.
The successful execution of a plan is specific and easy to imagine when one tries to forecast the outcome of a project. In contrast, the alternative of failure is diffuse, because there are innumerable ways for things to go wrong. Entrepreneurs and the investors who evaluate their prospects are prone both to overestimate their chances and to overweight their estimates.

...

Ralph Hertwig and Ido Erev note that "chances of rare events (such as the burst of housing bubbles) receive less impact than they deserve according to their objective probabilities."

The discussion of rare events concludes with a scary sentence, particularly when you think about how successful we will be in addressing the climate crisis.
Obsessive concerns ..., vivid images ..., concrete representations ..., and explicit reminders ... all contribute to overweighting. And when there is no overweighting, there will be neglect. When it comes to rare probabilities, our mind is not designed to get things quite right. For the residents of a planet that may be exposed to events no one has yet experienced, this is not good news.
We are given more evidence of the mind's lack of skill at integrating or summing in a discussion of narrow framing - making lots of small decisions - vs broad framing - coming up with a comprehensive plan. Econs would use broad framing most of the time, but real Humans will almost always use narrow framing.

Kahneman gives us another tool to use against this bias: risk policies.

Decision makers who are prone to narrow framing construct a preference every time they face a risky choice. They would do better by having a risk policy that they routinely apply whenever a relevant problem arises.
Next we have a discussion of mental accounts - these were discussed in Chapter 11 of "Misbehaving".
mental accounts are a form of narrow framing; they keep things under control and manageable by a finite mind.
Our mental accounts, where we track the value and cost of various aspects of our life, are of course prone to cognitive biases.
finance research has documented a massive preference for selling winners rather than losers — a bias that has been given an opaque label: the disposition effect ... - an instance of narrow framing.

So you sell a winner, you feel good about yourself - I'm a winner! But when you sell a loser, you are admitting a failure and taking a realized loss right then and there. Rather than doing that, most people will keep the loser - an instance of the sunk-cost fallacy, discussed in Chapter 8 of "Misbehaving". Speaking of a corporate manager who refuses to give up on a losing project, Kahneman states:

In the presence of sunk costs, the manager’s incentives are misaligned with the objectives of the firm and its shareholders, a familiar type of what is known as the agency problem.
On a different note, Kahneman discusses the emotion of regret:
The fear of regret is a factor in many of the decisions that people make ...
Kahneman references the claims of Daniel Gilbert on regret - more advice for us in dealing with our biases.
you should not put too much weight on regret; even if you have some, it will hurt less than you now think.
2 more biases. The 1st deals with "sins of commission" vs "sins of omission". Logically, if the (bad) outcome is the same, what difference does it make if it came about through action rather than inaction? Maybe it gets back to the "first do no harm" principle of medicine, which is yet another flavor of loss aversion.
people expect to have stronger emotional reactions (including regret) to an outcome that is produced by action than to the same outcome when it is produced by inaction.
The 2nd bias is the "taboo tradeoff against accepting any increase in risk". Particularly where children are involved, we don't do a good job evaluating how to address risk in the face of limited resources. We refuse any increase in risk.

Preference reversals are discussed in chapter 6 of "Misbehaving". People will reverse their choices illogically depending on how the choices are framed. The effect frequently shows up when options are weighed separately rather than together. Christopher Hsee addressed this with his evaluability hypothesis. Kahneman reminds us again that

rationality is generally served by broader and more comprehensive frames.

...

Broader frames and inclusive accounts generally lead to more rational decisions.


Part V is titled "Two Selves". We are introduced to another dichotomy to go along with System 1/System 2 and Econs/Humans: experienced utility and the experiencing self vs decision utility and the remembering self.

The experiencing self is the one that answers the question: “Does it hurt now?” The remembering self is the one that answers the question: “How was it, on the whole?”

...

Confusing experience with the memory of it is a compelling cognitive illusion

...

What we learn from the past is to maximize the qualities of our future memories, not necessarily of our future experience. This is the tyranny of the remembering self.

Kahneman describes an experiment that produced results that seem to totally fly in the face of common sense. Subjects underwent 2 slightly painful experiences: having a hand placed in cold water for 60 seconds; and having a hand placed in cold water for 60 seconds, followed by slightly warming the water for 30 seconds. When subjects were then asked which experience they would choose for a repeat performance, they chose the 2nd! They chose more pain rather than less! This shows the dominance of the remembering self, and yet another instance of our minds' inability to integrate or sum: "System 1 represents sets by averages, norms, and prototypes, not by sums." We get 2 more components of our mental makeup (a toy calculation follows the list):
  • Peak-end rule: The global retrospective rating was well predicted by the average of the level of pain reported at the worst moment of the experience and at its end.
  • Duration neglect: The duration of the procedure had no effect whatsoever on the ratings of total pain.
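The cold-water result falls straight out of those two rules; here's the promised toy calculation (Python; the pain scores are invented for illustration):

```python
# Pain samples over time (0-10 scale); the numbers are invented for illustration.
short_trial = [7, 7, 7, 7]          # 60 seconds of cold water
long_trial  = [7, 7, 7, 7, 5, 4]    # the same 60 seconds, plus 30 warmer seconds

def remembered_pain(samples):
    """Peak-end rule: memory scores the average of the worst moment and the end."""
    return (max(samples) + samples[-1]) / 2

for name, trial in [("short", short_trial), ("long", long_trial)]:
    print(f"{name}: total pain = {sum(trial):2d}, remembered pain = {remembered_pain(trial)}")
# short: total 28, remembered 7.0
# long:  total 42, remembered 5.5
# More total pain, but a better memory (duration neglect), so the
# remembering self votes to repeat the long trial.
```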
Interestingly, these 2 components also work the same for life as a whole.
A story is about significant events and memorable moments, not about time passing. Duration neglect is normal in a story, and the ending often defines its character.

...

In intuitive evaluation of entire lives as well as brief episodes, peaks and ends matter but duration does not.

I'm not sure I agree with Kahneman's last conclusion on this subject. These 2 selves could maybe be characterized by the Zen self vs the non-Zen self. Maybe a Zen master who can really live totally in the here and now contradicts this conclusion. But, I have no knowledge that such a Zen master exists.
Odd as it may seem, I am my remembering self, and the experiencing self, who does my living, is like a stranger to me.
Talking about one's experience of life as a whole, Kahneman introduces us to Mihaly Csikszentmihalyi and his concept of flow - a state that some artists experience in their creative moments and that many other people achieve when enthralled by a film, a book, or a crossword puzzle. Hah, I knew I recognized that name, I mention having read his book "Creativity" in my 3rd blog post, May 6, 2003. Kahneman mentions the studies on overall well-being vs monetary status.
The satiation level beyond which experienced well-being no longer increases was a household income of about $75,000 in high-cost areas (it could be less in areas where the cost of living is lower). The average increase of experienced well-being associated with incomes beyond that level was precisely zero.
My wife doesn't read this blog, so I think it's safe to include this here.

the decision to get married reflects, for many people, a massive error of affective forecasting.

...

Daniel Gilbert and Timothy Wilson introduced the word miswanting to describe bad choices that arise from errors of affective forecasting.

Given what we have learned about our minds' inability to sum over time, it is not surprising that we are not very good at determining what will make us happy, past, present, or future. When people are asked how life is going now, the mood heuristic is ready to jump in and report whatever their current mood is.

We now get our final cognitive bias:

the focusing illusion, which can be described in a single sentence:
Nothing in life is as important as you think it is when you are thinking about it.
...

The essence of the focusing illusion is WYSIATI, giving too much weight to the climate [of CA vs the midwest, in an experiment], too little to all the other determinants of well-being.

...

The mistake that people make in the focusing illusion involves attention to selected moments and neglect of what happens at other times. The mind is good with stories, but it does not appear to be well designed for the processing of time.

One final excerpt that I thought was interesting, on how we acclimate to novelty:
over time, with few exceptions, attention is withdrawn from a new situation as it becomes more familiar. The main exceptions are chronic pain, constant exposure to loud noise, and severe depression.


This was a very enjoyable and informative book. System 1, with its heuristic-based intuitions providing us with the snap judgements that help keep us alive, is a marvel of evolution. Of course, every part of life is a "marvel of evolution". But, we now have a laundry list of things System 1 is NOT good at: percentages, probability, statistics, integration, summation, broad framing.

With regard to economics, we again get the message that "Misbehaving" delivered: that Econs are a complete and utter fiction. I think in "Misbehaving" a statement is made to the effect that "Yeah, there are no Econs, but without them we couldn't have developed economics". So Econs are a simplifying assumption. I continue to have the nagging thought that it is too big a simplification, such that I question the entire framework of economics.

One of the "killer apps" for economics is predicting boom and bust cycles. I was around for the dotcom bubble in the late 90s, the housing bubble of the mid 00s, and the Web 2.0 (social media) bubble, which is still active. In all 3 of these, I observed the same characteristics. Here are examples of people's (including my) thinking during these bubbles:

  • This is a sure thing.
  • Everybody else is making big $$$.
  • I'm going to miss out.
  • I better pile on.
I think all of these are indicators of herd mentality. I have yet to come across attempts to incorporate this mammalian (?) behavior into economics. (Hmmm, the Wikipedia article says "herd behavior is closely studied by behavioral finance experts in order to help predict future economic crises". So, never mind? No, it is a slim article, the above quote references a single paper, and there has not been a book written on the topic in 100 years.)

Another place where I think herd mentality could be the secret sauce is predicting demand. Fashion, fads, and "keeping up with the Joneses", expertly manipulated by the marketing machine, all incorporate herd behavior.

It's funny, my hero (before he became such a Hillary shill) Paul Krugman says he wanted to work on psychohistory like Hari Seldon in Asimov's "Foundation" novels, and economics is what is closest. But psychohistory was all about predicting herd behavior. So where is it, really, in modern economics? I guess I'll have to check out that paper.

On a final personal note, I find it interesting that I used to do a lot more reading in cognitive science, and found cognitive illusions particularly interesting. I start studying (slowly) economics, and find, in behavioral economics, the integration of those concepts. In another life behavioral economics could be a good field for me.