I have a confession to make: one of my favorite intellectual mistresses is Economics.
Most people don’t get to see her really glamorous parts. I mean, seriously: things like Investment Banking or Fiscal Policy can turn you on, but if you get a chance to check out Behavioral Economics, it will set your entire limbic system on fire.
Behavioral Economics has been (deservedly) taking over the world of economics lately. This new field radically changes the way we understand consumer behavior and how companies should approach people if they are to win their hearts. It’s the drastic opposite of classical economics, which assumes that humans are rational agents (homo economicus). We know humans aren’t that rational, and clinging to that assumption can only lead to hazardous corporate strategies and poor economic policies. My favorite analogy is that “If humans were comic book characters, we’d be more closely related to Homer Simpson than to Superman.” And we have the research from Cognitive Psychology to back that up.
The task for Behavioral Economists now is to compel companies to build business strategies on the new discoveries in the field, as well as to convince (with not much luck so far) policy makers to Make Policy for Real not Ideal Humans. On a positive note, the 2015 World Development Report [PDF] from the World Bank quite surprisingly focused on how Behavioral Economics can help improve development policies. More on that some other day.
Anyway, I was lucky to learn Behavioral Economics from the best: the works of Daniel Kahneman. I had the chance to read only one of his papers but I took time to study his magnum opus: Thinking, Fast and Slow. (There are some better books for a gentler introduction to the field though.)
This is no review of the book.
That being said, the book explores many fundamental concepts of the field but most notably it lays down one of the best references of “cognitive biases and heuristics”.
Today, I’m interested in exploring one of them.
The Planning Fallacy
Humans cannot plan. Period.
You may already know this, at least from that homework assignment which didn’t get finished even though you thought you could do it in the last two hours before the deadline.
But it can get more serious than that.
We see corporate projects as well as various government projects costing much more money and taking much longer than “anticipated”. In such cases, people will shout that it’s because of the inefficiency or corruption of some bureaucrats. It very likely is. However, in some if not most situations, it can hint at a more fundamental problem of the human condition: we suck at planning. (Remember Hanlon’s Razor: never attribute to malice that which is adequately explained by stupidity.)
Ask anyone for a “realistic” forecast about anything, and most of them will envision everything going exactly as planned, with no unexpected delays or unforeseen catastrophes (the same vision as their “best case”). And then reality will sink in.
It’s a pattern and the experiments to prove it have been replicated many times.
Buehler et al. (1995) asked their students for estimates of when they (the students) thought they would complete their personal academic projects. Specifically, the researchers asked for estimated times by which the students thought it was 50%, 75%, and 99% probable their personal projects would be done. Would you care to guess how many students finished on or before their estimated 50%, 75%, and 99% probability levels?
13% of subjects finished their project by the time they had assigned a 50% probability level;
19% finished by the time they had assigned a 75% probability level;
and only 45% (less than half!) finished by the time of their 99% probability level.
As Buehler et al. (2002) wrote, “The results for the 99% probability level are especially striking: Even when asked to make a highly conservative forecast, a prediction that they felt virtually certain that they would fulfill, students’ confidence in their time estimates far exceeded their accomplishments.”
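To make that overconfidence gap concrete, here is a minimal sketch in Python (purely illustrative; it only restates the numbers above) comparing the confidence the students reported with how many of them actually finished in time:

```python
# Minimal restatement of the Buehler et al. (1995) numbers quoted above.
# Keys: the confidence levels students were asked about.
# Values: the fractions of students who actually finished by those estimated times.
stated_vs_actual = {0.50: 0.13, 0.75: 0.19, 0.99: 0.45}

for confidence, finished in stated_vs_actual.items():
    gap = confidence - finished
    print(f"Felt {confidence:.0%} sure -> only {finished:.0%} finished "
          f"(overconfidence gap: {gap:.0%})")
```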
That’s why when I am starting to work on a new project, I love to constantly remind myself of Hofstadter’s law.
Hofstadter’s Law: It always takes longer than you expect, even when you take into account Hofstadter’s Law.
Discussions on the Planning Fallacy and how to fight it can get interesting and pretty long.
That’s not my mission today. What I want today is to introduce a new, somewhat similar bias that hasn’t been researched much (if at all, as far as I know).
The If-Would Fallacy
Premise: If Klara and Alois Hitler had made love ten minutes earlier on the night that Adolf Hitler was conceived, a different sperm would have fertilized the egg (or a different egg), resulting in someone with a completely different DNA structure, or in no fertilization at all. Adolf would not have been born.
Claim: The Nazis would not have killed all those innocent people. And the twentieth century would have been much better. Right?
There is also a chance that one of the guys Adolf Hitler conquered would have turned out to be, ethically speaking, an even bigger danger. Or maybe the nuclear crises that later followed would have resulted in an apocalypse, wiping out all of us. There was plenty of room for things to have been worse, too.
When we imagine what would have happened under possible circumstances that didn’t happen, we fail to assess realistic probabilities and fall for what I’m calling the “If-Would Fallacy”, just like we do for the Planning Fallacy.
Mathematically speaking, we assume that if F is the function that governs the Universe, then for every input circumstance X and small perturbation alpha, F(X + alpha) will be somewhere close to F(X), and close to what we wish for. (Simplicity is all that our brain is trying to achieve, after all.)
I have science on my side. Take the “butterfly effect”, which simply states that a small change in one state of a deterministic nonlinear system can result in large differences in a later state. The idea was introduced by Edward Lorenz, who famously wondered whether the flap of a butterfly’s wings in Brazil could set off a tornado in Texas.
I don’t want this essay to get mathematical (it’s too early to bring back nightmares about my Electromagnetism lectures at KAIST anyway), but you get my point. A small change in the past could have led to wildly different, unanticipated scenarios.
Chaos: When the present determines the future, but the approximate present does not approximately determine the future.
And chaos is, ladies and gentlemen, what our universe is.
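If you would rather see the point in a dozen lines of code than in equations, here is a toy sketch using the logistic map, a standard textbook example of a chaotic system (nothing here comes from Lorenz’s own weather model): two histories that start a billionth apart end up nowhere near each other, which is exactly why assuming F(X + alpha) stays close to F(X) is wishful thinking.

```python
# Toy illustration of sensitive dependence on initial conditions, using the
# logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 (a chaotic regime).

def evolve(x0, r=4.0, steps=50):
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

x = 0.2                 # "what actually happened"
x_alt = 0.2 + 1e-9      # "what would have happened if..."

print(evolve(x))        # F(X)
print(evolve(x_alt))    # F(X + alpha): nowhere near F(X)
```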
But we cling onto our wishes. It’s more like cosmic porn: fantasizing about a parallel universe where everything is in perfect order, and giving it a higher probability than the infinite number of other possible parallel universes. So human.
The root of this problem is that when we assign probabilities to alternative realities, we are primed by the actual reality we have already lived, and hence wrongly attribute high priors to the alternate realities that resemble it.
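Loosely caricatured (this is a hypothetical sketch of the bias, not a model drawn from any study), it looks something like this: an “anchored” prior piles belief onto the alternate histories that resemble the one we lived, while a neutral prior would spread it far more evenly.

```python
# Hypothetical caricature of the priming effect described above.
# Each imagined alternate history gets weighted by how much it resembles
# the reality we actually lived; that anchored prior is then compared with
# a neutral prior that spreads belief evenly across the scenarios.
resemblance = {  # 1.0 = looks exactly like what actually happened
    "same story, minus the one detail we regret": 0.90,
    "mostly the same story": 0.70,
    "noticeably different story": 0.40,
    "wildly different story": 0.10,
    "catastrophically different story": 0.05,
}

total = sum(resemblance.values())
anchored_prior = {k: v / total for k, v in resemblance.items()}
neutral_prior = 1 / len(resemblance)

for scenario, p in anchored_prior.items():
    print(f"{scenario}: anchored {p:.2f} vs neutral {neutral_prior:.2f}")
```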
And we do this all the time.
Like when you think about what would have happened if you had met this person before she was married.
When you think about how your world would be different if you gave him/her another chance. (S/he just as probably, if not more so, would have died the next day.)
[Hint: If you plan to listen to this song with your significant other around, warn her about the intended pun, or else you will be single by the end of the song.]
For example, it’s possible, yes, that nobody would have discovered the vulnerability in the Miss Rwanda 2015 online voting system if I hadn’t exposed it. But it’s statistically more likely that someone with a background in the technology but not in the ethics of Cybersecurity would have discovered the hole and used it for malicious ends. That would, of course, have been worse than, say, the current situation: having no online voting at all for Miss Rwanda 2016. One can also wonder what would have happened if the people behind the project had responded to my messages before I made the blog post public.
There are many more examples.
What if you really didn’t go to that party, attend that school or order the other ‘last’ drink?
What if you picked a different major, attended a ‘better’ school, or took a different job offer?
We fantasize about those scenarios and make ourselves (or whomever else we want) the main characters of the whole plot. We expect everything to go as it did, except that little thing we want out.
It’s not just you. Everyone falls for the If-Would Fallacy.
Recently, I saw a tweet from Paul Graham, a man for whom I otherwise have deep respect. He went on to say this:
Even if we assume that was merely Jobs’s accomplishment, did Paul Graham take into account the probabilities of everything that could have been different in the course of events if that little thing had changed? Maybe the number of shares he was given played a role in conditioning his performance. Why assume a positive correlation from only one data point? Paul Graham, really?
And he isn’t alone, to be fair. How many of us have suggested that Eduardo Saverin leaving Facebook in 2005 was a bad idea? That if he had stayed and committed himself, he would have had more shares and more millions? But, in truth, Facebook most likely would have fared worse if he had stayed, and he thus would not have become a billionaire today. (Regression towards the mean, anyone?)
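For the skeptical reader, here is a toy simulation of regression towards the mean (made-up numbers, no relation to Facebook’s actual history): when an outcome is part skill and part luck, the cases with the most extreme first outcomes tend to land much closer to average the second time around.

```python
import random

# Toy model: outcome = skill + luck, both standard normal.
# Among the cases whose first outcome was in the top 1%, the second outcome
# (same skill, fresh luck) regresses toward the mean.
random.seed(0)
first, second = [], []
for _ in range(100_000):
    skill = random.gauss(0, 1)
    first.append(skill + random.gauss(0, 1))
    second.append(skill + random.gauss(0, 1))

threshold = sorted(first)[int(0.99 * len(first))]   # top-1% cutoff
pairs = [(f, s) for f, s in zip(first, second) if f >= threshold]

print(f"Average first outcome of the top 1%: {sum(f for f, _ in pairs) / len(pairs):.2f}")
print(f"Average second outcome, same cases:  {sum(s for _, s in pairs) / len(pairs):.2f}")
```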
How much would Brian Acton (WhatsApp co-founder) be worth today had he been offered the job at Facebook back in 2009?
What if LBJ hadn’t won the 1948 Senate election? Would the civil rights bill have passed? Would there have been a Vietnam War? Would we, today, have a black US president?
What would it be like in Westeros if Robb hadn’t sent Theon Greyjoy away?
It is important to note that I’m not undermining the value of learning from past alternative courses of events. I’m just calling for a more calibrated approach, one that avoids being primed by the actual reality we have lived.
More importantly, more focus should go to dealing with the world as it is (correcting what we can) than to fantasizing about how it would have been, because, frankly speaking, we have no idea.
I have many more examples of how the If-Would Fallacy manifests and how it can affect present decisions in real life, even on a large scale (in policy making, startup funding, crisis management, anti-terrorism, ...), and I think this is a topic that should be explored more.
Nevertheless, I’m still convinced that if the field of Behavioral Economics had not been invented, I most probably would be writing about something else.