With facts, “alternative facts”, and statistics everywhere, how can we make wise decisions? Which numbers can be trusted? This article focuses on decision making. For further reading, I welcome you to start here, here, here, and here.
Decision making vs information gathering
When we make decisions we can be biased. In fact, it makes sense to be biased: going with our automatic, intuitive ideas lets us decide faster and almost effortlessly. Choosing a high-risk idea is not wise, even if on average we win more than we lose. Not trying something with a high potential payoff is also unwise, even when we understand that we are not likely to succeed. Either way, we need to evaluate the chances.
Evaluate the chances
How do we evaluate chances? We are not likely to have statistics for exactly the situation we face. Most situations are not repeatable. Suppose we have a great business idea and we know that 9 out of 10 businesses go bankrupt during the first five years. Should we not try? Will we regret it? Suppose our first business was a success. Does that mean the second is also more likely to succeed?
Our attempts are not independent. From every success or failure we are likely to learn something. If we keep trying long enough, we might still succeed eventually. And even someone creative, wise, and connected enough to build successful businesses time after time can become overconfident and fail.
We have two ways of evaluating the chances: the average success ratio across all attempts (the base rate), and an adjustment for the specific case. By the way, if we have already invested in the specific case, we are likely to value it roughly 60% higher than we would evaluate it otherwise.
Publication bias
A serial entrepreneur is more likely to succeed than a newcomer. One of the statistics I read states:
- A previously successful entrepreneur has a 30% success rate in the next venture.
- A previously failed entrepreneur has a 20% success rate in the next venture.
- A first-time entrepreneur has an 18% success rate.
(These numbers represent relatively low-risk sectors.)
According to these statistics, previous success provides roughly a 66% bonus to the success rate, and previous failure roughly an 11% bonus, versus a first-timer. Why, then, do we hear so much PR about first-time entrepreneurs?
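The “bonus” figures are just ratios of the success rates quoted above; a quick sketch of the arithmetic:

```python
# Success rates quoted above (relatively low-risk sectors).
first_timer = 0.18
previously_failed = 0.20
previously_successful = 0.30

# "Bonus" = relative improvement over a first-time entrepreneur.
success_bonus = previously_successful / first_timer - 1  # = 2/3, about 66%
failure_bonus = previously_failed / first_timer - 1      # = 1/9, about 11%

print(f"previous success: +{success_bonus:.1%}")
print(f"previous failure: +{failure_bonus:.1%}")
```

Note that 30/18 is exactly 5/3, so the “66% bonus” is the slightly rounded-down value of 66.7%.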
- People who fail prefer not to talk about it. Maybe the first-time entrepreneur failed a couple of times and simply does not mention it.
- Journalists do not interview failed entrepreneurs, except after massive failures.
- Successful serial entrepreneurs do not want the wrong kind of attention. They want to choose who knows about their successes.
- Press releases are often generated to get further investments. People and funds who are very successful do not really need journalists.
The data that gets published is seriously biased toward newsworthy outcomes. For example, entrepreneurs who never got serious funding or made a relevant filing simply do not appear in the “official” statistics.
Selection bias
At the beginning of COVID-19, the mortality rates reported in the statistics were huge. Then it became clear that only people with severe symptoms were being tested, and that possibly 90% of cases were asymptomatic. Later it became clear that the huge mortality in Italy and Wuhan was partially due to the collapse of local hospitals; when hospitals function properly, the death rates are about 10x lower. So if the measured number was a 3% mortality rate, the actual rate under good conditions was roughly 100x lower: about 10x from undetected asymptomatic cases, times about 10x from functioning hospitals. The lower mortality rate was arguably more dangerous, as so many cases escaped detection and prevention.
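The correction above is just two multiplicative factors. A rough sketch using the article’s illustrative numbers, not real epidemiological data:

```python
measured_mortality = 0.03  # 3% mortality among tested (severe) cases

# Illustrative correction factors from the text, not real epidemiology:
asymptomatic_share = 0.90                          # ~90% of cases never tested
detection_factor = 1 / (1 - asymptomatic_share)    # ~10x more actual cases
hospital_factor = 10                               # functioning hospitals: ~10x fewer deaths

actual_mortality = measured_mortality / (detection_factor * hospital_factor)
print(f"{actual_mortality:.4%}")  # ~0.03%: 100x lower than measured
```

The point is not the exact numbers but the structure: each selection bias enters as a multiplicative correction, and they compound.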
It was very hard to make correct decisions based on the original biased data, and the time and effort required to generate alternative numbers are significant. For the COVID-19 example, one could test ALL the citizens of a small Italian town and see the correct rates for that town. Such an experiment was actually done. The rates were indicative of that town, but not of other places with lower infection rates.
Usually, different experiments provide the upper and lower bounds for our statistical measurement under different assumptions. Which number to use for the actual computation is decided based on our due diligence, intuition, and sensitivity.
How good is due diligence?
Suppose you are a Richie Rich with 100K USD to spare. Someone you know comes to you and tells of an opportunity: “I know this great guy with a cool new idea who needs 60K USD to make millions.” You meet the guy, and his pitch sounds interesting. What can you check about the business?
- Check the people. Good, honest people will have a reasonable CV. They will meet your expert and sound smart. There will be no legal dirt. Of course, there are also criminals who generate a great first impression; they may fabricate an entire persona to impress you. People who lose money to an impostor or a phishing scheme do not talk about it, yet such schemes are common.
- Verify the idea. If the idea is truly new, it will be pretty hard to say anything about it. Ideas that are not new have statistics: usually there is one market leader, a couple of companies or divisions of larger companies that try to compete, and several dozen losers. If the market or technology has changed, the idea counts as new. If it does not sound like nonsense, the experts will OK it.
- Evaluate progress. Any good group of people will try to show amazing progress, and the progress report will indicate an amazing progress rate. That rate was probably generated especially for you, and nobody can sustain huge momentum for long without further investment.
- Review future projections. Experienced people know exactly which indicators matter most and can explain where they took the values for these indicators. The future projections are built on very rosy estimates, tuned to generate the revenue figures that will excite investors.
There are many more due diligence steps that deal with legal factors, accounting, assets… All of these checks are likely to be inconclusive. This is a dating game: each party wants to present itself plausibly, and there are not enough resources for all plausible candidates. Everybody applies some cosmetics to look more attractive. Due diligence may reveal the cosmetics, but it will probably not uncover the big issues; those usually hide under the “too early to know” category. Investors are likely to put their money in one out of 100 companies they evaluate, based on intuition. Possibly because someone else also wants to invest.
Can we trust our intuition?
So even the smartest, best-informed people usually make decisions based on intuition. They gather all the statistics they can, try to improve those statistics with additional information, perform due diligence, and still trust intuition in their most important decisions. Why?
The financial markets are assumed to be sophisticated, and yet… Certain companies develop algorithms and strategies for arbitrage. There are so many factors involved in decision making that no one can consider all of them. So at any given moment there are investors betting on a long-term trend, traders betting on a short-term trend, hedge funds with enough data to bet on a trend reversal, and ordinary people who place their bets almost arbitrarily. All of these players can make money and justify their decisions. When all the technical indicators are great, the fundamental indicators are questionable, and different charts on different time scales show different pictures.
Almost every intuitive decision can be based on some reasonable justification. Moreover, some decisions need to be taken quickly.
What about alternative facts?
We get most of our facts from researchers. Researchers have conflicting measurement techniques, different grants, and personal grudges. This means we will get very different measurements even for a common indicator, and then different indicators altogether. Some studies are global, others American or European. Even the best scientists have a “commonality bias”: they want approximately the same results as their peers.
Politicians, lobbyists, and other interested parties may “invent” facts to support their claims, and it is often very hard to disprove invented facts. The second Gulf War (2003) was largely based on imagined facts presented to the US president and the leaders of other countries. Saddam Hussein at that time did not have any of the capabilities attributed to him. He bluffed, and his bluff was amplified by security agencies and their paranoia.
A strong opposition with strong media can argue about the validity of alternative facts. Congress could investigate President Trump even when everybody knew that impeachment was unlikely to succeed. Certain people can tell lies, believe the lies they tell, and accuse everybody else of lying. If such people are supported by significant political powers, their lies may become a self-fulfilling prophecy.
Decision-makers create statistics to justify their decisions, and they accuse everybody else of data manipulation and ulterior motives. Not good.
P-hacking
Scientists work hard to discover statistically significant results, and these results often make no sense. Quantum physics has findings with an overwhelming experimental basis and no intuitively reasonable theory to support them. So other sciences feel comfortable presenting good statistics backed by unreasonable theories.
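The mechanism behind p-hacking is easy to demonstrate: test enough null hypotheses and some will look “significant” by pure chance. A minimal simulation (pure Python, fair-coin experiments, illustrative thresholds):

```python
import random

random.seed(42)  # fixed seed for reproducibility

def looks_significant(flips=100, threshold=1.96):
    """Flip a fair coin `flips` times; report 'significant' if the head
    count deviates from 50% by more than ~2 standard deviations."""
    heads = sum(random.random() < 0.5 for _ in range(flips))
    sd = (flips * 0.25) ** 0.5  # binomial sd = sqrt(n * p * (1-p)) = 5 here
    return abs(heads - flips / 2) > threshold * sd

# Run 1000 experiments where nothing real is going on.
false_positives = sum(looks_significant() for _ in range(1000))
print(false_positives)  # roughly 5% of the experiments look "significant"
```

Run 1000 pointless experiments and publish only the few dozen “significant” ones, and the literature fills with well-supported nonsense.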
How does this relate to decision-making theories? Science needs to predict future results. Quite often the statistics discovered by scientists are so impressive that they get resources to check their ideas. The entire 70-year history of the Soviet Union is such an experiment on a grandiose scale: Karl Marx had a very nice statistical result with a cool hypothesis, and half of the world decided to test it after several bloody revolutions. The irony: soon after the successful revolutions, the economic trends reversed in the countries that had avoided revolutions.
In God we trust
Since we do not really trust facts, and we still need someone to trust, we trust wise men. Not some mythological wise men, but authorities in their field who read and wrote a lot of books. This approach is possibly the oldest and most intuitive way to get justified information.
If there is an oracle who made many decisions and was usually right, he probably has some way of knowing. This is another statistical bias. If you take a billion people and ask them to make 20 random binary decisions each, about 1,000 of them will get all 20 right. Being correct 20 times in a row, what are the chances? One in a million!
Our decisions are not binary random choices, and even oracles can occasionally be wrong; thanks to publication bias, nobody will read about that. So there are always some oracles who have been successful many times and have something wise to say.
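The billion-people arithmetic above takes two lines to verify:

```python
# A billion people each make 20 independent random binary guesses.
people = 1_000_000_000
decisions = 20

p_all_correct = 0.5 ** decisions           # 1 in 1,048,576: "one in a million"
expected_oracles = people * p_all_correct  # expected number of flawless "oracles"

print(round(expected_oracles))  # ≈ 954
```

Roughly a thousand people look like infallible oracles, purely by chance.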
This does not confirm or deny any heavenly intervention; rare events simply happen occasionally. We take these random results and try to see a trend in them. Then we act according to this trend, hoping that some unseen factors modify the statistics in our favor. This approach may randomly work or fail.
Even the wisest people occasionally fail. We should still respect them; we should simply question everything they say and apply our own judgment. Typically, wise people are honest about the sources of their judgment, so it is recommended to ask them.
What can we do?
Every reasonable method we use adds some information. We want to make informed decisions and avoid cognitive dissonance. So probably, we want to have both facts and intuition supporting the same decision. Other than that, any decision is a bet, even an informed one. Everybody fails from time to time. We should be on average better than the competition and somehow insured against catastrophic failures.