Why do financial crises keep repeating themselves ad nauseam? Is the system broken, or is the human condition fundamentally flawed? As a contemporary example, how did the color of an ordinary dress become an international debate? What does that say about modern society? Did users tweet about the dress because they thought it was something of importance, or simply because others were treating it as such? In social psychology there is a well-researched phenomenon called the “bystander effect”: the greater the number of bystanders, the less likely it is that any one of them will help. One explanation of this effect is that the individual bystander perceives the inaction of others as evidence that help is not needed. The common theme among these seemingly unrelated phenomena is herding mentality: we tend to follow what others are doing, and in turn accelerate adoption through virality.
Have you ever picked a bottle of wine as opposed to the one beside it because it was priced higher (hence “it must be better”), or chosen a particular brand of juice because the bottle looked nicer? Do you tend to buy a stock because it has been going up, or short it when it is going down? The digital age has greatly facilitated the sharing of opinions through social media and online review and rating systems. Speaking from personal experience, I cannot count the number of times I have used Yelp to choose a restaurant, or relied on Amazon’s star rating to pick a brand.
The human herding effect spreads wider and faster today than ever before. But fundamentally, why do we value others’ opinions so much? One cognitive reason may be that we are uncomfortable dealing with uncertainty in decision making. We feel the need, or are sometimes incentivized, to provide a complete rationale for the choices we make, and that rationale is readily available if others are making the same choices. More generally, whether explaining past outcomes or predicting future ones, we tend to place an undue amount of weight on socially validated factors (such as others’ opinions or actions). This has two important negative implications.
The first negative implication is that we tend to ignore or de-emphasize scientifically validated factors. Did an investment strategy perform well because the portfolio manager is smarter than his peers, or because of the specific sector or industry he happened to be in? Have you ever found that a user’s rating is not consistent with his review, especially once you separate the factual portion from the opinion? Scientific inference often requires more work and cannot be conducted as easily; nonetheless, that is no excuse for bypassing due diligence.
The second and perhaps even more important negative implication is that we often grossly underestimate the contribution of pure randomness in determining outcomes. In the financial industry, attributing much of your success to luck is unpalatable to investors and portfolio managers alike. Instead, it is easier, and in a portfolio manager’s best interest, to attribute superior performance to talent and strength of strategy. Social psychologists have long recognized the fundamental unpredictability of human behavior, and observed that we often fail to make sufficient inferential allowance for the role of randomness. There is a famous (or infamous) “curse of dimensionality” in statistical learning: even “big data” become sparse in very high-dimensional spaces, making it fundamentally difficult to obtain models with high predictive power. It is not surprising that this type of scientific challenge is not widely recognized, given its implications for human rationality. However, once we accept our inability to consistently have high confidence in decision making, and embrace this intellectual honesty as a virtue rather than an admission of incompetence, the social pressure for groupthink can be alleviated. Only then can we start to truly appreciate the role of randomness.
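The curse of dimensionality can be seen directly in a small simulation (the point count and dimensions below are arbitrary illustrative choices, not tied to any real dataset): as the dimension grows, pairwise distances among uniformly sampled points concentrate, so “near” and “far” neighbors become nearly indistinguishable and the data effectively become sparse.

```python
import numpy as np

rng = np.random.default_rng(0)

def distance_contrast(dim, n_points=100):
    """Relative spread (max - min) / min of pairwise Euclidean
    distances among points sampled uniformly in the unit hypercube."""
    pts = rng.random((n_points, dim))
    sq = (pts ** 2).sum(axis=1)
    # Squared distances via the Gram matrix; clip tiny negatives from rounding.
    d2 = np.clip(sq[:, None] + sq[None, :] - 2.0 * pts @ pts.T, 0.0, None)
    dists = np.sqrt(d2[np.triu_indices(n_points, k=1)])
    return (dists.max() - dists.min()) / dists.min()

# As the dimension grows, the contrast between nearest and farthest
# neighbors collapses: distance carries less and less information.
for dim in (2, 10, 100, 500):
    print(dim, round(distance_contrast(dim), 2))
```

In two dimensions the farthest pair is many times more distant than the nearest; in hundreds of dimensions the ratio shrinks toward nothing, which is why models that rely on “similar past cases” lose predictive power there.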
What can help us get there? The answer may lie with a simple technique: coin flipping, or more generally, randomization. If each bystander flipped a coin to decide whether or not to help, more people would come forward, eliminating the bystander effect. If quantitative portfolio managers incorporated an appropriate randomizer in their models to account for parameter uncertainty and more importantly, model uncertainty, they would be much less likely to end up in crowded trades. If consumers adopted some warranted randomness in how they chose a brand or a product, they would be less prone to advertising manipulation.
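The bystander case can even be quantified with elementary probability (an illustrative model, not an empirical claim): if each of n bystanders independently helps with probability p, the chance that at least one person helps is 1 - (1 - p)^n, which rises toward certainty as the crowd grows, the opposite of the empirical bystander effect.

```python
# Illustrative model (not empirical data): each of n bystanders
# independently flips a coin with P(help) = p. The probability that
# at least one person helps is 1 - (1 - p)^n, which grows with n.
def p_someone_helps(n, p=0.5):
    return 1 - (1 - p) ** n

for n in (1, 2, 5, 10):
    print(n, p_someone_helps(n))   # rises from 0.5 toward 1.0
```

With a fair coin, a crowd of ten almost guarantees that someone steps forward, whereas under herding a larger crowd makes inaction more likely.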
To clarify, due diligence should never be skipped in favor of coin flipping. Randomization is warranted only for augmenting decision making when a deterministic scientific approach is not attainable, either because resource constraints preclude a sufficiently thorough study, or because the decision outcome is fundamentally unpredictable. A bystander should only use coin flipping if she has the capacity to help. A “random” investment decision should only be made after rigorous research has been conducted and undesirable alternatives have been ruled out.
One benefit of explicitly incorporating a randomizer in decision making is that it forces us to be honest with ourselves about how we arrive at decisions: which part is based on a sound rationale, and which part is no better than a randomizer? This is especially relevant in today’s fast-paced work environments, where we often need to make decisions in a short time frame. How many of those decisions would have been made more honestly with a simple coin flip?
In addition, once we properly delegate some of the decision making to a randomizer, brain power can be freed up so we are less prone to decision fatigue. Just as we do not like to think about how to brush our teeth every morning, it is not worth the brain power to focus on something that is completely subject to randomness.
Interestingly, the greatest beneficiary of a randomizer may be a herd or a system of decision makers, such as our financial system. If we all adopted some level of randomization in decision making, we would not be blindly following others, thus becoming a wiser crowd. At a system level, because of the law of large numbers, “aggregated decisions” would actually be more predictable since individual randomizers would net out. Only the scientific components would remain, rendering the system less prone to extreme fluctuations.
Certainly, randomization as a decision-making strategy is not new. It has an important role in game theory, the simplest example being the game of Rock-Paper-Scissors. Since an intelligent opponent can potentially learn any deterministic strategy you employ and use it against you, the best way to avoid losing consistently is to randomize your moves. Real-world situations are much more complex than a simple zero-sum game between two opponents. But the merit of randomization goes well beyond not losing to an opponent: it may save us from erroneously herding to our collective detriment.
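The Rock-Paper-Scissors claim can be checked directly: under the standard payoff matrix, the uniformly random mixed strategy earns expected payoff zero against every possible opponent strategy, so there is no pattern for an opponent to learn and exploit. (The particular opponent strategies below are arbitrary examples.)

```python
import numpy as np

# Row player's payoff in Rock-Paper-Scissors: +1 win, -1 loss, 0 tie.
# Rows and columns are ordered rock, paper, scissors.
payoff = np.array([[ 0, -1,  1],
                   [ 1,  0, -1],
                   [-1,  1,  0]])

uniform = np.ones(3) / 3  # the mixed equilibrium: play each move with prob 1/3

# Against ANY opponent strategy, the uniform mix has expected payoff 0.
opponents = [np.array([1.0, 0.0, 0.0]),                       # always rock
             np.array([0.5, 0.5, 0.0]),                       # mix of rock and paper
             np.random.default_rng(1).dirichlet(np.ones(3))]  # an arbitrary mix
for opp in opponents:
    print(round(float(uniform @ payoff @ opp), 10))  # prints 0.0 for every opponent
```

Any deviation from the uniform mix creates a bias an observant opponent can punish; randomizing evenly removes that handle entirely, which is exactly the protection randomization offers against being exploited.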