A cognitive bias is a systematic pattern of deviation from norm or rationality in judgment. Individuals create their own “subjective reality” from their perception of the input. An individual’s construction of reality, not the objective input, may dictate their behavior in the world.
Four problems give rise to biases:
- Information Overload
- Lack Of Meaning
- The Need To Act Fast
- How To Know What Needs To Be Remembered For Later
In this article, we will look into the second type of Cognitive Bias i.e., “Lack of Meaning” or “Not Enough Meaning”.
The world is very confusing, and we end up only seeing a tiny sliver of it, but we need to make some sense of it in order to survive. Once the reduced stream of information comes in, we connect the dots, fill in the gaps with stuff we already think we know and update our mental models of the world.
Decision Making Biases
“Lack of Meaning” type of cognitive bias is characterized by the following:
1. We find stories and patterns even in sparse data
Since we only get a tiny sliver of the world’s information and also filter out almost everything else, we never have the luxury of having the full story. This is how our brain reconstructs the world to feel complete inside our heads.
A. Confabulation
Confabulation is a type of memory error in which gaps in a person’s memory are unconsciously filled with fabricated, misinterpreted, or distorted information. When someone confabulates, they are confusing things they have imagined with real memories.
A person who is confabulating is not lying. They are not making a conscious or intentional attempt to deceive. Rather, they are confident in the truth of their memories even when confronted with contradictory evidence.
Example: A person with dementia may be able to clearly describe the last time they met with their doctor, even if the scenario they depict never actually happened.
Effects: Confabulation can have many consequences, some more serious than others.
- Counseling ramifications: In addition to family members and friends, confabulation can have a profound impact on mental health professionals. Treating these challenging clients is complicated by the fact that mental health professionals cannot rely on the information provided by their clients, which can result in tedious, repetitive, and frustrating interactions.
- Family impact: A systemic approach to addressing confabulation involves consideration of the impact that confabulation has on family members. As can be expected, confabulation often results in family members feeling sad, fearful, frustrated, or angry. Confabulation also affects the trust between family members.
- Legal considerations: Because many legal processes strongly rely on the memory of a suspect, defendant, or witness, confabulation can have significant consequences in this arena.
- Suggestibility: Individuals who suffer from confabulation may also be prone to suggestibility. Specifically, these individuals may be likely to adopt the statements or views of others when prompted by repeated questioning and negative feedback.
Why Does It Happen?
Confabulation is often the result of brain disease or damage. Several psychological and neurological conditions are associated with confabulation, including:
- Wernicke-Korsakoff syndrome: a neurological disorder associated with severe thiamine deficiency, usually caused by chronic alcoholism
- Alzheimer’s disease: a form of dementia that is associated with memory loss, cognitive impairment, language problems, and other neurological issues
- Traumatic brain injury: damage to specific regions of the brain, such as the inferior medial frontal lobe
- Schizophrenia: a mental health disorder that affects a person’s ability to recognize and understand reality, causing abnormal experiences and behaviors
How To Avoid It?
Research suggests that confabulation can be difficult to treat. The recommended approach to treatment depends on the underlying cause (if it is possible to identify the source).
In some instances, confabulation can be addressed with psychotherapeutic and cognitive-behavioral treatments. These approaches help individuals become more aware of the inaccuracies in their memory.
Techniques that encourage a person to question what they do and do not remember can also be useful. People are asked to respond that they do not know something or that they are not sure rather than confabulate a response.
B. Clustering Illusion
The clustering illusion is the tendency to erroneously consider the inevitable “streaks” or “clusters” arising in small samples from random distributions to be non-random. The illusion is caused by a human tendency to underpredict the amount of variability likely to appear in a small sample of random or pseudorandom data.
Example: If you peek out of your window, can you see any clouds in the sky? What does the cloud look like? Does it resemble a camel, a kite, a group of children, or anything else? If you cannot spot any clouds today, you have surely seen a cloud before that looked exactly like some object or thing. These are examples of the clustering illusion. The funny tricks played by the Impractical Jokers work as an experiment in themselves.
Effects: The Clustering Illusion creates traps for marketers. If they figure out some meaningful patterns in a random jumble of information, they tend to wrongly generalize the same patterns onto a larger dataset. A winning streak may indicate the clustering exercise is sound, but it may also be a statistical anomaly.
Why Does It Happen?
Clustering illusion is caused by the representativeness heuristic, a cognitive shortcut whereby a small sample of data is assumed to be representative of the entire population from which it is derived. The human brain wants to see patterns and trends in data since they’re easier to comprehend and extrapolate conclusions from. In other words, if there seems to be a non-random pattern in a small subset of data, people tend to believe that the entire sample also contains that non-random pattern.
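To make this underprediction of variability concrete, here is a minimal simulation (my own illustrative sketch; the coin-flip setup and numbers are assumptions, not from any study cited here). It estimates how often 20 tosses of a fair coin contain a "streak" of five or more identical outcomes:

```python
import random

def longest_streak(flips):
    """Length of the longest run of identical outcomes in a sequence."""
    best = run = 1
    for prev, cur in zip(flips, flips[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

random.seed(0)
trials = 10_000
# Count how many 20-flip sequences of a fair coin contain a run of 5+.
hits = sum(
    longest_streak([random.choice("HT") for _ in range(20)]) >= 5
    for _ in range(trials)
)
p = hits / trials
print(f"P(streak of 5+ in 20 fair flips) ~ {p:.2f}")
```

Close to half of all such sequences contain a five-long streak, even though most observers would judge such a streak to be "clearly non-random".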
How To Avoid It?
The clustering illusion is easier to overcome than the other flaws of the mind. We tend to identify patterns when we are determined to find one, so the simplest solution is to not try too hard. Do not be like the criminal who commits a crime for no benefit, just to look cool.
- Get more data: Random patterns seem apparent when you have a small set of data. You have to accept that little data is only little data. Either find more data or do not make predictions without enough data.
- Do not attribute too much importance to small data: When you make a prediction based on a small amount of information, know that the chances of making a mistake are high. As much as possible, avoid predicting from small amounts of data unless it is really necessary. Often, no decision is better than a wrong decision.
- Be doubtful: Approach decisions with a fair amount of skepticism. Ask yourself questions and do not convince yourself unless you have compelling proof, or at least good enough data points. At the same time, do not doubt too much; in most cases, you will not have enough evidence to make a certain decision. Find the right balance between being delusional, like seeing aliens in the clouds, and doubting everything, like an insecure partner.
- Perform small experiments: If you have a hunch that seems right, test it such that the consequences are not massive. For example, if you want to buy a stock based on graphs you have analyzed, then buy in small values. Until you confirm that your learning and prediction are right, do not play with big money.
C. Illusory Correlation
Illusory Correlation is when we perceive an association between two variables (events, actions, ideas, etc.) even though no such association actually exists.
Example: An illusory correlation happens when we mistakenly over-emphasize one outcome and ignore the others. For example, let’s say you visit New York City, and someone cuts you off as you’re boarding the subway train. Then, you go to a restaurant and the waiter is rude to you. Finally, you ask someone on the street for directions and they blow you off. When you think back on your trip to New York it is easy to remember these experiences and conclude that “people from New York are rude” or “people in big cities are rude.”
Effects: It can also make us blind to correlations that really do exist. If we are focused on illusory correlations because we believe they are indeed true, we are less likely to look for other correlations that might actually be present. This can lead to missed opportunities and false conclusions.
Why Does It Happen?
There are two sorts of illusory correlations: expectancy-based and distinctiveness-based illusory correlations. The former occurs when we mistakenly see relationships due to our preexisting expectations surrounding them. The latter happens when a relationship is believed to exist between two variables due to focusing too much on information that stands out. Both of these illusory correlations can be attributed to our brain’s use of “heuristics” or mental shortcuts. Evaluating evidence takes time and energy, and so our brain looks for such shortcuts to make the process more efficient.
- The Availability Heuristic: This is our tendency to use information that comes to mind quickly and easily when making decisions about the future. As a result of the availability heuristic, variable pairings that come to mind easily (either because they appear together often, because they are quick to grasp, or because they seem likely) are seen as correlated. For example, if ice cream and gluten intolerance are mentioned together frequently, we might think they are correlated when they aren’t. This pairing is more distinct than others, and will more readily come to mind when we look for correlations than pairings we haven’t seen before.
- Confirmation Bias: Confirmation bias, another cognitive shortcut, occurs when we notice, focus on, and give greater credence to evidence that fits with our existing beliefs. Confirmation bias has been linked to illusory correlation, as we look for relations that confirm our preexisting beliefs about two variables. For example, if we believe flying is dangerous, we are more likely to expect correlations between increased flying and transport-related deaths.
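The distinctiveness account can be made concrete with a toy contingency table in the spirit of classic illusory-correlation experiments (the group names and counts below are my own hypothetical illustration, not data from the article): the rare group paired with the rare behavior is doubly distinctive and feels over-represented, even though the actual rates are identical.

```python
# Hypothetical observation counts: undesirable behavior is rare overall,
# and the "minority" group is observed less often, so the
# minority/undesirable cell is doubly distinctive and memorable.
counts = {
    ("majority", "desirable"): 18, ("majority", "undesirable"): 8,
    ("minority", "desirable"): 9,  ("minority", "undesirable"): 4,
}

def undesirable_rate(group):
    """Fraction of a group's observed behaviors that are undesirable."""
    bad = counts[(group, "undesirable")]
    return bad / (bad + counts[(group, "desirable")])

# Both rates are identical (8/26 == 4/13), so group membership and
# behavior are actually uncorrelated -- yet the distinctive cell
# tends to dominate people's impressions.
print(undesirable_rate("majority"))
print(undesirable_rate("minority"))
```

Because the two rates are exactly equal, any perceived link between the smaller group and the rarer behavior is illusory, driven purely by salience.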
How To Avoid It?
We should try to reduce illusory correlation because of the damaging effects it can have. A 2011 study found that illusory correlations can be reduced by understanding under what conditions our minds tend to misconceive relations. The researchers found that illusory correlations tend to occur “under conditions in which the participant is not personally involved.” In other words, we see false correlations in areas and circumstances that we have little knowledge or personal experience in.
As such, it was concluded that “developing evidence-based educational programs should be effective in helping people detect and reduce their own illusions.” Because we are particularly susceptible to illusory correlations in unfamiliar areas due to our lack of experience in them, we can reduce the bias by becoming more informed in those areas.
2. We fill in characteristics from stereotypes, generalities, and prior histories whenever there are new specific instances or gaps in information
When we have partial information about a specific thing that belongs to a group of things we are pretty familiar with, our brain has no problem filling in the gaps with best guesses or what other trusted sources provide. Conveniently, we then forget which parts were real and which were filled in.
A. Placebo Effect
The placebo effect is defined as a phenomenon in which some people experience a benefit after the administration of an inactive “look-alike” substance or treatment. This substance, or placebo, has no known medical effect. Sometimes the placebo is in the form of a pill (sugar pill), but it can also be an injection (saline solution) or consumable liquid.
Example: If you get sick after eating a specific food, you may associate that food with having been sick and avoid it in the future. Because the associations learned through classical conditioning can affect behavior, they may play a role in the placebo effect.
Effects: While placebos can affect how a person feels, studies suggest that they do not have a significant impact on underlying illnesses. A major review of clinical trials involving placebos found that placebos had no major clinical effects on illnesses. Instead, the placebo effect had a small influence on patient-reported outcomes, particularly of perceptions of nausea and pain.
- Depression: The placebo effect has been found to impact people with major depressive disorder. In one study, participants who weren’t currently taking any other medication were given placebo pills labeled as either fast-acting antidepressants or placebo for one week. After the week, the researchers took PET scans and told the participants they were receiving an injection to improve their mood. Participants who took the placebo labeled as an antidepressant as well as the injection reported decreased depression symptoms and increased brain activity in areas of the brain linked to emotion and stress regulation.
- Pain management: A small 2014 study tested the placebo effect on 66 people with episodic migraine, who were asked to take an assigned pill—either a placebo or Maxalt (rizatriptan), which is a known migraine medication—and rate their pain intensity. Some people were told the pill was a placebo, some were told it was Maxalt, and others were told it could be either. Researchers found that the expectations set by the pill labeling influenced the participants’ responses. Even when Maxalt was labeled as a placebo, participants gave it the same rating as a placebo that was labeled Maxalt.
- Symptom relief: The placebo effect has also been studied on cancer survivors who experience cancer-related fatigue. Participants received three weeks of treatment, either their regular treatment or a pill labeled as a placebo. The study found that the placebo (despite being labeled as such) was reported to improve symptoms while taking the medication and three weeks after discontinuation.
Why Does It Happen?
Why do people experience real changes as a result of fake treatments? While researchers know that the placebo effect is a real effect, they do not yet fully understand how and why this effect occurs. Research is ongoing as to why some people experience changes even when they are only receiving a placebo. A number of different factors may contribute to this phenomenon.
- Hormone Response: One possible explanation is that taking the placebo triggered a release of endorphins. Endorphins have a structure similar to morphine and other opiate painkillers and act as the brain’s own natural painkillers. Researchers have been able to demonstrate the placebo effect in action using brain scans, showing that areas that contain many opiate receptors were activated in both the placebo and treatment groups. Naloxone is an opioid antagonist that blocks both natural endorphins and opioid drugs. After people received naloxone, placebo pain relief was reduced.
- Conditioning: Other possible explanations include classical conditioning, or when you form an association between two stimuli resulting in a learned response. In some cases, a placebo can be paired with an actual treatment until it evokes the desired effect. For example, if you’re regularly given the same arthritis pill to relieve stiff, sore joints, you may begin to associate that pill with pain relief. If you’re given a placebo that looks similar to your arthritis pill, you may still believe it provides pain relief because you’ve been conditioned to do so.
- Expectation: Expectations, or what we believe we will experience, have been found to play a significant role in the placebo effect. People who are highly motivated and expect the treatment to work may be more likely to experience a placebo effect. A prescribing physician’s enthusiasm for treatment can even impact how a patient responds. If a doctor seems very positive that a treatment will have a desirable effect, a patient may be more likely to see benefits from taking the drug. This demonstrates that the placebo effect can even take place when a patient is taking real medications to treat an illness. Verbal, behavioral, and social cues can contribute to a person’s expectations of whether the medication will have an effect.
- Behavioral: The act of taking a pill or receiving an injection to improve your condition
- Social: Reassuring body language, eye contact, and speech from a doctor or nurse
- Verbal: Listening to a health care provider talk positively about treatment
- Genetics: Genes may also influence how people respond to placebo treatments. Some people are genetically predisposed to respond more to placebos. One study found that people with a gene variant that codes for higher levels of the brain chemical dopamine are more prone to the placebo effect than those with the low-dopamine version. People with the high-dopamine version of this gene also tend to have higher levels of pain perception and reward-seeking.
B. Just-World Hypothesis
The just-world phenomenon is the tendency to believe that the world is just and that people get what they deserve. Because people want to believe that the world is fair, they will look for ways to explain or rationalize away injustice, often blaming the person in a situation who is actually the victim.
The just-world phenomenon helps explain why people sometimes blame victims for their own misfortune, even in situations where people had no control over the events that have befallen them.
Example: More modern examples of the just-world phenomenon can be seen in many places. The poor may be blamed for their circumstances and victims of sexual assault are often blamed for their attack, as others suggest that it was the victim’s own behavior that caused the assault.
Effect: The Just-World Hypothesis can have the following effects:
- Individual effects: On an individual level, there are ups and downs to the just-world hypothesis (also referred to as the just-world bias or just-world fallacy). Belief in a just world can motivate us to act with morality and integrity, which is commonly thought of as ‘keeping good karma’. However, the world is not always as righteous as we would hope. By holding tightly to the just-world hypothesis in the face of injustice, we are susceptible to making inaccurate conclusions and judgments about the world around us.
- Systemic effects: The way we decide what deserves punishment and what merits reward dictates how we see the world. This outlook, shared by most people to varying degrees, has significant effects on political and legal outcomes. Individual variances in the cognitive strength of the just-world hypothesis (how much we believe that the world is truly just) and response to apparent injustices (i.e. rationalizing, ignoring, or intervening) is echoed in political opinions, especially regarding attitudes towards political leaders, attitudes towards victims, and attitudes towards social activism.
Why Does It Happen?
We are socialized to believe that good is always rewarded and evil is punished. From early childhood, we read stories of courageous heroes saving the day and being rewarded with keys to the city, while villains are slain or banished. In these stories, the characters always reap what they sow. Research has shown that we develop this expectation of inherent justice in the world relatively early on.
As humans, we are often faced with an overwhelming amount of information. To make sense of our surroundings, we construct cognitive frameworks to guide our decision-making and predict outcomes. The just-world hypothesis serves as one of these frameworks, creating an understanding of positive and negative occurrences by attributing them to a larger karmic cycle.
How To Avoid It?
Behavioral scientists Daniel Kahneman and Amos Tversky propose two disparate modes of thinking.
- System 1 refers to our knee-jerk responses, our quickly-made judgments, our emotional reactions.
- System 2 refers to a slower, more rational, more calculated thinking process.
Many of our biases are elicited through System 1 thinking, including the just-world hypothesis.
By understanding these two systems of thinking, we are better equipped to resist biases. Awareness of the dual-process mode of thinking helps us consciously hone in on the more analytic, System 2 type of thinking. A survey of various debiasing techniques found that they all shared a common thread of deliberately moving from System 1 thinking to System 2. Slowing down the process by which we make our judgments and considering all of the information at hand allows us to make better decisions.
With the just-world hypothesis, System 2 thinking means taking a step back to prevent ourselves from making distorted assessments. Sometimes after looking at the full picture we will still support our initial conclusion. Maybe we still feel that the punishment or reward at hand was warranted, and that is okay too. Working on de-biasing the just-world hypothesis does not mean telling ourselves that the world is never just. What we want to open our minds to is a new way of dealing with cognitive dissonance instead of always taking the easiest route. By simply using System 2 thinking, we can think critically, rather than instinctually. This will allow us to clearly see injustices and better prepare ourselves and the world around us to combat them.
So how do we slow down and start using System 2 thinking? The answer is less clear-cut. Just as when we are learning a new physical skill, building positive mental practices takes time and repetition. We now know what the just-world hypothesis is and how it happens, so we can be more aware of it in ourselves. At first, we might only retroactively realize that we were thinking in a biased manner, say, after making a quick judgment about someone. Through examining our intuitive judgments and looking at the larger picture, we can cultivate proactive System 2 thinking.
One tool we can use to combat the negative attitudes towards victims that the just-world hypothesis sometimes unknowingly produces is empathy.
C. Bandwagon Effect
The bandwagon effect is the term used to describe the tendency for people to adopt certain behaviors, styles, or attitudes simply because others are doing so. More specifically, it is a cognitive bias by which public opinion or behaviours can alter due to particular actions and beliefs rallying amongst the public.
Examples: Below are some examples of the Bandwagon Effect:
- Diets: When it seems like everyone is adopting a certain fad diet, people become more likely to try the diet themselves.
- Elections: People are more likely to vote for the candidate that they think is winning.
- Fashion: Many people begin wearing a certain style of clothing as they see others adopt the same fashions.
- Music: As more and more people begin listening to a particular song or musical group, it becomes more likely that other individuals will listen as well.
- Social Networks: As increasing numbers of people start using certain online social networking websites, other individuals become more likely to begin using those sites as well. The bandwagon effect can also influence how posts are shared as well as interactions within online groups.
Effect: The impact of these bandwagon trends is often relatively harmless, such as in fashion, music, or pop culture fads. Sometimes they can be far more dangerous. When certain ideas begin to take hold, such as particular attitudes toward health issues, bandwagon beliefs can have serious and damaging consequences.
Some negative or even dangerous examples of the bandwagon effect:
Individuals who were influenced by the anti-vaccination movement, for example, became less likely to get routine childhood immunizations for their children. This large-scale avoidance of vaccinations has been linked to disease outbreaks.
Researchers have found that when people learn that a particular candidate is leading in the polls, they are more likely to change their vote to conform to the winning side.
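One simple way to see how popularity can feed on itself is a Polya-urn toy model (my own sketch for illustration, not a model from the research mentioned above): each new voter backs a candidate with probability proportional to that candidate's current support, so early, essentially random choices get amplified.

```python
import random

def polya_election(initial_a, initial_b, new_voters, seed):
    """Bandwagon voting: each new voter backs a candidate with
    probability proportional to that candidate's current support."""
    rng = random.Random(seed)
    a, b = initial_a, initial_b
    for _ in range(new_voters):
        if rng.random() < a / (a + b):
            a += 1
        else:
            b += 1
    return a / (a + b)  # final share of candidate A

# Starting from a small 6-to-4 lead, different runs settle at very
# different final shares: the outcome is largely shaped by the early
# random choices that later voters pile onto.
shares = [polya_election(6, 4, 1000, seed=s) for s in range(5)]
print([round(s, 2) for s in shares])
```

The point of the sketch is not the specific numbers but the dynamic: once a side looks popular, the popularity itself recruits further supporters.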
Why Does It Happen?
Some of the factors that can influence the bandwagon effect include:
- Groupthink: The bandwagon effect is essentially a type of groupthink. As more people adopt a particular fad or trend, the more likely it becomes that other people will also “hop on the bandwagon.” When it seems that everyone is doing something, there is tremendous pressure to conform, which is perhaps why bandwagon behaviors tend to form so easily.
- A Desire to Be Right: People want to be right. They want to be part of the winning side. Part of the reason people conform is that they look to other people in their social group for information about what is right or acceptable. If it seems like everyone else is doing something, then people are left with the impression that it is the correct thing to do.
- A Need to be Included: Fear of exclusion also plays a role in the bandwagon effect. People generally do not want to be the odd one out, so going along with what the rest of the group is doing is a way to ensure inclusion and social acceptance.
How To Avoid It?
Since the bandwagon effect is a cognitive bias, you can reduce its impact on you and on others by using appropriate debiasing techniques, which help you think and act in a rational manner. Such techniques include the following:
- Create distance from the bandwagon cues: For example, you can create physical distance from those cues by moving away from people who exert peer pressure before you make a decision, or you can create temporal distance by waiting for a day after talking to people before you make a decision.
- Create optimal conditions for judgment and decision-making: For example, before you make a decision that might be influenced by the bandwagon effect, go somewhere quiet, where you can properly concentrate while thinking about the situation.
- Slow down your reasoning process: This involves taking time to think through the situation in a slow and analytical manner, rather than relying on intuition or hurried reasoning.
- Make your reasoning process explicit: For example, if you’re debating whether to follow a certain course of action that’s associated with bandwagon cues, you can explicitly list its pros and cons, and then clearly verbalize what decision you’ve made and why.
- Hold yourself accountable for your decisions: Remind yourself that ultimately, you’re responsible for any decision that you make, even if that decision is prompted by the bandwagon effect or other types of social influence.
- Examine the bandwagon: For example, try to identify who’s promoting it and why they’re doing so (e.g. a marketer is promoting it because they’re trying to get people to buy their product).
- Recall similar situations in which the bandwagon effect played a role: Thinking of similar situations in which you experienced the bandwagon effect can help you assess its current influence on you, identify the potential consequences of that influence, and remember that just because something appears popular, that doesn’t mean it’s right or the best course of action.
- Consider alternative options: For example, try to identify an alternative course of action other than the one suggested by the bandwagon cues, and consider its potential advantages.
- Create psychological self-distance: When considering how you should act in light of bandwagon cues, you can improve your ability to think rationally by creating psychological self-distance, for example by using self-distancing language and asking yourself “what should you do in this situation?”.
- Visualize the consequences of your decisions: Specifically, consider what the consequences would be if you followed the course of action suggested by the bandwagon effect, in terms of factors such as what would happen and how you would feel.
- Elicit external feedback: For example, you can talk to a trusted individual, who isn’t likely to be influenced by the particular bandwagon effect that you’re worried about, and ask them what they think about your reasoning process.
3. We imagine things and people we’re familiar with or fond of as better than things and people we aren’t familiar with or fond of
Similar to the above but the filled-in bits generally also include built-in assumptions about the quality and value of the thing we’re looking at.
A. Halo Effect
The Halo Effect is the tendency for positive impressions of a person, company, brand, or product in one area to positively influence one’s opinion or feelings in other areas.
Example: An example of the halo effect is when one assumes that a good-looking person in a photograph is also an overall good person. This error in judgment reflects one’s individual preferences, prejudices, ideology, and social perception.
Effect: The halo effect may have an impact on a number of real-world settings.
- In Education: Research has found that the Halo Effect may play a role in educational settings. Teachers may interact with students differently based on perceptions of attractiveness. The halo effect can influence how teachers treat students, but it can also impact how students perceive teachers.
- In the Workplace: There are a number of ways that the halo effect can influence the perceptions of others in work settings. For example, experts suggest that the halo effect is one of the most common biases affecting performance appraisals and reviews. Supervisors may rate subordinates based on the perception of a single characteristic rather than the whole of their performance and contribution.
- In Marketing: Marketers take advantage of the halo effect to sell products and services. When a celebrity spokesperson endorses a particular item, our positive evaluations of that individual can spread to our perceptions of the product itself.
Why Does It Happen?
The Halo Effect occurs because human social perception is a constructive process. When we form impressions of others, we do not solely rely on objective information, but we actively construct an image that fits in with what we already know. As a result, our general perceptions of people and things skew our ability to make judgments on other characteristics.
How To Avoid It?
While the halo effect may seem like an abstract concept that is hard to actively notice, there are many ways we can attempt to avoid the bias.
- Cognitive Debiasing: To minimize the influence of the bias, one can look to various cognitive debiasing techniques such as slowing down one’s reasoning process. For example, if you are aware of the halo effect, you can mitigate the effect of the bias by trying to create two possible impressions of people when you first meet them. Eventually, once you gain more information about the person, you will be able to choose which original impression was closest to how you have now come to see them.
- The Horns Effect: Although we should maintain an awareness of the halo effect, we should also look out for when the bias works in reverse—a psychological process called the horns effect. This cognitive bias causes our negative impression of someone or something in one area to change our impression of them in other areas. For example, if someone does not like the way a product looks, they will not buy the product despite the potential benefit that it could bring them.
B. Cheerleader Effect
The Cheerleader Effect, also known as the group attractiveness effect, is the cognitive bias that causes people to think individuals are more attractive when they are in a group.
Example: A woman might look at a photo of a football team and believe that this is an incredibly handsome group of men. However, shown the same individuals in single photos, she will be more likely to see their physical flaws and rate them as less attractive.
Why Does It Happen?
This effect arises due to the interplay of three cognitive phenomena:
- The human visual system takes “ensemble representations” of faces in a group.
- Perception of individuals is biased towards this average.
- Average faces are more attractive, perhaps due to the “averaging out of unattractive idiosyncrasies”.

When all three of these phenomena are taken together, individual faces seem more attractive in a group, as they appear more similar to the average group face, which is more attractive than the members’ individual faces.
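The averaging mechanism described above can be sketched numerically. This is a toy illustration, not a model of real face perception: faces are stand-in feature vectors, and attractiveness is simply assumed to increase with proximity to the population-average face.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model: each "face" is a feature vector; we assume attractiveness
# rises as a face gets closer to the population-average face.
population_mean = np.zeros(5)

def attractiveness(face):
    # Higher (less negative) when closer to the population average.
    return -np.linalg.norm(face - population_mean)

group = rng.normal(size=(6, 5))   # six faces, five features each
ensemble = group.mean(axis=0)     # the "ensemble representation"

individual_scores = [attractiveness(f) for f in group]
print("mean individual score:", np.mean(individual_scores))
print("ensemble face score:  ", attractiveness(ensemble))
# By the triangle inequality, the averaged face is always at least as
# close to the population mean as the individual faces are on average,
# so the ensemble face never scores lower.
```

The design point is that averaging cancels idiosyncratic deviations, which is exactly the “averaging out of unattractive idiosyncrasies” the explanation refers to.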
C. Reactive Devaluation
Reactive devaluation refers to our tendency to disparage proposals made by another party, especially if this party is viewed as negative or antagonistic. This cognitive bias can serve as a major barrier in negotiations.
Example: A plan or idea is proposed by another employee with whom you’ve disagreed in the past. Even if the plan is sound, you may instinctively doubt its chances of success.
Effect: Reactive Devaluation can have the following effects:
- Individual effects: We inevitably face conflict in all realms of our lives. Whether we are in line for the grocery store, at home with friends and family, or in the workplace with colleagues, our wants and needs can clash with those of others. The ability to resolve disputes and negotiate terms is a necessary but difficult skill to cultivate. For many, reactive devaluation can serve as a significant cognitive hurdle in conflict resolution. If we are unable to hear out and objectively consider the proposals of others, we may find ourselves in harmful stalemates and costly circumstances.
- Systemic effects: The cognitive biases we experience on an individual level reverberate within our institutions and social systems. These irrational tendencies become embedded in institutional psychology and large-scale global conflicts. Negotiation between laborers and management shapes wages, working conditions, and company profits. Even further, negotiation between warring countries determines the fate of many. Thus, barriers to negotiation like reactive devaluation can have life-altering consequences.
Why Does It Happen?
We can rationally infer that someone offering concessions must also require significant gains in return. Yet, the rejection or devaluation based solely on that knowledge can be entirely irrational and result in poor decision-making. Depending on the situation, there are various cognitive processes underlying reactive devaluation.
- We can be limited by a “zero-sum” perspective: In situations of extreme enmity, the two parties can view the conflict as “zero-sum”, meaning the two sides are so diametrically opposed that a gain for one party equals a loss for the other. We can see this type of conflict in the common image of a scale weighing good and evil: any win for “evil” results in a loss for “good”. Thus, in a zero-sum game, any proposal made by an adversary is typically distrusted and dismissed. While oppositional tension can make the conflict feel black-and-white, it is often not truly the case. This perspective can fuel reactionary thinking and block resolution at the cost of both sides.
- We want what we can’t have: The popular saying “the grass is always greener on the other side of the fence” captures our tendency to desire what is just out of reach. Stanford psychology researchers Lee Ross and Constance Stillinger found evidence that such a shift in preferences is another cognitive underpinning of reactive devaluation: once the other side actually offers a concession, it becomes attainable and therefore starts to seem less valuable.
- We weigh our losses heavily: In their paper, behavioral economists Daniel Kahneman and Amos Tversky demonstrated that the displeasure we feel from a loss is much greater than the pleasure we feel from a gain of equal magnitude. The resulting fear of losses, otherwise known as loss aversion, is rooted in an evolutionarily advantageous instinct to take threats seriously and avoid potentially costly circumstances. However, in situations where our survival is not on the line, loss aversion can hinder us from making smart decisions and prevent us from taking gambles that are actually in our favor. Negotiations often entail making concessions in order to get certain things that we want. In response to such asks, our aversion to losses can cause us to devalue a proposal off the bat. As much as it hurts to give up some of our power or resources, it is ultimately to our benefit to view the negotiation within a broader scope.
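The asymmetry Kahneman and Tversky describe can be made concrete with the value function from prospect theory. The sketch below uses the parameter estimates reported in their 1992 cumulative prospect theory paper; treat the exact numbers as illustrative rather than definitive.

```python
# Prospect theory value function: v(x) = x^alpha for gains,
# v(x) = -lambda * (-x)^beta for losses. Parameter values are the
# 1992 Tversky-Kahneman estimates, used here purely for illustration.
ALPHA = 0.88   # diminishing sensitivity for gains
BETA = 0.88    # diminishing sensitivity for losses
LAMBDA = 2.25  # loss-aversion coefficient: losses loom ~2.25x larger

def subjective_value(x):
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** BETA)

gain = subjective_value(100)    # pleasure from gaining 100
loss = subjective_value(-100)   # displeasure from losing 100
print(gain, loss)
# The loss outweighs the equal-sized gain by the factor LAMBDA, which
# is why conceding something in a negotiation "hurts" more than the
# matching gain feels good.
```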
4. We simplify probabilities and numbers to make them easier to think about
Our subconscious mind is poor at math and, when any data is missing, routinely misjudges the likelihood of events.
A. Zero-Sum Bias
Zero-sum thinking perceives situations as zero-sum games, where one person’s gain would be another’s loss. The term is derived from game theory. However, unlike the game theory concept, zero-sum thinking refers to a psychological construct—a person’s subjective interpretation of a situation.
Example: A child might mistakenly believe that they are in a zero-sum situation when it comes to the love that their parents feel toward them and their siblings, meaning that the love felt toward one child must come at the expense of the love felt toward the others.
Effect: Zero-sum bias is a cognitive bias towards zero-sum thinking; it is people’s tendency to intuitively judge that a situation is zero-sum, even when this is not the case. This bias promotes zero-sum fallacies, false beliefs that situations are zero-sum. Such fallacies can cause other false judgments and poor decisions. In economics, the “zero-sum fallacy” generally refers to the fixed-pie fallacy.
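The game-theory distinction the term comes from can be sketched in a few lines. The payoff numbers below are invented purely for illustration: in a zero-sum game every outcome’s payoffs sum to zero, while in a positive-sum interaction such as voluntary trade both sides can gain.

```python
# Matching pennies: a classic zero-sum game, every cell sums to 0.
matching_pennies = {
    ("heads", "heads"): (1, -1),
    ("heads", "tails"): (-1, 1),
    ("tails", "heads"): (-1, 1),
    ("tails", "tails"): (1, -1),
}

# A positive-sum interaction: mutual trade leaves both players better off.
trade = {
    ("trade", "trade"): (3, 3),
    ("trade", "refuse"): (0, 0),
    ("refuse", "trade"): (0, 0),
    ("refuse", "refuse"): (0, 0),
}

def is_zero_sum(game):
    # A game is zero-sum iff every outcome's payoffs cancel exactly.
    return all(a + b == 0 for a, b in game.values())

print(is_zero_sum(matching_pennies))  # True
print(is_zero_sum(trade))             # False: one side's gain need not
                                      # be the other side's loss
```

Zero-sum bias amounts to treating situations like the second game as if they were the first.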
Why Does It Happen?
People experience the zero-sum bias for a variety of reasons, including a mistaken assumption that certain resources are limited, a mistaken belief in trade-off consistency, an overreliance on common correlations, an overreliance on previous experiences, and an inability to see other people’s perspectives.
How To Avoid It?
To reduce the degree to which you experience the zero-sum bias, first identify cases where you assume that a situation is zero-sum. Then assess the situation rationally to determine whether it actually is zero-sum, for example by asking yourself whether the resource under consideration is truly limited.
B. Survivorship Bias
Survivorship bias or survival bias is the logical error of concentrating on the people or things that made it past some selection process and overlooking those that did not, typically because of their lack of visibility. This can lead to some false conclusions in several different ways. It is a form of selection bias.
Example: If three of the five students with the best college grades went to the same high school, that can lead one to believe that the high school must offer an excellent education when, in fact, it may simply be a much larger school. This can be better understood by looking at the grades of all the other students from that high school, not just the ones who survived the top-five selection.
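The high-school example can be checked with a small simulation. The school sizes and the grade distribution below are invented assumptions; the point is only that two identical schools of different sizes will not be equally represented in a small “top performers” sample.

```python
import random

random.seed(1)

def simulate(trials=10_000):
    """Both schools draw grades from the SAME distribution; count how
    often the larger school fills the 'top five' slots anyway."""
    big_school_in_top5 = 0
    for _ in range(trials):
        big = [("big", random.gauss(70, 10)) for _ in range(1000)]
        small = [("small", random.gauss(70, 10)) for _ in range(100)]
        top5 = sorted(big + small, key=lambda s: s[1], reverse=True)[:5]
        big_school_in_top5 += sum(1 for school, _ in top5 if school == "big")
    return big_school_in_top5 / (5 * trials)

print(simulate(200))  # fraction of top-five slots held by the big school
# Expect roughly 1000/1100, about 0.91, even though the two schools
# "teach" identically; selecting on survivors of the top-five cut
# makes the big school look better than it is.
```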
Effect: Generally speaking, survivorship bias tends to produce conclusions that are overly optimistic and may not be representative of real-life environments. The bias occurs because the “surviving” observations often owe their survival to stronger-than-average resilience to difficult conditions, while the observations that ceased to exist under those same conditions are left out of the analysis.
Why Does It Happen?
A subtler source of survivorship bias appears when society turns its attention to successful individuals. Often our attention is drawn to people who achieve success ‘despite the odds’ or ‘take big risks’.
For example, a number of today’s billionaires – Bill Gates and Mark Zuckerberg, for example – achieved their success despite never going to or finishing university, a fact that has attracted considerable media attention.
How To Avoid It?
To prevent survivorship bias, researchers must be very selective with their data sources, ensuring that the sources they choose do not omit observations that are no longer in existence.
C. Normalcy Bias
Normalcy bias, or normality bias, is a cognitive bias that leads people to disbelieve or minimize threat warnings. Consequently, individuals underestimate the likelihood of a disaster, when it might affect them, and its potential adverse effects.
Example: When disaster strikes, some people lose their heads and some become cool and effective, but most people act as if they’ve suddenly forgotten the disaster, behaving in surprisingly mundane ways right up until it’s too late.
Effect: It can result in the inability of people to cope with a disaster once it occurs. People with a normalcy bias have difficulties reacting to something they have not experienced before. People also tend to interpret warnings in the most optimistic way possible, seizing on any ambiguities to infer a less serious situation.
Why Does It Happen?
The normalcy bias may be caused in part by the way the brain processes new data. Research suggests that even when the brain is calm, it takes 8-10 seconds to process new information. Stress slows the process, and when the brain cannot find an acceptable response to a situation, it fixates on a single and sometimes default solution that may or may not be correct. An evolutionary reason for this response could be that paralysis gives an animal a better chance of surviving an attack; predators are less likely to see prey that is not moving. In the fast-moving high-tech world, however, this default reaction can lead to disaster.
How To Avoid It?
Normalcy bias can be reduced to a large extent by the following:
- Be much more pessimistic about the possibility and impact of disasters than you intuitively feel or can easily imagine, in order to overcome the bias’s pull toward complacency.
- Use effective strategic planning techniques to scan for potential disasters and address them in advance.
- Since you can’t predict everything, retain some extra capacity in your system (time, money, and other resources) that you can use to deal with unknown unknowns, also called black swans.
- Finally, if you see a hint of a disaster, react much more quickly than you intuitively feel you should, to overcome the gut reaction’s dismissal of the likelihood and impact of disasters.