Why Good Leaders Make Bad Decisions
Decision making lies at the heart of our personal and professional lives. Every day we make decisions. Some are small, domestic, and innocuous. Others are more important, affecting people’s lives, livelihoods, and well-being. Inevitably, we make mistakes along the way. The daunting reality is that enormously important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.
Consider Jürgen Schrempp, CEO of Daimler-Benz. He led the merger of Chrysler and Daimler against internal opposition. Nine years later, Daimler was forced to virtually give Chrysler away in a private equity deal. Steve Russell, chief executive of Boots, the UK drugstore chain, launched a health care strategy designed to differentiate the stores from competitors and grow through new health care services such as dentistry. It turned out, though, that Boots managers did not have the skills needed to succeed in health care services, and many of these markets offered little profit potential. The strategy contributed to Russell’s early departure from the top job. Brigadier General Matthew Broderick, chief of the Homeland Security Operations Center, was responsible for alerting President Bush and other senior government officials if Hurricane Katrina breached the levees in New Orleans. On Monday, August 29, 2005, despite multiple reports of breaches, he reported that the levees seemed to be holding and went home for the night.
The reality is that important decisions made by intelligent, responsible people with the best information and intentions are sometimes hopelessly flawed.
All these executives were highly qualified for their jobs, and yet they made decisions that soon seemed clearly wrong. Why? And more important, how can we avoid making similar mistakes? This is the topic we’ve been exploring for the past four years, and the journey has taken us deep into a field called decision neuroscience. We began by assembling a database of 83 decisions that we felt were flawed at the time they were made. From our analysis of these cases, we concluded that flawed decisions start with errors of judgment made by influential individuals. Hence we needed to understand how these errors of judgment occur.
In the following pages, we will describe the conditions that promote errors of judgment and explore ways organizations can build protections into the decision-making process to reduce the risk of mistakes. We’ll conclude by showing how two leading companies applied the approach we describe. To put all this in context, however, we first need to understand just how the human brain forms its judgments.
How the Brain Trips Up
We depend primarily on two hardwired processes for decision making. Our brains assess what’s going on using pattern recognition, and we react to that information—or ignore it—because of emotional tags that are stored in our memories. Both of these processes are normally reliable; they are part of our evolutionary advantage. But in certain circumstances, both can let us down.
Pattern recognition is a complex process that integrates information from as many as 30 different parts of the brain. Faced with a new situation, we make assumptions based on prior experiences and judgments. Thus a chess master can assess a chess game and choose a high-quality move in as little as six seconds by drawing on patterns he or she has seen before. But pattern recognition can also mislead us. When we’re dealing with seemingly familiar situations, our brains can cause us to think we understand them when we don’t.
What happened to Matthew Broderick during Hurricane Katrina is instructive. Broderick had been involved in operations centers in Vietnam and in other military engagements, and he had led the Homeland Security Operations Center during previous hurricanes. These experiences had taught him that early reports surrounding a major event are often false: It’s better to wait for the “ground truth” from a reliable source before acting. Unfortunately, he had no experience with a hurricane hitting a city built below sea level.
By late on August 29, some 12 hours after Katrina hit New Orleans, Broderick had received 17 reports of major flooding and levee breaches. But he also had gotten conflicting information. The Army Corps of Engineers had reported that it had no evidence of levee breaches, and a late afternoon CNN report from Bourbon Street in the French Quarter had shown city dwellers partying and claiming they had dodged the bullet. Broderick’s pattern-recognition process told him that these contrary reports were the ground truth he was looking for. So before going home for the night, he issued a situation report stating that the levees had not been breached, although he did add that further assessment would be needed the next day.
Emotional tagging is the process by which emotional information attaches itself to the thoughts and experiences stored in our memories. This emotional information tells us whether to pay attention to something or not, and it tells us what sort of action we should be contemplating (immediate or postponed, fight or flight). The importance of emotional tagging becomes clear when the parts of the brain that control emotions are damaged: Neurological research shows that people in this condition become slow and incompetent decision makers, even though they retain the capacity for objective analysis.
Like pattern recognition, emotional tagging helps us reach sensible decisions most of the time. But it, too, can mislead us. Take the case of Wang Laboratories, the top company in the word-processing industry in the early 1980s. Recognizing that his company’s future was threatened by the rise of the personal computer, founder An Wang built a machine to compete in this sector. Unfortunately, he chose to create a proprietary operating system despite the fact that the IBM PC was clearly becoming the dominant standard in the industry. This blunder, which contributed to Wang’s demise a few years later, was heavily influenced by An Wang’s dislike of IBM. He believed he had been cheated by IBM over a new technology he had invented early in his career. These feelings made him reject a software platform linked to an IBM product even though the platform was provided by a third party, Microsoft.
Why doesn’t the brain pick up on such errors and correct them? The most obvious reason is that much of the mental work we do is unconscious. This makes it hard to check the data and logic we use when we make a decision. Typically, we spot bugs in our personal software only when we see the results of our errors in judgment. Matthew Broderick found out that his ground-truth rule of thumb was an inappropriate response to Hurricane Katrina only after it was too late. An Wang found out that his preference for proprietary software was flawed only after Wang’s personal computer failed in the market.
Compounding the problem of high levels of unconscious thinking is the lack of checks and balances in our decision making. Our brains do not naturally follow the classical textbook model: Lay out the options, define the objectives, and assess each option against each objective. Instead, we analyze the situation using pattern recognition and arrive at a decision to act or not by using emotional tags. The two processes happen almost instantaneously. Indeed, as the research of psychologist Gary Klein shows, our brains leap to conclusions and are reluctant to consider alternatives. Moreover, we are particularly bad at revisiting our initial assessment of a situation—our initial frame.
Our brains leap to conclusions and are reluctant to consider alternatives; we are particularly bad at revisiting our initial assessment of a situation.
An exercise we frequently run at Ashridge Business School shows how hard it is to challenge the initial frame. We give students a case that presents a new technology as a good business opportunity. Often, a team works many hours before it challenges this frame and starts, correctly, to see the new technology as a major threat to the company’s dominant market position. Even though the financial model consistently calculates negative returns from launching the new technology, some teams never challenge their original frame and end up proposing aggressive investments.
Raising the Red Flag
In analyzing how good leaders come to make bad judgments, we found that in every case they were affected by three factors that either distorted their emotional tags or encouraged them to see a false pattern. We call these factors “red flag conditions.”
The first and most familiar red flag condition, the presence of inappropriate self-interest, typically biases the emotional importance we place on information, which in turn makes us readier to perceive the patterns we want to see. Research has shown that even well-intentioned professionals, such as doctors and auditors, are unable to prevent self-interest from biasing their judgments of which medicine to prescribe or opinion to give during an audit.
The second, somewhat less familiar condition is the presence of distorting attachments. We can become attached to people, places, and things, and these bonds can affect the judgments we form about both the situation we face and the appropriate actions to take. The reluctance executives often feel to sell a unit they have worked in is a good example of the power of distorting attachments.
The final red flag condition is the presence of misleading memories. These are memories that seem relevant and comparable to the current situation but lead our thinking down the wrong path. They can cause us to overlook or undervalue some important differentiating factors, as Matthew Broderick did when he gave too little thought to the implications of a hurricane hitting a city below sea level. The chance of being misled by memories is intensified by any emotional tags we have attached to the past experience. If our decisions in the previous similar experience worked well, we’ll be all the more likely to overlook key differences.
That’s what happened to William Smithburg, former chairman of Quaker Oats. He acquired Snapple because of his vivid memories of Gatorade, Quaker’s most successful deal. Snapple, like Gatorade, appeared to be a new drinks company that could be improved with Quaker’s marketing and management skills. Unfortunately, the similarities between Snapple and Gatorade proved to be superficial, which meant that Quaker ended up destroying rather than creating value. In fact, Snapple was Smithburg’s worst deal.
Of course, part of what we are saying is common knowledge: People have biases, and it’s important to manage decisions so that these biases balance out. Many experienced leaders do this already. But we’re arguing here that, given the way the brain works, we cannot rely on leaders to spot and safeguard against their own errors in judgment. For important decisions, we need a deliberate, structured way to identify likely sources of bias—those red flag conditions—and we need to strengthen the group decision-making process.
Given the way the brain works, we can’t rely on leaders to spot and safeguard against their own errors in judgment.
Consider the situation faced by Rita Chakra, head of the cosmetics business of Choudry Holdings (the names of the companies and people cited in this and the following examples have been disguised). She was promoted to head of the consumer products division and needed to decide whether to promote her number two into her cosmetics job or recruit someone from outside. Can we anticipate any potential red flags in this decision? Yes: her emotional tags could be unreliable because of a distorting attachment she may have to her colleague or an inappropriate self-interest she could have in keeping her workload down while changing jobs. Of course, we don’t know for certain whether Rita feels this attachment or holds that vested interest. And since the greater part of decision making is unconscious, Rita would not know either. What we do know is that there is a risk. So how should Rita protect herself, or how should her boss help her protect herself?
The simple answer is to involve someone else—someone who has no inappropriate attachments or self-interest. This could be Rita’s boss, the head of human resources, a headhunter, or a trusted colleague. That person could challenge her thinking, force her to review her logic, encourage her to consider options, and possibly even champion a solution she would find uncomfortable. Fortunately, in this situation, Rita was already aware of some red flag conditions, and so she involved a headhunter to help her evaluate her colleague and external candidates. In the end, Rita did appoint her colleague but only after checking to see if her judgment was biased.
We’ve found many leaders who intuitively understand that their thinking or their colleagues’ thinking can be distorted. But few act on that understanding in a structured way, and as a result many fail to provide sufficient safeguards against bad decisions. Let’s look now at a couple of companies that approached the problem of decision bias systematically by recognizing and reducing the risk posed by red flag conditions.
Safeguarding Against Your Biases
A European multinational we’ll call Global Chemicals had an underperforming division. The management team in charge of the division had twice promised a turnaround and twice failed to deliver. The CEO, Mark Thaysen, was weighing his options.
This division was part of Thaysen’s growth strategy. It had been assembled over the previous five years through two large and four smaller acquisitions. Thaysen had led the two larger acquisitions and appointed the managers who were struggling to perform. The chairman of the supervisory board, Olaf Grunweld, decided to consider whether Thaysen’s judgment about the underperforming division might be biased and, if so, how he might help. Grunweld was not second-guessing Thaysen’s thinking. He was merely alert to the possibility that the CEO’s views might be distorted.
Grunweld started by looking for red flag conditions. (For a description of a process for identifying red flags, see the sidebar, “Identifying Red Flags.”) Thaysen built the underperforming division, and his attachment to it might have made him reluctant to abandon the strategy or the team he had put in place. What’s more, because in the past he had successfully supported the local managers during a tough turnaround in another division, Thaysen ran the risk of seeing the wrong pattern and unconsciously favoring the view that continued support was needed in this situation, too. Thus alerted to Thaysen’s possible distorting attachments and potential misleading memories, Grunweld considered three types of safeguards to strengthen the decision process:
Injecting fresh experience or analysis. You can often counteract biases by exposing the decision maker to new information and a different take on the problem. In this instance, Grunweld asked an investment bank to tell Thaysen what value the company might get from selling the underperforming division. Grunweld felt this would encourage Thaysen to at least consider that radical option—a step Thaysen might too quickly dismiss if he had become overly attached to the unit or its management team.
Introducing further debate and challenge. This safeguard can ensure that biases are confronted explicitly. It works best when the power structure of the group debating the issue is balanced. While Thaysen’s chief financial officer was a strong individual, Grunweld felt that the other members of the executive group would be likely to follow Thaysen’s lead without challenging him. Moreover, the head of the underperforming division was a member of the executive group, making it hard for open debate to occur. So Grunweld proposed a steering committee consisting of himself, Thaysen, and the CFO. Even if Thaysen strongly pushed for a particular solution, Grunweld and the CFO would make sure his reasoning was properly challenged and debated. Grunweld also suggested that Thaysen set up a small project team, led by the head of strategy, to analyze all the options and present them to the steering committee.
Imposing stronger governance. The requirement that a decision be ratified at a higher level provides a final safeguard. Stronger governance does not eliminate distorted thinking, but it can prevent distortions from leading to a bad outcome. At Global Chemicals, the governance layer was the supervisory board. Grunweld realized, however, that its objectivity could be compromised because he was a member of both the board and the steering committee. So he asked two of his board colleagues to be ready to argue against the proposal emanating from the steering committee if they felt uncomfortable.
In the end, the steering committee proposed an outright sale of the division, a decision the board approved. The price received was well above expectations, convincing everyone involved that they had chosen the best option.
The chairman of Global Chemicals took the lead role in designing the decision process. That was appropriate given the importance of the decision. But many decisions are made at the operating level, where direct CEO involvement is neither feasible nor desirable. That was the case at Southern Electricity, a division of a larger U.S. utility. Southern consisted of three operating units and two powerful functions. Recent regulatory changes meant that prices could not be raised and might even fall. So managers were looking for ways to cut back on capital expenditures.
Division head Jack Williams recognized that the managers were also risk averse, preferring to replace equipment early with the best upgrades available. This, he realized, was a result of some high-profile breakdowns in the past, which had exposed individuals both to complaints from customers and to criticism from colleagues. Williams believed the emotional tags associated with these experiences might be distorting their judgment.
What could he do to counteract these effects? Williams rejected the idea of stronger governance; he felt that neither his management team nor the parent company’s executives knew enough to do the job credibly. He also rejected additional analysis, because Southern’s analysis was already rigorous. He concluded that he had to find a way to inject more debate into the decision process and enable people who understood the details to challenge the thinking.
His first thought was to involve himself and his head of finance in the debates, but he didn’t have time to review the merits of hundreds of projects, and he didn’t understand the details well enough to challenge decisions any earlier in the process than he already did, at the final approval stage. Williams finally decided to have the unit and function heads challenge one another, facilitated by a consultant. Rather than impose this process on his managers, Williams chose to share his thinking with them. Using the language of red flags, he was able to get them to see the problem without their feeling threatened. The new approach was very successful: the reduced capital-expenditure target was met with room to spare, and Williams did not have to make any of the tough judgment calls himself.
• • •
Because we now understand more about how the brain works, we can anticipate the circumstances in which errors of judgment may occur and guard against them. So rather than rely on the wisdom of experienced chairmen, the humility of CEOs, or the standard organizational checks and balances, we urge everyone involved in important decisions to consider explicitly whether red flags exist and, if they do, to lobby for appropriate safeguards. Decisions that involve no red flags need far fewer checks and balances, and thus less bureaucracy. Some of those resources could then be redirected to the decisions most at risk, which deserve more intrusive and robust safeguards.