Handling Complexity

Complex systems have always existed, of course—and business life has always featured the unpredictable, the surprising, and the unexpected. But complexity has gone from something found mainly in large systems, such as cities, to something that affects almost everything we touch: the products we design, the jobs we do every day, and the organizations we oversee. Most of this increase has resulted from the information technology revolution of the past few decades. Systems that used to be separate are now interconnected and interdependent, which means that they are, by definition, more complex.

Complex organizations are far more difficult to manage than merely complicated ones. It’s harder to predict what will happen, because complex systems interact in unexpected ways. It’s harder to make sense of things, because the degree of complexity may lie beyond our cognitive limits. And it’s harder to place bets, because the past behavior of a complex system may not predict its future behavior. In a complex system the outlier is often more significant than the average.

Making matters worse, our analytic tools haven’t kept up. Collectively we know a good deal about how to navigate complexity—but that knowledge hasn’t permeated the thinking of most of today’s executives or the business schools that teach tomorrow’s managers. How can we bring that knowledge to the fore?

Let’s take a close look at what complexity is, the problems it raises, and how those problems can be addressed.

COMPLICATED VERSUS COMPLEX

It’s easy to confuse the merely complicated with the genuinely complex. Managers need to know the difference: If you manage a complex organization as if it were just a complicated one, you’ll make serious, expensive mistakes.

Let’s back up and start with simple systems. These contain few interactions and are extremely predictable. Think of switching a light on and off: The same action produces the same result every time.

Complicated systems have many moving parts, but they operate in patterned ways. The electrical grid that powers the light is complicated: There are many possible interactions within it, but they usually follow a pattern. It’s possible to make accurate predictions about how a complicated system will behave. For instance, flying a commercial airplane involves complicated but predictable steps, and as a result it’s astonishingly safe. Implementing a Six Sigma process can be complicated, but the inputs, practices, and outputs are relatively easy to predict.
Complex systems, by contrast, are imbued with features that may operate in patterned ways but whose interactions are continually changing. Three properties determine the complexity of an environment. The first, multiplicity, refers to the number of potentially interacting elements. The second, interdependence, relates to how connected those elements are. The third, diversity, has to do with the degree of their heterogeneity. The greater the multiplicity, interdependence, and diversity, the greater the complexity. An organic growth program, for example, is highly complex—it contains a large number of interactive, interdependent, diverse elements.

Practically speaking, the main difference between complicated and complex systems is that with the former, one can usually predict outcomes by knowing the starting conditions. In a complex system, the same starting conditions can produce different outcomes, depending on the interactions of the elements in the system. Air traffic control, a complex system, constantly changes in reaction to weather, aircraft downtime, and so on. The system is predictable not because it produces the same results from the same starting conditions but because it has been designed to continuously adjust as its components change in relation to one another.

It’s possible to understand both simple and complicated systems by identifying and modeling the relationships between the parts; the relationships can be reduced to clear, predictable interactions. It’s not possible to understand complex systems in this way, because all the elements are interacting continuously and unpredictably.

IMPROVED FORECASTING METHODS

Drop certain forecasting tools.
Embedded in many analytic tools are two assumptions that don’t hold for complex systems. The first is that observations of phenomena are truly independent; this is often not the case in complex systems, with their highly interconnected parts. (Think of the well-known “butterfly effect,” when something small that happens early in a chain of events causes disproportionate consequences by the end.) The second is that it’s possible to extrapolate averages or medians to entire populations. Take a controversial case in medicine—the U.S. Food and Drug Administration’s deliberations (ongoing as of this writing) over whether to withdraw approval for the use of the drug Avastin in treating breast cancer. The issue has caused an uproar among the estimated 17,000 U.S. women who take the medication. Follow-up clinical trials revealed some potentially serious side effects and failed to show that the drug helps the statistically average patient. However, many doctors and patients have suggested that it prolongs life and improves quality of life in certain patients and completely cures a few. Cancer treatment is a complex system, but the agency is applying the logic of a complicated one.
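
To see why extrapolating from the average can mislead, consider a deliberately simple simulation. The numbers below are invented for illustration and are not drawn from the Avastin trials; they merely show how a treatment that does nothing for the statistically average patient can still matter enormously to a small subgroup.

```python
import random
import statistics

random.seed(42)

# Hypothetical population: a small subgroup of "responders" benefits
# greatly; everyone else sees no real effect, just noise.
def simulated_benefit():
    if random.random() < 0.05:        # rare responder subgroup
        return random.gauss(24, 6)    # large benefit (e.g., months gained)
    return random.gauss(0, 3)         # average patient: no real effect

patients = [simulated_benefit() for _ in range(10_000)]

print(f"mean benefit:   {statistics.mean(patients):5.2f}")
print(f"median benefit: {statistics.median(patients):5.2f}")
print(f"patients gaining more than 12: {sum(b > 12 for b in patients)}")
```

Judged by the mean or the median, the treatment looks worthless; judged by the full distribution, several hundred of the 10,000 simulated patients gain a great deal. Averages flatten exactly the variation that matters most in a complex system.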

In business, the problem shows up when companies try to predict customer behavior on the basis of average responses. On average, people loved New Coke, but the product ultimately flopped. It shows up when they fail to consider that outliers are often more interesting than the average case. And it shows up when they fail to account for the future importance of early events. Boston Scientific paid a huge amount for the cardiovascular device manufacturer Guidant, despite revelations during the bidding process of quality problems and cover-ups. Had it understood that those revelations signaled deeper problems going back many years, it could have avoided overpaying for a company it then had to pour vast resources into fixing. Boston Scientific’s stock has yet to recover.
And in complex systems, events far from the median may be more common than we think. Tools that assume outliers to be rare can obscure the wide variations contained in complex systems. In the U.S. stock market, the 10 biggest one-day moves accounted for half the market returns over the past 50 years. Only a handful of analysts entertained the possibility of so many significant spikes when they constructed their predictive models.
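
A rough simulation makes the point. The sketch below compares a thin-tailed (normal) model of daily returns with a crude heavy-tailed one; the parameters are invented, not fitted to market data, but the contrast mirrors the effect described above.

```python
import math
import random

random.seed(7)
DAYS = 252 * 50  # roughly 50 years of trading days

def thin_tailed_day():
    return random.gauss(0.0003, 0.008)   # extremes are vanishingly rare

def heavy_tailed_day():
    # Mostly quiet days plus occasional large jumps -- a crude stand-in
    # for the fat tails that real markets exhibit.
    if random.random() < 0.002:
        return random.gauss(0.0, 0.12)
    return random.gauss(0.0003, 0.008)

for label, model in [("thin-tailed model", thin_tailed_day),
                     ("heavy-tailed model", heavy_tailed_day)]:
    returns = [model() for _ in range(DAYS)]
    best10 = set(sorted(range(DAYS), key=lambda i: returns[i], reverse=True)[:10])
    wealth_all = math.prod(1 + r for r in returns)
    wealth_without = math.prod(1 + r for i, r in enumerate(returns)
                               if i not in best10)
    print(f"{label:18s}  ending wealth {wealth_all:7.2f}  "
          f"without the 10 best days {wealth_without:7.2f}")
```

In the thin-tailed world, dropping the ten best days barely dents the result; in the heavy-tailed one, a handful of days carry a large share of the outcome. Models that treat outliers as negligible quietly assume the first world when you may be living in the second.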

Simulate the behavior of a system.
Instead of extrapolating from irrelevant medians, look for modeling that will give you insight into the system and the ways in which its various elements interact. Examples include the customer-relationship-management models used by telecommunications companies to anticipate a person’s vulnerability to defection, and the data-mining tools used to predict consumer responses to various types of advertising. Further, make sure that your forecasting models incorporate low-probability but high-impact extremes. The complexity researchers Pierpaolo Andriani and Bill McKelvey observed that 16,000 minor earthquakes occur in California every year, but a really big one happens only once every 150 or 200 years. The average earthquake, then, is not very dangerous. It would be foolhardy, though, to base building codes on the average quake when what matters most is the big one. So, too, in business: What matters most may be the extreme but rare possibility, not the most likely one.
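
A back-of-the-envelope calculation shows why. The event rates below echo the figures cited above (about 16,000 minor quakes a year, one huge quake every 175 years or so); the damage estimates are invented purely for illustration.

```python
# Toy expected-loss arithmetic: (events per year) x (damage per event).
quake_classes = [
    # label, events per year, hypothetical damage per event ($ millions)
    ("minor quake", 16_000,       0.001),
    ("huge quake",  1 / 175, 100_000.0),
]

for label, per_year, damage_m in quake_classes:
    print(f"{label:12s} expected damage per year: ${per_year * damage_m:8,.0f}M")
```

Even with made-up damage figures, the rare event dominates the expected annual loss. Building codes, like business plans, should be written for the event that matters, not the event that is most frequent.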

Use three types of predictive information.
If it’s impossible to predict the future in a complex system with a high degree of accuracy, and if organizations must nonetheless place bets with the future in mind, what’s the wisest course for leaders who need to put some stakes in the ground? How can they find a happy medium between excessive and convoluted scenarios about what might happen and linear predictions that are over-reliant on past knowledge? We advise managers to be explicit about what they think will be applicable from past experience and what might be different this time around. One way to do this is to divide your data among three buckets:

• Lagging: data about what has already happened. Most financial metrics and key performance indicators fall into this bucket.

• Current: data about where you stand right now. Your pipeline of opportunities might be in this bucket.

• Leading: data about where things could go and how the system might respond to a range of possibilities.

If the bulk of your information is in the lagging bucket, that’s a warning sign. Basing decisions mainly on lagging indicators is essentially betting that the future will be like the past. At least some of your information should be in the leading bucket. This information will be fuzzy and subjective by definition: The future hasn’t happened yet. But without it, you’re apt to be blindsided by change.

For an example of how the leading bucket prompted action to avert a possible system failure, recall the Y2K dilemma—the concern that computers would go haywire at the turn of the century because many used a two-digit year format. Early programmers expected that the software they created would be completely overhauled long before the millennium rolled over, but many critical legacy systems using the two-digit format remained (a fact we would place in the lagging bucket). The catastrophic scenarios in the leading bucket were so vivid and plausible that enormous efforts were made to bring complex computer systems into compliance before the year 2000 arrived (the plans to this end would be placed in the current bucket). When the time came, only a handful of problems surfaced, most of them minor.
Note that while the bucket tool simplifies reality, it doesn’t assume away complexity, unlike traditional forecasting tools.
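
The bucket test can even be made mechanical. Here is a minimal sketch with hypothetical metric names: tag each item of management information by bucket, then flag a dashboard that leans too heavily on the past.

```python
from collections import Counter

# Hypothetical dashboard: each metric tagged with its bucket.
dashboard = {
    "quarterly revenue":                      "lagging",
    "cost per unit":                          "lagging",
    "last year's customer churn":             "lagging",
    "open sales pipeline":                    "current",
    "orders in backlog":                      "current",
    "scenario: new entrant cuts prices 20%":  "leading",
}

counts = Counter(dashboard.values())
print(counts)

if counts["lagging"] / len(dashboard) > 0.5:
    print("Warning: most of this information describes the past -- "
          "relying on it amounts to betting the future will look the same.")
```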

BETTER RISK MITIGATION

Minimizing risk is crucial for anyone in charge of a complex system, and traditional approaches aren’t good enough. Managers must learn to:

Put users in charge of decisions.
It’s possible to eliminate the guesswork about what customers will want by designing a system that puts the users themselves in charge of the decisions, allowing them to create the outputs they want. Lulu, for example, has upended the traditional publishing model by giving writers control over key elements of the process. In the conventional model, publishers pay authors an advance and print books without knowing how many copies will sell. In the Lulu model, authors upload content to the company’s website and name their price. The books (or other outputs) are printed only after customers visit the site and decide to buy them. The authors receive 80% of the revenue—more per copy than is typical—and Lulu avoids the risk of printing books that end up on the remainder table or in warehouses, or are destroyed. By structuring the decision process so that books are produced and funds change hands only when a buyer is ready to pay, Lulu has more or less eliminated the danger of getting it wrong.

Boeing’s wildly successful 777 aircraft series exemplifies this principle at a much higher level of product complexity. The company engaged eight major airlines to help with the development process, producing iterative models whose design evolved according to these customers’ input. It used advanced visualization techniques such as 3-D modeling to reduce unexpected interactions between airplane systems and capture feedback as early as possible.

Use decoupling and redundancy.
Sometimes elements of a complex system can be separated from one another to decrease the systemic consequences if something goes wrong. Decoupling yields two benefits: It shields parts of the organization from the risks of an unexpected event, and it preserves parts that may be needed to mount a response. Contrast the Windows operating system with Software as a Service (SaaS) applications. With Windows, the operating system and your data are tightly entwined; when you upgrade to a new version of the system, all your information is erased, meaning that you need to back it up and reload it to your computer. With SaaS, uniform interfaces tell the computer where your data are. You can upgrade away, and the data won’t be touched. And because the software and the data are uncoupled, the risk that both will be harmed simultaneously is significantly reduced.

Elements can also be designed to substitute for one another in case part of the system goes down. Intentional redundancy makes it more likely that the system can continue to operate to at least some degree even when portions of it are challenged. Decoupling and redundancy involve added expense, but the investment can be worthwhile.
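
In software terms, the same two ideas look something like the sketch below. All of the names are hypothetical: the application talks only to a storage interface (decoupling), and writes go to more than one replica so that a read can fall back if one fails (redundancy).

```python
class KeyValueStore:
    """Minimal storage interface: the application only ever sees get/put."""
    def put(self, key, value): ...
    def get(self, key): ...

class InMemoryStore(KeyValueStore):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class ReplicatedStore(KeyValueStore):
    """Redundancy: writes go to every replica; reads fall back on failure."""
    def __init__(self, *replicas):
        self.replicas = replicas
    def put(self, key, value):
        for r in self.replicas:
            r.put(key, value)
    def get(self, key):
        for r in self.replicas:
            try:
                return r.get(key)
            except Exception:
                continue            # this replica is down; try the next
        raise RuntimeError("all replicas failed")

# The application is coupled only to the interface, so the storage layer
# and the application can change (or fail) independently of each other.
store = ReplicatedStore(InMemoryStore(), InMemoryStore())
store.put("customer:42", {"name": "Acme"})
print(store.get("customer:42"))
```

The second replica is pure added expense until something fails, which is exactly the tradeoff described above.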

Of course, there are limits to the decoupling and redundancy you can contain (and afford) within a single organization. You may need to call on external resources to expand the adaptive responses your organization can muster. The consultancy Accenture, for example, has an extensive network of partners to whom it can quickly turn if a client has an unanticipated need that Accenture cannot address. It also uses partnerships (including an arrangement with one of us, Rita) to conduct research that might not be part of its mainstream business but could yield early warnings of interest to its clients.

Draw on storytelling and counterfactuals.
Another aspect of mitigating risk is making sure that people view unlikely but potentially catastrophic future events as real. Sharing anecdotes about near misses and rehearsing responses to a hypothesized negative event can help focus attention on a possibly significant future occurrence. Posing counterfactuals—asking “What if?”—is a terrific but surprisingly underutilized way of coming up with scenarios that are unlikely to be surfaced by traditional techniques. In business, “soft” approaches like these are valued less than the supposedly more rigorous activity of number crunching. We instinctively associate stories and counterfactuals with literature and fantasy and look to data for science, reason, and truth. But when traditional methods repeatedly fail to make sense of the rare and unexpected (precisely the things that most interest us), it’s time to reconsider. Stories can give us great insights into complex systems, partly because the storyteller’s reflections are not restricted by the available data.

Triangulate.
As powerful as storytelling is, it comes with a disadvantage. The sky’s the limit as far as our imagination goes—and therein lies the problem. There are no boundaries around where we should look or when we should stop looking. That’s where triangulation comes into play.

Triangulation means attacking a problem from various angles—using different methodologies, making different assumptions, collecting different data, or looking at the same data in different ways. One of the best ways to understand a complex system is to do precisely that. For example, comparing snapshots of various elements taken at a given point in time (an activity social scientists call cross-sectional analysis) yields a different understanding than looking at how a single element evolves over time. Or you can do both, studying how numerous elements evolve over time; in fact, this is the bread and butter of much sophisticated econometric and financial analysis. Despite its obvious advantages, triangulation had limited application until very recently, but the tools it requires have gotten better and easier to use.
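
As a small illustration of what triangulating on the same data can look like, the sketch below (with invented sales figures) reads one table two ways: cross-sectionally, comparing regions at a single point in time, and longitudinally, following one region over time. Each view supports a conclusion the other would miss.

```python
# Hypothetical quarterly sales by region, in $ millions.
sales = {
    ("North", "Q1"): 10, ("North", "Q2"): 11, ("North", "Q3"): 12,
    ("South", "Q1"): 20, ("South", "Q2"): 18, ("South", "Q3"): 15,
    ("West",  "Q1"):  8, ("West",  "Q2"):  9, ("West",  "Q3"): 13,
}

# Cross-sectional view: compare all regions at one point in time.
q3_snapshot = {region: v for (region, quarter), v in sales.items()
               if quarter == "Q3"}
print("Q3 snapshot:", q3_snapshot)      # South still looks like the leader

# Longitudinal view: follow a single region over time.
south_trend = [sales[("South", q)] for q in ("Q1", "Q2", "Q3")]
print("South over time:", south_trend)  # ...but it is the only region shrinking
```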

Combining “soft” but flexible storytelling techniques with “hard” but rigid quantitative analyses can be an extremely powerful way to make sense of complex systems. The former help us explore unlikely but important possibilities and unintended consequences, while the latter give us concrete insights into the relationships of the system’s visible components. Managers confronted with complexity should avail themselves of both.

SMART TRADEOFF DECISIONS

In a complicated environment, it’s relatively easy to make good tradeoffs: Simply figure out the optimal combination of elements and invest in those. It’s similar to an engineering problem. In complex environments, however, making good tradeoffs is more difficult. Two strategies can help.

Take a real-options approach.
This means making relatively small investments that give you the right, but not the obligation, to make further investments later on. The goal is to limit your downside while maximizing the value you can capture on the upside. Gradually building a portfolio of small investments keeps the stakes low until you’re able to reduce the most significant uncertainties you face. A real-options strategy helps you manage failure by containing costs, not by eliminating risks (an approach Duke University’s Sim Sitkin and others have called “intelligent failure”). The idea isn’t to avoid making mistakes but to make them cheaply and early, learning from them and increasing your resilience as you go.
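
The arithmetic behind the approach is simple. The figures below are invented, and the sketch assumes the pilot reliably reveals whether the opportunity is real, but it shows how staging an investment caps the downside without giving up the upside.

```python
# Invented figures for illustration.
p_success  = 0.3     # chance the opportunity turns out to be real
full_cost  = 100.0   # cost of committing everything up front
pilot_cost = 10.0    # cost of a small initial probe (the "option")
payoff     = 300.0   # value captured if the bet pays off

# Commit everything now: exposed to the full cost even when the bet fails.
all_in = p_success * (payoff - full_cost) - (1 - p_success) * full_cost

# Staged: pay for the pilot, invest the rest only if the pilot succeeds
# (assumes the pilot resolves the key uncertainty).
staged = p_success * (payoff - full_cost - pilot_cost) - (1 - p_success) * pilot_cost

print(f"expected value, all-in: {all_in:6.1f}")   # 0.3*200 - 0.7*100 = -10
print(f"expected value, staged: {staged:6.1f}")   # 0.3*190 - 0.7*10  =  50
print(f"worst case, all-in: {-full_cost:.0f}   staged: {-pilot_cost:.0f}")
```

The option does not remove the risk of failure; it makes failure cheap and early, which is what "intelligent failure" is about.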

Ensure diversity of thought.
What kinds of HR tradeoffs might you make if you realized you were dealing with a complex system rather than a merely complicated one? Complicated systems are like machines; above all, you need to minimize friction. Complex systems are organic; you need to make sure your organization contains enough diverse thinkers to deal with the changes and variations that will inevitably occur. Who in your company regularly talks to people you might not interact with yourself, comes up with things that are a little off the beaten track, and is attuned to underlying risks and trends that your other managers might overlook? In a complex system, finding the right people for the job means seeking out these sorts of thinkers (see the sidebar “A Counterintuitive Approach to Hiring” for an unusual but effective strategy).


In an unpredictable world, sometimes the best investments are those that minimize the importance of predictions.
