Barometer Falling

By Patrick O'Brien | May 2013

Topics: Governance, risk & compliance, Enterprise risk management

You have probably heard the joke that “weather forecasters are the only people who can be wrong half the time and still get paid!” Most people assume that weather forecasters are simply not very good at what they do. In reality, meteorology is an incredibly challenging field, and the fact that weather forecasts are sometimes way off has more to do with predicting events in an incredibly complex system than with the professional skills of weather forecasters. An important component of a risk manager’s job involves forecasting trends, communicating the uncertainty of those forecasts, and helping business people make risk-aware decisions. There are many similarities between risk management and weather forecasting, and risk management professionals can learn a great deal from studying how weather forecasters have approached the challenging task of predicting future events.


At a first glance, economic forecasting, weather forecasting and risk management appear to be very different, but all three disciplines share some common traits:

1)     The systems are dynamic, meaning that the behavior of the system at one point in time influences its behavior in the future.

2)     They are nonlinear, meaning their outputs are not directly proportional to their inputs.

You may have heard the phrase: the flap of a butterfly’s wings in Chile can set off a tornado in Nebraska. This statement refers to chaos theory, which applies to systems that are both dynamic and nonlinear.

Dynamic systems give forecasters plenty of problems. World economies are continually evolving in a chain reaction of events making it very difficult to predict future outcomes. Nonlinear systems are problematic too: the mortgage-backed securities that triggered the financial crisis were designed in such a way that small changes in macroeconomic conditions could make them exponentially more likely to default. When you combine these two properties, you can have a real mess.

Furthermore, forecasts for these systems are highly susceptible to inaccurate or incomplete data. The most basic tenet of chaos theory is that a small change in initial conditions (the butterfly flapping its wings in Chile) can produce a large and unexpected divergence in outcomes (a tornado in Nebraska). So inaccurate or incomplete data (or inaccurate assumptions, as in the case of mortgage-backed securities) can have profound effects on forecasts. The magnitude of the prediction error grows quickly when the process is dynamic, since the outputs at one stage of the process are the inputs to the next stage.
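This sensitivity to initial conditions can be made concrete with a toy example. The logistic map is a standard textbook illustration of chaos; it is not a weather or economic model, but it shows how a measurement error of one part in a million can grow until two forecasts bear no resemblance to each other:

```python
# Illustrative sketch: sensitivity to initial conditions in a chaotic system.
# The logistic map x_{n+1} = r * x_n * (1 - x_n) with r = 4 is chaotic.

def logistic_trajectory(x0, r=4.0, steps=30):
    """Iterate the logistic map from initial condition x0."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.400000)   # "true" initial condition
b = logistic_trajectory(0.400001)   # tiny measurement error: one part in a million

# Early on the two trajectories are indistinguishable; after a few dozen
# iterations the small input error has compounded into a large output error.
print(abs(a[1] - b[1]))    # still tiny
print(abs(a[30] - b[30]))  # no longer tiny
```

The same compounding is why dynamic systems punish inaccurate or incomplete data so severely: each stage’s error becomes the next stage’s input.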


Weather is the epitome of a dynamic system, and the differential equations that govern the movement of atmospheric gases and fluids are nonlinear. But weather forecasting is one of the success stories of science and forecasting; an example of humans and computers joining forces to simulate the complexities of nature and predict its course.

Humans have always tried to predict their environment, but it took a while for meteorology to develop into a successful science. Statistical predictions about the weather have long been possible. For example, if it rained today, what is the probability that it will rain tomorrow? A weather forecaster could look up all the past instances of rain in his database and give us an answer. Or he could look toward long-term averages – for instance, it rains about 35 percent of the time in London in March – and use that knowledge to predict whether it will rain tomorrow.

The problem is that these predictions are not very useful; they are not precise enough to tell you whether to wear a raincoat, let alone to forecast the path of a hurricane. To make weather forecasts more valuable, meteorologists had to move beyond statistical models and create models that simulate the physical processes that govern the weather.

Atmospheric scientists have long understood the chemistry and physics that govern weather systems. But simulating weather patterns involves solving a very large set of complex equations that require huge amounts of computing resources. One of the biggest breakthroughs in modern weather forecasting is directly related to the increase in available computing power over the last two decades. For example, the supercomputer labs at the National Center for Atmospheric Research (NCAR) in Boulder, Colorado, use the IBM Bluefire supercomputer, which can perform 77 trillion calculations per second. This lets researchers run simulation programs whose calculations can be performed fast enough to forecast how a particular storm will develop1.

One of the key differences between weather and the economy is that even though the meteorologist shares the economist’s problems of a dynamic system with uncertain initial conditions, she has a wealth of hard science to draw upon. Economics, on the other hand, is a much softer science. Economists may have a reasonably sound understanding of the basic systems that govern the economy, but the causes and effects of events are all blurred together, especially during market bubbles and crashes. During these times, the feedback loops driven by human behavior overwhelm normal market activities.

Risk management lies somewhere between weather forecasting and economic prediction. The operational processes and systems that govern a large financial services firm are not as complex as the U.S. economy, but the science behind them is not as rigorous as the laws governing weather systems.

The story of Hurricane Katrina is one of human ingenuity and human error. The National Hurricane Center nailed its forecast of Katrina; it anticipated a potential hit on New Orleans almost five days before the levees were breached, and concluded that the realization of a nightmare scenario was probable more than forty-eight hours in advance. Twenty or thirty years ago, this much advance warning was not possible, and, as a result, fewer people would have been evacuated. The Hurricane Center’s forecast, and the steady advances made in weather forecasting over the past few decades, undoubtedly saved many lives. Unfortunately, not everyone listened to the forecast. About 80,000 New Orleanians – almost a fifth of the city’s population at the time – failed to evacuate the city and 1,600 of them died2.


The great progress that weather forecasters have made is due to their understanding of the rules that govern weather systems, the models built to simulate those systems, and the high-powered computers that can perform the calculations fast enough to make the forecasts valuable. The key lesson for risk managers is that the better you understand the laws that govern what you are modeling, the more accurate your forecasts will be.

This lesson is driving research into understanding risk correlation and causation. Correlation is a measure of the relationship between two risks. If the exposure of risk 1 changes, and the exposure of risk 2 always changes proportionally, then the risks are correlated. The correlation can be positive, meaning that they change in the same direction, or it can be negative, meaning that they change in opposite directions. The strength of the correlation reflects how closely the percentage changes in the two risks track each other (where 1 means perfect correlation and 0 means no correlation).
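The correlation measure described above is the standard Pearson coefficient, which can be computed directly. A minimal sketch (the monthly exposure figures for the two risks are invented for illustration):

```python
import math

def pearson(xs, ys):
    """Pearson correlation: +1 = perfect positive, -1 = perfect negative, 0 = none."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical monthly exposure figures for two risks.
risk1 = [10, 12, 15, 14, 18, 21]
risk2 = [20, 23, 29, 28, 35, 41]   # moves in the same direction as risk1

print(round(pearson(risk1, risk2), 2))  # close to +1: strong positive correlation
```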

Causation tells us whether a change in one risk will cause a change in another risk. Correlation on its own does not imply causation. For instance, sales of snow blowers and ski equipment are positively correlated because both occur more often in the winter. However, an increase in the sales of snow blowers does not cause an increase in the sales of ski equipment; rather it is the amount of snowfall that is the cause of increases or decreases in sales.

Understanding correlation and causation between risks will enable risk managers to better understand the dynamics behind changes in risk exposure within their firms. Techniques such as temporal causal modeling (TCM) are being used to analyze historical data on thousands of business metrics to identify the correlation and causal relationships among key risk indicators, risks and loss events3. For example, TCM can help identify which metrics drive risk (causality) and how risks are correlated (both the strength and the direction of the correlation).
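The TCM work cited above builds on Granger causality. As a rough illustration of the underlying idea (not the TCM algorithm itself), the sketch below asks whether the lagged history of one series improves a least-squares forecast of another; the data here is synthetic, constructed so that x drives y but not the other way around:

```python
import numpy as np

def granger_improvement(x, y, lag=1):
    """Granger-style check: does the lagged history of x help predict y
    beyond y's own history?  Returns the ratio of residual variances
    (restricted model / full model); values well above 1 suggest x 'causes' y."""
    y_t   = y[lag:]
    y_lag = y[:-lag]
    x_lag = x[:-lag]

    # Restricted model: y_t ~ y_{t-1}
    A_r = np.column_stack([np.ones_like(y_lag), y_lag])
    res_r = y_t - A_r @ np.linalg.lstsq(A_r, y_t, rcond=None)[0]

    # Full model: y_t ~ y_{t-1} + x_{t-1}
    A_f = np.column_stack([np.ones_like(y_lag), y_lag, x_lag])
    res_f = y_t - A_f @ np.linalg.lstsq(A_f, y_t, rcond=None)[0]

    return np.var(res_r) / np.var(res_f)

rng = np.random.default_rng(0)
x = rng.normal(size=200)
y = np.zeros(200)
for t in range(1, 200):          # y is driven by yesterday's x, plus noise
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()

print(granger_improvement(x, y))  # >> 1: x's history clearly improves forecasts of y
print(granger_improvement(y, x))  # ~ 1: y tells us nothing about future x
```

Note the asymmetry in the two outputs: it is exactly this asymmetry that lets a causal technique distinguish a driver metric from a merely correlated one.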

Figure 1 – Risk Map

Risk correlation and causation are important because fluctuations in the exposure of one risk can help you understand how other risks will behave.

Advanced analytics such as TCM, coupled with expert judgment, can be used to construct risk maps as shown in Figure 1. In this example, risks have been grouped into five color-coded areas: Operational, Strategic, Financial, Compliance, and IT/Technology. Lines drawn between risks show correlation. Risks with more connections (i.e., correlated with more risks) have larger circles, and red arrows show an example of causation.

Risk maps can help managers avoid wading through hundreds of metrics trying to interpret raw data to gain insight into risk exposure and trends. Knowing that there is strong correlation and causation between customer satisfaction and customer churn will give risk managers a better understanding of how customer risk exposure will behave.

Figure 2 illustrates how risk correlation and causation can assist risk managers in understanding how modest changes in some risks can produce significant changes in downstream risks. The risks on the left-hand side of Figure 2 in the “Origin” section are not correlated, and individually they are relatively benign. But increases in these risks affect key drivers in the “Pathways” section and, collectively, they have a large influence on the downstream Reputation risk. Concentration maps like the one in Figure 2 can help risk managers cast a wider net when monitoring risk indicators, and help create early warning systems for key risks.

Figure 2 – Risk Concentration


Until now, firms have focused their governance, risk and compliance (GRC) processes on the collection and administration of risk and compliance data to satisfy regulatory requirements. Now, with GRC practices maturing, and with access to reliable data, the focus is switching to using analytics to gain better insight into the firm’s risk landscape. The use of correlation and causation, along with other advanced analytics techniques, is enabling risk-aware decision making.

GRC applications are integrating with analytic engines to form the next generation of enterprise GRC management systems. Going forward, risk managers will have access to analytic insights that will allow them to gain a much deeper and broader view of the factors that affect risk exposure.

Predictive Risk Indicators

Key risk indicators (KRIs) can act as an early warning system for risk managers, signaling that the likelihood of a risk event occurring is increasing. Researchers and risk professionals at IBM are now working to develop the next generation of KRIs based on predictive analytics.

Predictive analytics encompasses a variety of mathematical techniques that derive insight from data and enable you to move from reporting past events to predicting future events. It applies diverse disciplines such as probability and statistics, machine learning, and artificial intelligence to model future probabilities and trends. Applying predictive analytics to KRI data gives risk managers forward-looking views into risk trends. For example, how is customer satisfaction trending and where do we think it will be in six months’ time?
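The six-month question can be sketched as a simple trend extrapolation. The monthly scores below are invented for illustration, and a real predictive-analytics engine would use richer models (seasonality, exogenous drivers); this just shows the forward-looking idea:

```python
import numpy as np

# Hypothetical monthly customer-satisfaction scores (a KRI), most recent last.
kri = np.array([82.0, 81.5, 80.9, 80.1, 79.6, 78.8, 78.2, 77.5])

# Fit a straight-line trend and extrapolate six months ahead.
months = np.arange(len(kri))
slope, intercept = np.polyfit(months, kri, deg=1)

forecast_month = len(kri) - 1 + 6
forecast = slope * forecast_month + intercept
print(f"trend: {slope:.2f} points/month, 6-month forecast: {forecast:.1f}")
```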

Composite Risk Indicators

Determining cause and effect from economic statistics is extremely difficult, and it is one of the primary reasons the economy is so challenging to predict. In addition, there are thousands and thousands of economic indicators to choose from. For example, the government produces data on 45,000 economic indicators each year, and private data providers track as many as four million statistics4.

To cope with this explosion of indicators, composite indicators can be used to provide more meaningful metrics for tracking trends. Composite indicators represent an aggregate of several lower-level indicators. For example, the Leading Economic Index is a composite of ten economic indicators published by the Conference Board, and it is widely used by economists. Risk professionals can also benefit by tracking composite indicators and reporting on these to senior management.

Multivariate analytic tools are being used to develop composite indicators that give senior managers a high-level view, such as an overall customer satisfaction score. For example, a wireless phone company (see Figure 3) might measure dropped calls, service interruptions, customer complaints, and service quality, and combine these into a higher-level customer satisfaction indicator. Analytics can help both with aggregating the basic indicators and with understanding the impact each underlying indicator has on the composite.
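At its simplest, the wireless-company composite is a weighted aggregate of its underlying indicators. The metric values and weights below are invented for illustration; in practice the weights would come from multivariate analysis rather than being set by hand:

```python
# Hypothetical underlying indicators, each normalized to a 0-100 scale
# where higher is better (e.g. 100 = no dropped calls).
indicators = {
    "dropped_calls":         88.0,
    "service_interruptions": 92.0,
    "customer_complaints":   75.0,
    "service_quality":       81.0,
}

# Illustrative weights reflecting each metric's assumed impact on the composite.
weights = {
    "dropped_calls":         0.35,
    "service_interruptions": 0.15,
    "customer_complaints":   0.30,
    "service_quality":       0.20,
}

customer_satisfaction = sum(indicators[k] * weights[k] for k in indicators)
print(round(customer_satisfaction, 1))  # a single composite score on the same 0-100 scale
```

The weights also answer the second question in the paragraph above: each weight quantifies how much its underlying indicator moves the composite.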

Figure 3 - Customer Satisfaction Underlying Indicators


Qualitative and Quantitative Indicators

One of the primary lessons in Michael Lewis’s Moneyball: The Art of Winning an Unfair Game is that statistical analysis and focus on the key indicators of a baseball player’s success, such as on-base percentage (OBP), could create a competitive advantage for a baseball team when drafting new talent5. For a time, Moneyball was very threatening to people in the game; it seemed to imply that scouts’ jobs and livelihoods were at stake. But scouts were never replaced by computers and, ten years later, the teams that are most successful at drafting baseball talent have found a way to combine quantitative statistical analysis with the softer qualitative information that the traditional scout can provide.

For risk managers, the implication is that we too need to use every tool at our disposal when making risk-related decisions. There is an emerging set of technologies and analytic tools that deal with unstructured data (such as web pages, e-mails, and documents) and use a collection of methods known as “text analytics” to generate insights from these sources. For example, sentiment analysis uses text analytics to determine the sentiment or attitude of the writers of the text and then draws insight from those sentiments.
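The core idea of sentiment scoring can be sketched in a few lines. Production text-analytics tools use trained statistical models rather than fixed word lists; the tiny lexicon below is invented purely to show the mechanics:

```python
# Tiny lexicon-based sketch of sentiment scoring (illustrative word lists only).
POSITIVE = {"great", "excellent", "love", "reliable", "fast"}
NEGATIVE = {"terrible", "broken", "slow", "hate", "outage"}

def sentiment(text):
    """Return a score in [-1, 1]: below zero = unhappy, above zero = happy."""
    words = text.lower().replace(".", " ").replace(",", " ").split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

print(sentiment("The new plan is great, love the fast network"))       # positive
print(sentiment("Another outage today. Service is terrible and slow")) # negative
```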

Social Media Analytics applies these technologies to analyze blog posts and comments found on publicly available websites. The primary use cases have been customer-related, enabling marketing professionals to engage their customers and stakeholders, assess and measure campaigns, and expand their business through different social media channels. These techniques are also being applied to enterprise data that sits in email, internal social media sites, and other applications to mine the internal corporate knowledge that resides in the minds and interactions of employees.

Risk management can expand the breadth of data used to define risk indicators and move beyond pure quantitative measures. These new tools will allow risk managers to start tapping text, content and social media analytics to provide a broader view of the risk landscape within firms. For example, firms can analyze Twitter and Facebook content for reputational risk indicators such as negative customer reviews on product quality (see Figure 4).


Figure 4: Strategic Objective Risk Dashboard


Visualization is a unique resource that meteorologists use in the forecasting process. A visual inspection of a graphic showing the interaction between two variables is often a quicker and more reliable way to detect outliers in data than an automated statistical test. Humans have very powerful visual cortices that enable us to parse through distortions in the data and recognize patterns and organization. This skill is very important in weather forecasting, and similar visualization techniques can be used to understand how KRIs trend and compare over time.

In Figure 3, the radar chart that depicts customer satisfaction indicators is animated so that the user can see how the indicators are trending. The risk professional can compare the firm’s indicators to industry values and see how the firm compares from a benchmarking perspective. The concept is similar to interactive weather maps that allow you to visualize how radar, clouds and other factors change over time.


No forecast is complete without some description of uncertainty. In April 1997, the Red River flooded Grand Forks, North Dakota, overtopping the town’s levees and encroaching more than two miles into the city. The town’s citizens were aware of the flood threat for months and had plenty of time to shore up the floodwalls. The levees were built to handle a flood of fifty-one feet, and the National Weather Service had predicted that the Red River would crest at forty-nine feet. So the town’s citizens thought they were safe. In actuality, the river crested at fifty-four feet and the resulting damage ran into the billions of dollars. Although the Weather Service’s forecast was not perfect, a five-foot miss two months in advance was pretty reasonable. Historically, the margin of error on the Weather Service’s forecast was about plus or minus nine feet. But in this case the Weather Service had explicitly avoided communicating the uncertainty in its forecast to the public, emphasizing only the forty-nine-foot prediction6.

Forecasters should be careful to include measures of uncertainty in their estimates. Instead of showing a single track line for a hurricane’s predicted path, hurricane forecasters will show the range of places where the eye of the hurricane is most likely to make landfall. This range is sometimes referred to as the “cone of uncertainty.” Risk managers should learn from this practice and provide their own “cone of uncertainty” for their risk exposure forecasts.
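The Grand Forks lesson in miniature: report the point forecast together with an interval derived from the forecaster’s historical errors. The error history below is invented for illustration (the article notes the real margin of error was roughly plus or minus nine feet):

```python
import statistics

# Hypothetical history of past crest-forecast errors, in feet.
past_forecast_errors = [-8.0, 3.5, -2.0, 7.0, -5.5, 4.0, 6.5, -7.0]

point_forecast = 49.0   # predicted river crest, feet
levee_height   = 51.0

# A simple "cone of uncertainty": point forecast +/- two standard deviations
# of the historical errors.
sigma = statistics.stdev(past_forecast_errors)
low, high = point_forecast - 2 * sigma, point_forecast + 2 * sigma

print(f"forecast: {point_forecast} ft, interval: {low:.0f} to {high:.0f} ft")
if high > levee_height:
    print("WARNING: upper bound exceeds levee height; flooding is plausible")
```

With the interval stated, the 49-foot forecast no longer sounds safe against a 51-foot levee, which is precisely the message the single-number forecast hid.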


The weather forecasts you see on TV reflect a combination of computer and human judgment. Computer models are great tools for predicting the weather, but they are not a forecast in and of themselves. Meteorologists call weather models “numerical guidance” because they are there to help “guide” you in making a forecast. In addition, understanding how weather patterns behave in certain geographical regions enables local meteorologists to make more accurate forecasts than strictly following the computer model.

For risk managers, the lesson is that analytics is not a substitute for expert opinion and guidance. The two are complementary and can be used synergistically to provide better predictions and interpretation of key indicators. Furthermore, understanding the firm-specific business environment and internal control factors is a critical component of accurate risk forecasting.

Next Generation GRC Systems

The next generation of enterprise GRC systems will include advanced analytic capabilities. These tools will enhance decision making by providing a much deeper and broader view of risk factors. Risk managers will be able to predict the occurrence of risk events before they materialize by understanding how risks interact with one another, and by recognizing the patterns of changing risk exposures that can result in catastrophic events. Tapping into social media data, both internal and external to the firm, will provide a broader view of key risk factors and enhance the monitoring of the firm’s risk exposure.

However, risk managers will need to be careful to not confuse models with forecasts and to provide statements of uncertainty along with their predictions. Risk managers should strive to produce accurate forecasts that are honest assessments of the exposures their firms face, and make sure that their firms are well prepared to handle the consequences when risk events occur.



1 Silver, Nate. The Signal and the Noise. New York: The Penguin Press, 2012. 110.

2 Silver, Nate. The Signal and the Noise. New York: The Penguin Press, 2012. 109-110.

3 Arnold, Andrew, Yan Liu, and Naoki Abe. “Graphical Granger modeling for temporal causal modeling,” Proceedings of the 13th ACM SIGKDD international conference on Knowledge discovery and data mining, San Jose, California, 2007.

4 Silver, Nate. The Signal and the Noise. New York: The Penguin Press, 2012. 185.

5 Lewis, Michael. Moneyball: The Art of Winning an Unfair Game. New York: W.W. Norton, 2003.

6 Silver, Nate. The Signal and the Noise. New York: The Penguin Press, 2012. 177-179.