Guest post from Tony Levy, IBM Business Analytics
We continue our conversation with Steve Morlidge, former Controller of Unilever UK, and co-author of the renowned book, “Future Ready – Mastering Business Forecasting.” Read Part 1 here.
Tony Levy (TL): Forecast models help us represent the business in order to anticipate its direction and make good decisions. What is the challenge here, and how should finance teams be thinking about their planning models?
Steve Morlidge (SM): The first thing we need to recognize is that every forecast is the product of a model – whether we realize it or not. It may be a mathematical model, such as a set of Excel formulae. It could be a statistical model extrapolating from past performance (e.g., a moving average). The most common form of model in finance, however, is the judgment-based model, such as an estimate of spending over the next year. All models have a role to play, and the trick is to recognize and exploit the strengths of the chosen method while taking steps to mitigate its weaknesses. In particular, whenever judgment is used we need to guard against the likelihood that our forecasts will be biased – systematically too high or too low. In my experience, bias is the biggest problem for financial forecasters.
TL: Speaking of forecast bias, can we measure bias, and how should we think about forecast measurement in general?
SM: Measuring bias is almost childishly simple, but very few companies do it. Put simply, if you have a sequence of four or more forecast errors of the same type (e.g., over- or under-forecast), there is a high probability (90 percent plus) that your forecast process is biased. Most companies do not measure forecast error at all. Those that do often use percentage error and arbitrary targets (e.g., 10 percent), an approach that ignores bias and takes no account of the ease or difficulty of forecasting. Even worse, some businesses implicitly encourage bias by saying things like ‘we only want nice surprises.’ Frankly, the whole area is a complete mess, and it is no surprise that political gaming and performance shocks are so common.
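The run-of-errors check Morlidge describes is simple enough to script. Here is a minimal sketch, assuming you track forecasts and actuals per period; the function names and sample figures are illustrative, and the four-in-a-row threshold is his heuristic rather than a formal statistical test:

```python
# Sketch: flag a possible bias signal from a run of same-sign forecast errors.
# Data and helper names are hypothetical; the rule is Morlidge's heuristic
# (four or more consecutive errors in the same direction suggests bias).

def error_signs(actuals, forecasts):
    """Return +1 for over-forecast, -1 for under-forecast, 0 for exact."""
    signs = []
    for a, f in zip(actuals, forecasts):
        diff = f - a
        signs.append((diff > 0) - (diff < 0))
    return signs

def longest_same_sign_run(signs):
    """Length of the longest run of consecutive identical non-zero signs."""
    best = run = 0
    prev = 0
    for s in signs:
        run = run + 1 if (s != 0 and s == prev) else (1 if s != 0 else 0)
        prev = s
        best = max(best, run)
    return best

def looks_biased(actuals, forecasts, threshold=4):
    """True if errors run in one direction for `threshold` or more periods."""
    return longest_same_sign_run(error_signs(actuals, forecasts)) >= threshold

# Illustrative numbers: four over-forecasts in a row, then two under-forecasts.
actuals   = [100, 105, 98, 110, 102, 107]
forecasts = [110, 112, 104, 118, 99, 103]
print(looks_biased(actuals, forecasts))  # prints True
```

Note that the check deliberately looks at the *direction* of errors, not their size: a percentage-error target would miss a process that is consistently a little too optimistic.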
TL: Forecast reliability has become such a challenge because of heightened volatility in business. How should we account for uncertainty and risk in our forecasts?
SM: I believe that the first step is to explicitly distinguish between risk and uncertainty. Risk is the result of random variation around a forecast which, because it is always there, can be estimated from historic forecast performance. For instance, doubling the average error is a good rule of thumb for a 90 percent confidence limit. Uncertainty is the possibility that there will be a radical shift in performance – usually as the result of external factors. The probability, scale, and timing of such discontinuities can never be forecast, but explicitly factoring them into the forecast process, and building contingency plans to mitigate the risk or exploit the opportunity they present, gives your business a head start.
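The rule of thumb for risk can be sketched in a few lines. This is a minimal illustration, assuming you keep a history of forecast errors (forecast minus actual); the figures are made up, and the 2× average-error multiplier is the rough approximation Morlidge mentions, not a substitute for a proper interval estimate:

```python
# Sketch: a rough forecast range using the rule of thumb that doubling the
# average absolute error gives an approximate 90 percent confidence limit.
# The historical errors below are illustrative, not real data.

def rough_interval(point_forecast, past_errors):
    """Return (low, high) around a point forecast using 2x mean absolute error."""
    mae = sum(abs(e) for e in past_errors) / len(past_errors)
    half_width = 2 * mae
    return point_forecast - half_width, point_forecast + half_width

past_errors = [4, -6, 3, -5, 7, -2]   # forecast minus actual, prior periods
low, high = rough_interval(250.0, past_errors)
print(f"forecast 250, approx 90% range: {low:.1f} to {high:.1f}")
# prints: forecast 250, approx 90% range: 241.0 to 259.0
```

The point of publishing a range rather than a single number is that it covers risk (ever-present random variation); uncertainty, in Morlidge's sense, is handled separately through scenarios and contingency plans rather than a wider interval.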
For more information:
· Learn more about IBM’s solutions in forecasting
· Read the whitepaper, “7 Symptoms of Forecasting Illness,” to spot the most common forecasting ailments and follow the prescription to good health
· Register to receive IBM’s newsletter: “Finance and Beyond”