By Andy Burrows
“Does anyone in FP&A ever review the accuracy of their predictive financial models from time to time?” That’s the question I posed on LinkedIn a while back, and it generated a good discussion. And it turns out that FP&A professionals generally care quite a lot about the accuracy of their forecasts.
One of the businesses I worked in was strongly linked to the travel industry, so our business tended to fluctuate seasonally. But we never seemed to be able to get our P&L and volume forecasts right. It was a low-margin, high-volume business, so volumes made a big difference. It was so frustrating trying to predict things like the effect of Easter being early or late, public holidays, school holidays, and other factors like the economy, the weather, and exchange rates.
When we did variance analysis, the Managing Director would get frustrated with us, because sometimes the explanation for a variance would be that we’d neglected to include a factor in the forecast that we really should have known about (like having a different number of Saturdays in a month).
And so, at one stage, we even looked at computerised SPSS statistical modelling techniques as a way of helping.
I’m glad we didn’t go for computerised statistical modelling in the end. It was too expensive and I don’t think the end result would have been much more accurate. And even if it would have increased the accuracy of our forecasts, would that have helped us? Would it have led to different behaviour, different action plans? Or would we have gone down a nerdish rabbit hole, using our new toy to tweak forecast variables to achieve greater and greater accuracy?
What follows are some reflections on the problems of forecasting accuracy and how, in spite of the challenges, we can still get value out of trying to improve.
One of the things that complicates the comparison of actual performance against forecast is that we’re talking about performance, not purely experience.
When we forecast the weather, naturally there is nothing that anyone can do to influence the outcome. It just happens. It’s a forecast of experience. So, any variation between the weather we actually get, and the forecast, is due to the forecast being inaccurate. Improving weather forecast accuracy is all about understanding the factors that are affecting weather, and working out how to build them into a statistical model.
When we forecast performance, the one performing is supposed to be trying to influence the outcome. And then it becomes complicated and slightly circular.
For instance, a few years ago, if I had predicted that Usain Bolt would win all the 100m and 200m races he was to run in the following two years, you might have said that was a pretty reasonable forecast.
However, as we all know, he lost the last 100m competition of his career. So, my forecast would have been wrong.
Would that be because of something wrong with my forecasting process? Or because of Bolt’s performance? Should my forecasting process have predicted that Bolt’s performance would dip? Was it an unpredictable bad day? Or was it less about Bolt’s performance, and more about other competitors improving?
It’s the same with business. When our sales turn out different to forecast, is that because our modelling wasn’t sophisticated enough? Were we over optimistic? Or is it because the Marketing team have done really well or really badly? Are our sales people on a winning streak? Or is our pricing strategy wrong?
One person, who commented on the LinkedIn post I mentioned earlier, wondered “who should be blamed” for overpredicting gross revenue, for example.
My position is that no one should be “blamed” for not forecasting what actually happened, for two reasons:
Firstly, as I used to say to my FP&A team, “forecasts and budgets are always wrong”. No one in the world is able to predict what’s going to happen with 100% accuracy.
And the main reason for making that rather obvious point is to stop ourselves from focusing the forecasting effort too heavily on the analysis and number crunching.
We have to build the models and do the analysis, sure. That takes effort and time. But refining and refining and refining, working on the model the whole time, focusing on the number that comes out the end, without actually discussing the implications with people in the business, is missing the point.
And that’s where my second reason comes in…
We have to remember why we do (P&L) forecasts in the first place, and ask ourselves whether improving accuracy enables us to fulfil those objectives any better.
Well, let’s start by asking whether total accuracy in forecasting (if that were possible) would help us at all.
And to be honest, the answer is ‘yes’. The better you can predict the future, the more confident you can be in planning. At one end of the scale, if it were possible to predict the future with perfect accuracy, we could give cast-iron guarantees to our shareholders and have 100% confidence in carrying out the business plan. And that would mean a very successful business, with investors always willing to put more money in!
But the Pareto principle undoubtedly applies – we probably get 80% of the accuracy of our models from 20% of the effort we put in – so getting that last 20% of accuracy takes the remaining 80% of the effort: four times as much as everything that came before it.
And actually, the other economic principle – the “law of diminishing returns” – comes into play as well. It gets harder and harder (more time consuming and more expensive) to get more accurate. So, to push past our existing level of accuracy will be, on average, even harder to achieve.
What I’m saying is that it becomes a cost/benefit question. Are our forecasts good enough? What additional benefit would we get out of investing (in time, expertise and sophistication) to get greater accuracy?
Business forecasting pro, Steve Morlidge, also talks about “Good enough forecasting” in an article on his blog. And if you want an even deeper look into better forecasts, I’d recommend another of his articles – Why Bother Forecasting? – and you may want to take a look at his book, Future Ready: How to Master Business Forecasting.
Another problem we need to think about is spurious accuracy.
Ever heard the term “spurious accuracy”? It sounds a little weird, but I like that it sounds like an oxymoron when, in fact, it isn’t.
It conveys the fact that sometimes we make our forecast models (or analysis in general) so complex that it looks like the answer should be highly accurate. But often all that we’ve done is to increase the likelihood of getting it wrong by increasing the number of factors involved in getting to “the answer”.
There is a temptation to think that adding more factors or variables to a model will increase its accuracy. But past a certain point, that’s not normally the case, unless you’re very careful. You have to take into account the variability (or reliability) of each of the factors you use, because the variability of the final output compounds the variability of every factor that feeds into it.
What that means is that quite often the complexity of a model – the thing that makes it look like it should be accurate - is the very thing that makes it inaccurate – spurious accuracy.
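This compounding of uncertainty is easy to demonstrate. The sketch below is a hypothetical illustration, not a real forecasting model: it assumes each driver in the model is right on average but carries a 10% margin of error, and uses a simple Monte Carlo simulation to show that the spread of the final output widens as more uncertain drivers are multiplied together.

```python
import random
import statistics

def simulate_forecast_error(n_factors, rel_sd=0.10, trials=10_000, seed=42):
    """Monte Carlo: multiply n_factors uncertain drivers (each centred on 1.0
    with a 10% relative standard deviation) and measure the spread of the result."""
    rng = random.Random(seed)
    outputs = []
    for _ in range(trials):
        result = 1.0
        for _ in range(n_factors):
            result *= rng.gauss(1.0, rel_sd)
        outputs.append(result)
    return statistics.stdev(outputs)

# More drivers, more spread in the final number
for n in (1, 3, 6, 10):
    print(f"{n:>2} factors -> output spread ~ {simulate_forecast_error(n):.3f}")
```

Roughly speaking, independent relative errors compound: a ten-driver model with 10% uncertainty per driver is far less precise than its single-driver cousin, however sophisticated it looks.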
So, if we find that our forecasts are not accurate, it may be because we’re making the models too complicated!
But it’s not just P&L forecasting that needs a predictive financial model.
And in fact, when I asked the question on LinkedIn I was thinking more about other financial models, such as the ones that support project business cases, acquisition appraisal, big contracts and partnership deals, etc.
When we’re trying to make a strategic decision about pricing a big contract, the viability of a project, or how much we should pay to acquire a business, accuracy is very important. We need to use financial models that build in all the relevant factors. But how do we know how accurate they are?
It would be good to know, wouldn’t it? Then we could refine the information we use, and the models we build, to make better decisions.
There are two problems, though.
Firstly, these models are normally custom-built in spreadsheets – one-off, throw-away files that rarely get revisited after the decision is made.
Secondly, the output of the decision-making models is not normally comparable directly, or easily, with anything in the accounting or MI systems.
What I mean is that if we predicted additional revenue, for example, from a deal, there may be no way of tracing any additional revenue we actually get to what caused it. How much of it was due to that deal? How much was due to other (perhaps unknown) factors?
Another problem is that sometimes the assumptions we have to use are not verifiable. For example, we may have to rely on information given to us during due diligence in an acquisition process. And this is why there must be a link between the financial modelling and the legal contract processes in a big deal: key assumptions that affect pricing or valuation might need warranties in the contract, and the like.
If you want to go deeper into the best ways to do financial modelling, and discover the best practice standards that exist to help you, then I’d recommend an article that Anders Liu-Lindberg and Lance Rubin recently published on LinkedIn. That’s a good introduction, and it contains several helpful links to get further help.
As we conclude, it may seem as if the accuracy of predictive models is too difficult to measure, assess and improve.
As we’ve seen above, it is difficult. Whether it’s too difficult to attempt is a question of cost vs benefit.
Having a forecast is better than not having a forecast. And having a more accurate forecast is better than a less accurate one.
And therefore, finding a systematic way of measuring the accuracy of your predictive models must also have some value.
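As a starting point for that systematic review, two simple measures go a long way: bias (do we consistently over- or under-forecast?) and mean absolute percentage error, or MAPE (how far off are we, on average?). The sketch below shows both; the monthly revenue figures are made up purely for illustration.

```python
def forecast_accuracy(forecasts, actuals):
    """Return bias (mean error, same units as the data) and MAPE (%)
    for paired forecast/actual values."""
    errors = [f - a for f, a in zip(forecasts, actuals)]
    bias = sum(errors) / len(errors)
    mape = sum(abs(f - a) / abs(a) for f, a in zip(forecasts, actuals)) / len(actuals) * 100
    return bias, mape

# Hypothetical monthly revenue, forecast vs actual (in £k)
forecast = [120, 135, 150, 110, 140, 160]
actual   = [115, 140, 145, 100, 150, 155]

bias, mape = forecast_accuracy(forecast, actual)
print(f"bias: {bias:+.1f}  MAPE: {mape:.1f}%")  # bias: +1.7  MAPE: 5.2%
```

A persistent positive bias tells a different story from a large but unbiased MAPE: the first suggests systematic over-optimism in the process, the second suggests genuine volatility in the business.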
Even if you don’t arrive at definitive conclusions, the process of the review will be valuable in itself.
All in all, with all its difficulties, reviewing and assessing the accuracy of predictive financial models should be seen as a rewarding and beneficial process.
Good Enough Forecasting – blog article by Steve Morlidge
Why Bother Forecasting? – blog article by Steve Morlidge
Future Ready: How to Master Business Forecasting – book by Steve Morlidge
Do You Model Off Against The Masters of Financial Modelling? – blog article by Anders Liu-Lindberg and Lance Rubin