How mature is your demand forecasting process?

Vlerick Forecasting Research Centre fills the gap

Statistical forecasting models are becoming increasingly sophisticated and software to support the forecasting process is readily available. Yet surveys* show that demand forecasts are based exclusively on statistical models in only 25% of cases. The other 75% of the time they involve human judgement: they are the result of judgement alone (25%), a statistical model adjusted by the forecaster(s) (33%) or an average of statistical and judgemental forecasts (17%). Human judgement therefore plays an important role. But how does it affect forecasting accuracy? To date, little empirical research has been done in this area. The Vlerick Forecasting Research Centre fills this gap.

Sponsored by our Prime Foundation Partner SAS BeLux, the Forecasting Research Centre was established last year. It conducts research and regularly organises workshops for its member companies, which come from various sectors but have at least one thing in common: they all want to enhance their demand or sales forecasting accuracy in order to improve their production planning and inventory management, and thus achieve a more efficient and effective supply chain. Current members are AGC, Axalta, Barry Callebaut, Bridgestone, Colruyt, Danone, Eurocontrol and Van de Velde.

Six areas of forecasting

The Centre’s activities during the past year have resulted in a forecasting maturity assessment tool that enables companies to assess their current forecasting process against best practices, set a benchmark and identify opportunities for improvement. The tool consists of a survey asking respondents to rate their organisation or business unit on a series of best practices in forecasting, 34 in total, which have been categorised into six areas:

  • Data: how forecasting data is stored, managed and updated
  • Method: what methods and models, with or without human judgement, are used in the forecasting process
  • Performance: how the forecasting process is measured and controlled
  • System: what systems support the forecasting process
  • People: how experienced and trained the organisation’s forecasters are
  • Organisation: how the organisation supports the forecasting process
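A completed survey of this kind is typically condensed into one score per area. As a purely hypothetical sketch (the six area names come from the article, but the 1-5 rating scale and the sample answers are assumptions, not the Centre's actual scoring method), averaging a respondent's ratings per area yields a six-dimension maturity profile:

```python
# Hypothetical sketch: condense one respondent's best-practice ratings into
# a per-area maturity profile. The area names are from the article; the
# 1-5 scale and the sample ratings below are invented for illustration.
from statistics import mean

# Ratings per area (1 = practice not in place, 5 = best practice fully in place).
ratings = {
    "Data":         [4, 3, 5, 4],
    "Method":       [3, 2, 4, 3],
    "Performance":  [2, 3, 2],
    "System":       [4, 4, 3],
    "People":       [3, 3, 4],
    "Organisation": [2, 2, 3],
}

# Average each area's ratings to get a simple six-dimension profile.
profile = {area: round(mean(scores), 2) for area, scores in ratings.items()}

for area, score in profile.items():
    print(f"{area:<13} {score}")
```

Profiles like this make it easy to benchmark one business unit against another, or against the database of previous respondents.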

No bias

Is this tool not prone to self-assessment bias? “Not really,” says Karlien Vanderheyden, Professor of Organisational Behaviour and one of the people leading the Centre’s multidisciplinary team of researchers. “This tool doesn’t probe for socially desirable behaviour. The respondents are critical of themselves because they want to improve things. Moreover, because the forecasting process cuts across functional boundaries, the survey is typically completed by people from different business units, such as marketing & sales, finance, logistics and production. So the tool gives a balanced view.”

What about human judgement?

Having mapped the forecasting process as a whole, the Centre is now set to analyse the conditions that influence the impact of human judgement on forecasting accuracy. “Academic literature suggests human judgement tends to improve forecasting accuracy. Even the most sophisticated statistical models struggle to take into account unexpected events, unstable environmental factors or missing data, whereas humans can readily process this contextual information,” explains Karlien. “However, it’s not as simple as that. The literature and the empirical studies conducted with students also indicate that humans often make unnecessary adjustments, thereby lowering the forecasting accuracy.”

Next steps

“Our member companies often use human judgement in their forecasting process, but they are well aware of the lack of academic and empirical foundation. We want to understand when human judgement adds value and why,” she says. “To this end, we’ll collect and compare data from our members on (1) forecasting results from their statistical models, (2) forecasting results after human adjustments and (3) actual demand or sales.”
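The comparison described above boils down to scoring forecasts (1) and (2) against (3) with an accuracy metric. As an illustrative sketch (the demand figures are invented, and MAPE is just one common accuracy metric, not necessarily the one the Centre will use), the value added by human adjustment can be estimated like this:

```python
# Hypothetical sketch: compare the accuracy of the pure statistical forecast
# with the judgment-adjusted forecast against realised demand.
# All numbers are invented for illustration.

def mape(actual, forecast):
    """Mean absolute percentage error, in percent (lower is better)."""
    return 100 * sum(abs(a - f) / a for a, f in zip(actual, forecast)) / len(actual)

statistical = [100, 120, 95, 110]   # (1) forecasts from the statistical model
adjusted    = [105, 118, 90, 115]   # (2) forecasts after human adjustment
actual      = [104, 119, 92, 113]   # (3) actual demand or sales

print(f"Statistical MAPE: {mape(actual, statistical):.2f}%")
print(f"Adjusted MAPE:    {mape(actual, adjusted):.2f}%")
```

In this invented example the adjusted forecast scores better, i.e. judgement added value; repeating the comparison across many forecasts and members is what would reveal when, and why, that holds.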

Since the organisational context has a significant impact, the Centre will also analyse information such as the date and time horizon of the forecasts, the volatility of the products or services, whether feedback was given on forecast accuracy, whether the reasons for adjusting model forecasts were systematically logged, the number of people involved in the forecasting process and the forecasters’ profiles.

Karlien concludes: “All this will enable us to establish useful guidelines to enhance forecasting accuracy as well as further improve the forecasting process.”

Help us to help you!

Our online forecasting maturity assessment tool is freely available. If you want to know where your company or business unit stands and identify opportunities for improvement, contact Rein Robberecht ([email protected] or +32 9 210 98 12) and take the survey. You will also be helping us to enhance our database, which will result in an even better benchmark. You can also contact Rein if you want to become a member of the Vlerick Forecasting Research Centre or if you have any questions about our research.

* For example, Fildes, R. & Goodwin, P. (2007). Good and Bad Judgment in Forecasting: Lessons from Four Companies, Foresight (Fall 2007), 5-10. Results confirmed by a survey organised by Foresight in 2014; see Foresight (Winter 2015), 5-12.