As you follow the links you’ll think about how to measure accuracy, how accurate you can be, and even how accurate you should be.

There are pitfalls in being too accurate!

Before we can begin we need to be clear about what we mean by forecast variance. Given a forecast F and an actual call volume offered A, the two most common definitions are:

a) (A – F) / F and

b) (A – F) / A

Consider the following figures:

|             | Week 1 | Week 2 | Week 3 |
|-------------|--------|--------|--------|
| Forecast    | 10,000 | 10,000 | 10,000 |
| Actual      | 10,000 | 8,000  | 12,000 |
| (A – F) / F | 0%     | -20%   | +20%   |
| (A – F) / A | 0%     | -25%   | +17%   |

In many cases, it is preferable to express the forecast variation in relation to the forecast. As the table above shows, expressing the variation in relation to the forecast means that a positive or negative variation of the same size leads to the same absolute variation in percentage terms, and many people simply find this easier to work with (consider the table if the actual volume had been 5,000 or 15,000: relative to the forecast the variations are -50% and +50%, but relative to the actual they are -100% and +33%). (A – F) / F is the definition we’ll use.
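A short sketch makes the difference between the two definitions concrete. The figures below are the ones from the table, plus the 5,000 and 15,000 cases mentioned above:

```python
# The two forecast-variance definitions applied to the figures above.
forecast = 10_000
for actual in (10_000, 8_000, 12_000, 5_000, 15_000):
    rel_forecast = (actual - forecast) / forecast  # (A - F) / F
    rel_actual = (actual - forecast) / actual      # (A - F) / A
    print(f"A={actual:>6}: (A-F)/F = {rel_forecast:+.0%}, "
          f"(A-F)/A = {rel_actual:+.0%}")
```

Note how (A – F) / F stays symmetric (-50% and +50% for the extreme cases) while (A – F) / A does not (-100% and +33%).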

How accurate can a forecast be?

The answer to this question depends in part on the randomness of the call arrivals. One would naturally expect greater accuracy from a forecast based on weekly actuals of 10,000, 10,003, 9,998, 10,005 and 10,002 than from one based on weekly actuals of 10,000, 5,000, 19,500, 12,000…

This immediately suggests a clue: we can track our actual data and look at the standard deviation within it. The smaller the standard deviation, the better you should expect your forecast accuracy to be; a larger standard deviation suggests less predictable data.

Given that you are dealing with a system with an element of randomness in it, it’s appropriate to express forecast accuracy targets as confidence intervals, using a standard-deviation-style calculation to set the intervals. For example, you might set a target of being within 7% of the forecast on 85% of days, a target that allows for some randomness but limits exposure.
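As a minimal sketch of checking such a target against history, suppose we have a week of daily forecasts and actuals (the numbers below are purely illustrative):

```python
import statistics

# Hypothetical daily forecasts and actuals (illustrative numbers only).
forecasts = [10_000, 10_200, 9_800, 10_100, 9_900, 10_000, 10_300]
actuals = [10_150, 9_700, 10_400, 10_050, 9_100, 10_600, 10_250]

# Daily variance relative to the forecast: (A - F) / F.
errors = [(a - f) / f for f, a in zip(forecasts, actuals)]

# The spread of past errors hints at an achievable target width.
sd = statistics.stdev(errors)
print(f"error std dev: {sd:.1%}")

# Check the target: within 7% of forecast on at least 85% of days.
hit_rate = sum(abs(e) <= 0.07 for e in errors) / len(errors)
print(f"within 7% on {hit_rate:.0%} of days (target: 85%)")
```

With real data you would run this over a much longer history; a single week tells you very little about the tail behaviour the target is meant to guard against.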

If your forecast has been built up (layered) using separate propensities for different customer lifecycle points, you might be able to combine the standard deviations of the component parts to produce an appropriate confidence interval that is tighter than simply taking the standard deviation of the total combined volume.
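A sketch of that combination, assuming the component volumes vary independently (so their variances, not their standard deviations, add); the segment names and figures are hypothetical:

```python
import math

# Hypothetical standard deviations (in calls) for three independently
# varying customer-lifecycle segments of a layered forecast.
segment_sds = [300.0, 200.0, 150.0]

# Independent variances add, so the combined standard deviation is the
# root-sum-square of the parts...
combined_sd = math.sqrt(sum(sd ** 2 for sd in segment_sds))

# ...which is tighter than simply adding the standard deviations.
naive_sd = sum(segment_sds)
print(f"combined: {combined_sd:.0f} calls vs naive sum: {naive_sd:.0f} calls")
```

If the segments are positively correlated (a marketing campaign lifting all of them at once, say), the true combined spread sits somewhere between the two figures, so treat the root-sum-square as a best case.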

How accurate should a forecast be?

This might seem like an odd question, but in many larger call centres the fact that the forecast can be very accurate can actually lead to problems.

Imagine a very large call centre of, say, 5,000 agents taking broadly similar call volume with very smooth and predictable call flows across the days and from one week to another. Forecasting accuracy for such large centres can often be very good, so let’s assume we’re always no worse than 5% away from the forecast, and usually within 2-3%.

Call centres of this size typically need only a low level of availability to deliver the required service levels, and often have very good scheduling efficiency. A total availability of 5 to 7% would not be unusual. Suppose that we’re set up to expect availability of 5% on a day that happens to come in 5% over forecast. In this situation, calls would queue for much of the day and the whole call centre would suffer significant abandons.

Although it may seem appropriate to set forecast accuracy targets so tightly, it’s important that the operation retains sufficient levels of availability to cope with the random fluctuations that will inevitably come. A much smaller call centre with typical availability in double figures would have much less of a problem soaking up a 5% over-forecast position.

In the example of the large centre, we have a choice: we can either continue to develop better and better forecasts to reduce the variations, or we can increase the availability in the call centre to cope with the randomness inherent in the customer behaviour. At 5,000 people, however, a 1% increase in availability would require a further 50 people; it might be cheaper to hire more forecasters instead!

It’s in the forecaster’s interest to be rewarded at the highest aggregate level, as it’s the easiest level to get right: errors in one area are often balanced by errors elsewhere. That said, even if forecasters are rewarded at that level, they should be operating at the lower levels and building forecasts from the bottom up. It’s easy to flatter yourself when you aggregate your forecasts; you need to view them separately.

Given that many contact types are difficult to forecast, a target along the lines of “within 5% on 85% of time periods” is often appropriate.