In a recent exchange, Jim Hamilton of Econbrowser and Kash at Angry Bear discuss the perils and pitfalls of economic forecasting. They agree on many things, most especially on the importance of maintaining a certain level of humility. The best summary is given by Jim Hamilton: "Don't ask for too much of your forecast or your policy, and it won't disappoint you."
That's an indisputable conclusion, but it raises the question: how are decision-makers supposed to know when they're asking too much of the forecasts they have at hand?
If all they have to go on is the point estimate, then they can't: decision-makers would have to use past forecasting performance as a guide to a particular model's out-of-sample reliability. But models are revised all the time, and in any case most forecasters use a subjective blend of econometrics and intuition when they produce their numbers.
If forecasters are obliged to produce only a point estimate, then their problem becomes one of optimal choice under uncertainty: they will choose the number that minimises their expected loss. If their loss function is symmetric - that is, if an overestimate has exactly the same consequences as an underestimate of the same size - then you'd expect the forecaster to produce an 'unbiased' prediction.
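This logic can be made concrete with a small simulation. The sketch below is illustrative only - the predictive distribution is invented, and the piecewise-linear ('linlin') loss function, with overestimates costing twice as much per unit as underestimates, is a textbook stand-in for whatever the forecaster's real loss function might be. Under that asymmetric loss, the expected-loss-minimising forecast is a lower quantile of the predictive distribution, not its mean - i.e., a deliberately 'biased' number.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical predictive distribution for next year's GDP growth (percent)
draws = rng.normal(loc=2.5, scale=1.0, size=100_000)

def expected_loss(forecast, draws, over_cost=2.0, under_cost=1.0):
    """Piecewise-linear ('linlin') loss: overestimates cost over_cost per
    unit of error, underestimates cost under_cost per unit."""
    err = forecast - draws            # err > 0 means the forecast was too high
    return np.mean(np.where(err > 0, over_cost * err, -under_cost * err))

# Grid-search for the forecast that minimises expected loss
candidates = np.linspace(0.0, 5.0, 501)
losses = [expected_loss(f, draws) for f in candidates]
best = candidates[np.argmin(losses)]

# A standard result: under linlin loss the optimum is the
# under_cost / (over_cost + under_cost) quantile -- here the 1/3
# quantile, which sits below the mean of 2.5.
print(f"mean of predictive draws: {draws.mean():.2f}")
print(f"loss-minimising forecast: {best:.2f}")
print(f"1/3 quantile of draws:    {np.quantile(draws, 1/3):.2f}")
```

With symmetric costs (`over_cost == under_cost`), the same grid search lands on the median of the draws, which for this symmetric distribution coincides with the mean - the 'unbiased' case.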
But how often is it the case that a forecaster really doesn't care about the sign of the forecast error? As I discuss here, the Department of Finance in Ottawa is well aware that an unexpected increase in the government balance is much less costly than an unexpected decrease, so it's hardly surprising that it has developed the habit of lowballing its estimates of the government surplus.
The same probably goes for predicting GDP growth rates. For example, Kash notes that forecasters have generally underestimated GDP growth rates over the past few years. If the consequences of an underestimate of GDP growth were the same as those of an overestimate, Kash - and the rest of us - would be rightly concerned about this pattern. But I would venture to guess that the loss functions for GDP forecasters are not symmetric, and that they'd much rather be surprised by a higher-than-forecast GDP number than one that came in below their forecast. In the former case, there'd be backslaps all around, while in the latter, forecasters would have to deal with any number of people snarling 'why didn't you see this coming?'
The ideal solution to this problem would be to report the predictive distribution, so that decision-makers could solve their own optimisation problem. But at this point, I'd settle for just one number besides the point estimate: a standard deviation, an interquartile range, or some other measure that puts a number on the forecaster's sense of modesty.