Why is it that forecasters don't routinely provide error bands with their predictions? I doubt very much that the following dialogue is representative of what goes on in the real world:
Decision-maker: 'Economist, what's your forecast of GDP growth over the next year?'
Economist: '3.13 percent.'
DM: 'Hmm. What are the chances that GDP growth will be negative?'
E: 'Does it matter?'
DM: 'No, I guess not. Good work!'
Offhand, I can't think of any real-world decision-making exercise where the point estimate is the only relevant piece of information. So why is it the only thing we see?
After all, it's not as though forecasters aren't aware of the many reasons why their predictions will almost certainly be incorrect:
- Even if they were perfectly certain about the structure of the economy, there are any number of future imponderables: political, technological, and natural (hurricanes, earthquakes, etc.).
- All model parameters have to be estimated, and they are estimated with error (a rough sketch of how this and the previous point could be turned into an error band follows the list).
- The forecaster almost always has more than one model at hand - which one (or which combination) should be used?
- Forecasts have to be made in real time, with available data. Unfortunately, those data are invariably subject to significant revisions later on.
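To be concrete, here is a minimal sketch - using entirely made-up numbers and a toy AR(1) model, not anyone's actual forecasting apparatus - of how the first two items could be carried into an error band rather than a bare point estimate. The forecast is simulated many times, redrawing both the estimated parameters and next year's shock, so questions like the decision-maker's ('what are the chances that growth will be negative?') answer themselves.

```python
import numpy as np

rng = np.random.default_rng(1)

# Made-up history of annual GDP growth (%): an AR(1) with drift, purely illustrative.
T, c_true, phi_true, sd_true = 60, 1.5, 0.5, 1.5
g = np.empty(T)
g[0] = 3.0
for t in range(1, T):
    g[t] = c_true + phi_true * g[t - 1] + rng.normal(0.0, sd_true)

# OLS fit of g_t on a constant and g_{t-1}.
X = np.column_stack([np.ones(T - 1), g[:-1]])
y = g[1:]
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (len(y) - 2)          # residual variance
cov_beta = sigma2_hat * np.linalg.inv(X.T @ X)     # covariance of the estimates

# Simulate next year's growth, redrawing both the parameter estimates
# (estimation error) and the future shock (the imponderables).
n_sims = 10_000
betas = rng.multivariate_normal(beta_hat, cov_beta, size=n_sims)
shocks = rng.normal(0.0, np.sqrt(sigma2_hat), size=n_sims)
forecasts = betas[:, 0] + betas[:, 1] * g[-1] + shocks

lo, mid, hi = np.percentile(forecasts, [2.5, 50, 97.5])
print(f"Point forecast: {mid:.2f}%   95% band: [{lo:.2f}%, {hi:.2f}%]")
print(f"Chance growth is negative: {(forecasts < 0).mean():.1%}")
```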
My own explanation is that most econometricians use classical (frequentist) methods. Even if all of those issues could be dealt with, I'm pretty sure that the only available results would be based on asymptotic approximations (that is, approximations that hold as the sample size goes to infinity). In this context, an honest answer to a request for an error band would sound something like this:
"If we apply our procedure repeatedly to samples whose size approaches infinity, 95% of the time we will produce an interval that would have a 95% chance of containing the actual future value."
The only reasonable answer to this is, of course, "Huh?"
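For what it's worth, here is a small simulation - again with made-up numbers - of what that repeated-sampling guarantee actually delivers: across many hypothetical samples, about 95% of the intervals constructed this way do contain the realized future value, but no single one of them comes with a probability statement about the particular forecast the decision-maker cares about.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_mean, true_sd = 3.0, 2.0     # hypothetical "true" process for growth (%)
n, reps = 40, 10_000              # sample size and number of repeated samples

covered = 0
for _ in range(reps):
    sample = rng.normal(true_mean, true_sd, size=n)
    m, s = sample.mean(), sample.std(ddof=1)
    # Classical 95% prediction interval for one new draw: it reflects both
    # estimation error in m and the variability of the future value itself.
    half_width = stats.t.ppf(0.975, df=n - 1) * s * np.sqrt(1 + 1 / n)
    future = rng.normal(true_mean, true_sd)   # the value being "forecast"
    covered += (m - half_width) <= future <= (m + half_width)

print(f"Share of intervals containing the realized value: {covered / reps:.3f}")  # ~0.95
```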