John Palmer at EclectEcon asks "What if bad forecasting led to losing one's job?":
[I]f I were making forecasts for a financial group or a private firm, one would expect I would lose my job if I made consistently bad forecasts.
There was a time when the Department of Finance would make its own projections when it brought down a budget, but as its credibility deteriorated during the 1990s (for some reason, deficit forecasts always turned out to be wildly over-optimistic), it decided to use private sector forecasts. This has had the effect of removing one element of controversy at budget time: the government can claim to be using the best information available. After all, if private sector forecasters are paid to generate good forecasts, then that's what they'll produce, right?
Then again, maybe not. There's a substantial literature on this topic; as good a place as any to start is this paper by Allan Gregory and James Yetman (ungated version available here). Here's a paragraph from the literature survey section:
[A] number of papers show that competing professional forecasters provide a somewhat homogeneous product. For example, McNees (1979) and McNees and Ries (1983) argue that the differences between macroeconomic forecasts are small, while Batchelor (1990) demonstrates that the differences are indistinguishable from random noise. Spiro (1989) and Zarnowitz (1984) also show that forecasts are clustered very closely together. As has been pointed out by several authors, forecasting is a competitive industry with few barriers to entry; so inferior forecasters are quickly driven from business. Batchelor and Dua (1990) do find that some forecasters seek to distinguish their forecasts from each other, taking a position of optimist or pessimist within a panel of forecasters, although the magnitude of this effect is very small relative to noise present in the forecasts. Lamont (2002) finds that as forecasters become older and more established, they produce more radical forecasts that are generally less accurate. Campbell and Murphy (1996) and others also point out that forecasts are so similar that often the range of forecasts does not include the actual data when published. This is also true in the panel used here. While Rich, Raymond and Butler (1992) find that forecast dispersion is generally correlated with uncertainty in the actual data, it is imperfectly so. The range of forecasts underestimates the degree of uncertainty facing forecasters, sometimes substantially. It follows that the degree of consensus, or agreement, found in panels of macroeconomic forecasts does not necessarily correspond to the level of information available to forecasters, or indeed the amount of uncertainty facing those forecasters.
Emphasis added.
It looks to me as though private sector forecasters face the same incentives as do fund managers: it's relative, not absolute, performance that matters. The worst-case scenario for them is not to make a bad forecast; it's to make a forecast that is significantly worse than their competitors'. If everyone gets it wrong, then there's no penalty: 'What can I say? We all got sideswiped by the recession.' There are gains to being the only one who gets it right, but only the best-established (and therefore the least risk-averse) forecasters will gamble on being the outlier.
Private sector forecasters may have a greater incentive to provide a forecast that matches those of their competitors than to provide one that is accurate.
I disagree with you here, Stephen!
I find it tempting to think that dispersion of forecasts should be related somehow to the uncertainty of the thing being forecast, and intuitively there must be some relation, but it's possible to think of counterexamples.
For example, if all forecasters have the same information and same model, all will (should?) come to exactly the same forecast, so there will be zero dispersion of forecasts, regardless of the uncertainty of those point-forecasts. Example, we are all trying to predict the roll of a known-fair die, and we all predict the same 3.5, even though we all know that the outcome will range from 1 to 6.
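A rough sketch of that die example in code (Python; purely illustrative):

```python
import random

# Every forecaster shares the same model of a fair six-sided die, so every
# point forecast is the common expected value, 3.5.
forecasts = [3.5 for _ in range(10)]            # ten identical forecasters
dispersion = max(forecasts) - min(forecasts)    # 0.0: complete agreement

outcomes = [random.randint(1, 6) for _ in range(10000)]
print(dispersion, min(outcomes), max(outcomes))  # 0.0 1 6: no disagreement, lots of uncertainty
```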
Even if all forecasters have different private information, and the same model, if each knows the (preliminary) forecasts of others, they all should give the same forecast, because those tentative forecasts reveal the private information. Example, S and N are trying to forecast the sum of 3 coin tosses, where heads is 1 and tails is 0. It is common knowledge between S and N that S observes the first coin, and N observes the second coin, and neither observes the third coin. If S makes a preliminary forecast of 2, then N knows that the first coin was heads, and if N makes a preliminary forecast of 1, then S knows that the second coin was tails. Both then give the same final forecast of 1.5.
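The same example worked through in code (a rough sketch; the particular coin values are picked just for illustration):

```python
# Three coins, heads = 1, tails = 0.  S observes coin 1, N observes coin 2,
# and neither observes coin 3.  Suppose coin 1 is heads and coin 2 is tails.
coin1, coin2 = 1, 0

def preliminary(observed):
    # own signal, plus 0.5 for each of the two coins this forecaster can't see
    return observed + 0.5 + 0.5

s_prelim = preliminary(coin1)   # 2.0 -> tells N that coin 1 was heads
n_prelim = preliminary(coin2)   # 1.0 -> tells S that coin 2 was tails

# Once the preliminary forecasts are public, each can back out the other's
# private signal, and both converge on the same number.
final_forecast = coin1 + coin2 + 0.5   # 1.5 for both S and N
print(s_prelim, n_prelim, final_forecast)
```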
Even if they have different models, if each knows that his model may be wrong, and that the other's model may be right, the same sort of tatonnement of forecasts could result in all producing the same forecast.
I am just channeling that whole "agreeing to disagree" literature which arose from Aumann's 1976 paper (ungated survey paper here: www.econ.ucdavis.edu/faculty/bonanno/PDF/agree.pdf ). There's a whole blog devoted to just that subject: http://www.overcomingbias.com/
In other words, I don't find it surprising that actual outcomes are often outside the range of forecasts; I find it more surprising, theoretically, that there is any dispersion of forecasts at all. (Even if we ignore the whole "agreeing to disagree" question, why do they even have different information, or different models? Isn't all the data public? Aren't all the textbooks public?)
Even if there were no tatonnement to reveal the private information (so Aumann does not apply), the variance of forecasts should be a function of the extent to which that private information is uncorrelated, rather than the completeness of the information.
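A quick simulation of that last point (Python; the number of forecasters and trials is arbitrary). Each forecaster simply reports a private signal, and the cross-sectional spread of forecasts shrinks as those signals become more correlated, even though each forecaster's own uncertainty never changes:

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_dispersion(rho, n_forecasters=20, n_trials=5000):
    """Average cross-sectional standard deviation of forecasts when each
    forecaster reports a private signal with pairwise correlation rho."""
    cov = np.full((n_forecasters, n_forecasters), rho)
    np.fill_diagonal(cov, 1.0)
    signals = rng.multivariate_normal(np.zeros(n_forecasters), cov, size=n_trials)
    return signals.std(axis=1).mean()

for rho in (0.0, 0.5, 0.9, 0.99):
    print(rho, round(forecast_dispersion(rho), 3))
# Dispersion falls toward zero as rho rises, with no change in how much any
# individual forecaster actually knows.
```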
The one bit of evidence I do find important is "Lamont (2002) finds that as forecasters become older and more established, they produce more radical forecasts that are generally less accurate." It seems plausible that established forecasters would have a different loss function, and might be less prone to Keynesian banker beauty contest thinking (Keynes likened financial markets to a contest where each judge is trying to pick, not the most beautiful contestant, but the one the other judges will think most beautiful, and said that bankers don't care about making bad decisions, provided all the other bankers made the same bad decision). On the other hand, the fact that established forecasters also give less accurate forecasts cuts the other way, and suggests they differ from the consensus not because they are more independent, but just because they are more wrong.
Perhaps we will just agree to disagree on this question. Ooops! :)
Posted by: Nick Rowe | December 04, 2008 at 05:00 AM
A long time ago, I wondered here why forecasters never (okay, very, very rarely) provide measures of uncertainty for their forecasts. If we had that information, we might be able to distinguish between the two explanations.
Posted by: Stephen Gordon | December 04, 2008 at 06:17 AM
Yes, they really ought to provide at least a range, or confidence interval. Strange that they don't. Maybe a wide confidence interval would be an admission that their forecasts are not very useful, and a narrow confidence interval would make it too easy to reject their forecasts ex post. By giving only a point expectation, if right they can say "Look, we nailed that one", and if wrong, they can say "Yes, there was a high degree of uncertainty surrounding our forecast".
Posted by: Nick Rowe | December 04, 2008 at 08:02 AM
Also, similar to your last comments, I wonder why forecasters often feel they have to place a single bet?
For example, take a housing market prediction that says, "we'll probably see 5% gains next year". This is a safe bet, as it falls within historical trends, even though there may be any number of elements that make that prediction far less than 100% certain. It's difficult to say "housing will decline 15% next year" as, even though there may be a number of factors that create that possibility, timing is always a bitch. So why not say, there's a 33% chance of a 0-15% decline, a 33% chance of 0% gain, a 33% chance of 5% gain? (as an example, obviously).
I'm no sophisticate and crapped out on high school math, but even I can deal with a world of uncertainties, risks and probabilities, and hedge bets accordingly. If I think stocks will increase 10% this year, and bonds 5%, do I go all in on stocks? No. Because that guess on the 10% gain is framed in my brain as, say, a 50% likelihood, along with a 20% chance of no gain, a 20% chance of a 10% loss and a 10% chance of a greater-than-10% gain.
This is why the "no one could have predicted it" meme about the current meltdown is so annoying. Many were commenting on risks to the economy. But few would go out on a limb and say "next year the financial economy will tank colossally", even though it was plain to see there were significant systemic risks there, because in the short term the status quo is more likely to prevail than radical change. Until it isn't, of course, and by then it's too late to make predictions.
It's like you want to shake these forecasters and say, well, HOW likely is your prediction, and how likely are the other outcomes? If there is 95% confidence that housing will increase 5% this year, along with a 5% likelihood of a 0% increase, well, OK. If it is 51%, and there is a 20% probability of a 5% decrease, a 20% probability of a 10% decrease and a 9% probability of a greater-than-10% decrease, that's substantially different, and should have an impact on my decision making.
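To put rough numbers on those two scenarios (the open-ended "greater-than-10% decrease" bucket is arbitrarily treated as -15% here, purely for illustration):

```python
# Probabilities and price changes (in %) from the two scenarios above.
scenario_a = [(0.95, 5.0), (0.05, 0.0)]
scenario_b = [(0.51, 5.0), (0.20, -5.0), (0.20, -10.0), (0.09, -15.0)]

def expected_change(dist):
    return sum(p * change for p, change in dist)

print(expected_change(scenario_a))   # 4.75: the "5%" headline is representative
print(expected_change(scenario_b))   # about -1.8: same headline, a very different bet
```

Two forecasts with the same headline number, but one is worth roughly +4.75% in expectation and the other roughly -1.8%.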
Posted by: wahoo | December 04, 2008 at 07:10 PM
Have we established that it's even possible to make accurate forecasts (i.e. ones that consistently do better than simple heuristic approaches past a reasonable level of statistical significance) with regard to macroeconomic factors? If not, then what do we even mean by a 'good' forecast - a lucky one?
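One way to make 'good' operational would be to test forecast errors against a naive benchmark over a long sample; a rough sketch (everything below is simulated, purely to show the test itself):

```python
import numpy as np

rng = np.random.default_rng(1)

# A made-up series, a naive "next period looks like this period" heuristic,
# and a noisy "professional" forecast.  A forecaster counts as "good" only if
# their errors are consistently and significantly smaller than the heuristic's.
actual = np.cumsum(rng.normal(0.5, 2.0, size=40))
naive = np.concatenate(([actual[0]], actual[:-1]))
pro = actual + rng.normal(0.0, 1.0, size=40)

def rmse(forecast):
    return float(np.sqrt(np.mean((forecast - actual) ** 2)))

print(rmse(pro), rmse(naive))
```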
Posted by: Declan | December 05, 2008 at 02:22 AM
Regarding private sector forecasts - there is a startling graph provided in the PBO's economic and fiscal update comparing past recessions' deviations from peak real GDP with the current average private sector forecast. Bay Street is currently forecasting a downturn that is only slightly worse than the 2001 slowdown - seems very optimistic.
I've got the graph posted on my blog.
[edited to make the click-through link - SG]
Posted by: brendon | December 07, 2008 at 02:38 PM