
Comments


I disagree with you here Stephen!

I find it tempting to think that dispersion of forecasts should be related somehow to the uncertainty of the thing being forecast, and intuitively there must be some relation, but it's possible to think of counterexamples.

For example, if all forecasters have the same information and the same model, all will (should?) come to exactly the same forecast, so there will be zero dispersion of forecasts, regardless of how uncertain those point forecasts are. Example: we are all trying to predict the roll of a known fair die, and we all predict the same 3.5, even though we all know that the outcome will range from 1 to 6.
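To make the die example concrete, here is a minimal sketch in Python (purely illustrative; not any forecaster's actual method): everyone who knows the die is fair reports 3.5, so the dispersion of forecasts is zero even though the outcome is anywhere from 1 to 6.

```python
import random

# Everyone shares the same model (a fair die), so everyone reports the same
# expectation: (1 + 2 + 3 + 4 + 5 + 6) / 6 = 3.5. Zero forecast dispersion.
point_forecast = sum(range(1, 7)) / 6

# The outcome itself stays highly uncertain: simulate a few rolls.
rolls = [random.randint(1, 6) for _ in range(10)]

print(f"common point forecast: {point_forecast}")   # 3.5
print(f"sample outcomes:       {rolls}")            # anywhere from 1 to 6
```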

Even if all forecasters have different private information and the same model, if each knows the (preliminary) forecasts of the others, they should all give the same forecast, because those tentative forecasts reveal the private information. Example: S and N are trying to forecast the sum of three coin tosses, where heads counts as 1 and tails as 0. It is common knowledge between S and N that S observes the first coin, N observes the second coin, and neither observes the third. If S makes a preliminary forecast of 2, then N knows that the first coin was heads; and if N makes a preliminary forecast of 1, then S knows that the second coin was tails. Both then give the same final forecast of 1.5.
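A minimal sketch of that exchange, under the stated setup (heads = 1, tails = 0; S sees coin 1, N sees coin 2, neither sees coin 3; the function name is mine):

```python
# Illustrative sketch of the S-and-N example: each preliminary forecast
# reveals the forecaster's private coin, so the final forecasts converge.

def preliminary_forecast(own_coin: int) -> float:
    # The observed coin is known; each unseen fair coin contributes 0.5.
    return own_coin + 0.5 + 0.5

coin1, coin2 = 1, 0                      # S sees heads (1), N sees tails (0)

s_prelim = preliminary_forecast(coin1)   # 2.0, which reveals coin1 = 1
n_prelim = preliminary_forecast(coin2)   # 1.0, which reveals coin2 = 0

# After hearing each other's preliminary forecasts, both can back out the
# other's coin; only the third coin remains uncertain.
final = (s_prelim - 1.0) + (n_prelim - 1.0) + 0.5

print(final)                             # 1.5 for both: dispersion vanishes
```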

Even if they have different models, if each knows that his model may be wrong, and that the other's model may be right, the same sort of tatonnement of forecasts could result in all producing the same forecast.

I am just channeling that whole "agreeing to disagree" literature which arose from Aumann's 1976 paper (ungated survey paper here: www.econ.ucdavis.edu/faculty/bonanno/PDF/agree.pdf ). There's a whole blog devoted to just that subject: http://www.overcomingbias.com/

In other words, I don't find it surprising that actual outcomes are often outside the range of forecasts; I find it more surprising, theoretically, that there is any dispersion of forecasts at all. (Even if we ignore the whole "agreeing to disagree" question, why do they even have different information, or different models? Isn't all the data public? Aren't all the textbooks public?)

Even if there were no tatonnement to reveal the private information (so Aumann does not apply), the variance of forecasts should be a function of the extent to which that private information is uncorrelated, rather than of the completeness of the information.
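A quick simulation of that point (a sketch with made-up numbers, not anyone's actual model): hold the outcome's total uncertainty fixed and vary only the correlation between the forecasters' private signals. The dispersion of forecasts tracks the correlation, not the underlying uncertainty.

```python
import numpy as np

rng = np.random.default_rng(0)

def forecast_dispersion(rho: float, n_forecasters: int = 50) -> float:
    """Std. dev. across forecasters when each sees outcome + correlated noise."""
    outcome = rng.normal()                    # the thing being forecast
    shared = rng.normal()                     # noise component common to everyone
    idio = rng.normal(size=n_forecasters)     # noise private to each forecaster
    # Mix so that any two forecasters' signal errors have correlation rho.
    noise = np.sqrt(rho) * shared + np.sqrt(1.0 - rho) * idio
    forecasts = outcome + noise               # each reports their own signal
    return forecasts.std()

for rho in (0.0, 0.5, 0.99):
    mean_disp = np.mean([forecast_dispersion(rho) for _ in range(200)])
    print(f"signal correlation {rho:.2f} -> forecast dispersion ~ {mean_disp:.2f}")
```

The outcome's own variance never changes in this setup, yet the dispersion of forecasts shrinks from about 1.0 to about 0.1 as the private information becomes more correlated.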

The one bit of evidence I do find important is "Lamont (2002) finds that as forecasters become older and more established, they produce more radical forecasts that are generally less accurate." It seems plausible that established forecasters would have a different loss function, and might be less prone to Keynesian banker beauty-contest thinking. (Keynes likened financial markets to a contest where each judge tries to pick, not the most beautiful contestant, but the one the other judges will think most beautiful; and he said that bankers don't mind making bad decisions, provided all the other bankers made the same bad decision.) On the other hand, the fact that established forecasters also give less accurate forecasts cuts the other way: it suggests they differ from the consensus not because they are more independent, but just because they are more wrong.

Perhaps we will just agree to disagree on this question. Ooops! :)

A long time ago, I wondered here why forecasters never (okay, very, very rarely) provide measures of uncertainty for their forecasts. If we had that information, we might be able to distinguish between the two explanations.

Yes, they really ought to provide at least a range, or confidence interval. Strange that they don't. Maybe a wide confidence interval would be an admission that their forecasts are not very useful, and a narrow confidence interval would make it too easy to reject their forecasts ex post. By giving only a point expectation, if right they can say "Look, we nailed that one", and if wrong, they can say "Yes, there was a high degree of uncertainty surrounding our forecast".

Also, similar to your last comments, I wonder why forecasters often feel they have to place a single bet.

For example, take a housing market prediction that says, "we'll probably see 5% gains next year." This is a safe bet, as it falls within historical trends, even though there may be any number of elements that make that prediction far less than 100% certain. It's difficult to say "housing will decline 15% next year" because, even though there may be a number of factors that create that possibility, timing is always a bitch. So why not say: there's a 33% chance of a 0-15% decline, a 33% chance of no change, and a 33% chance of a 5% gain? (As an example, obviously.)

I'm no sophisticate and crapped out on high school math, but even I can deal with a world of uncertainties, risks and probabilities, and hedge bets accordingly. If I think stocks will increase 10% this year, and bonds 5%, do I go all in on stocks? No. Because that guess of a 10% gain is framed in my brain as, say, a 50% likelihood, along with a 20% chance of no gain, a 20% chance of a 10% loss, and a 10% chance of a greater-than-10% gain.
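Working through that arithmetic (the commenter's numbers, with the open-ended "greater-than-10% gain" branch taken as 15%, an assumption of mine):

```python
# The commenter's illustrative distribution for stocks; the open-ended
# ">10% gain" branch needs a concrete value, assumed here to be 15%.
stock_scenarios = [
    (0.50,  0.10),   # 50% chance of a 10% gain
    (0.20,  0.00),   # 20% chance of no gain
    (0.20, -0.10),   # 20% chance of a 10% loss
    (0.10,  0.15),   # 10% chance of a >10% gain (taken as 15%)
]

expected_stock = sum(p * r for p, r in stock_scenarios)   # 0.045
expected_bonds = 0.05                                     # the bond point forecast

print(f"expected stock return: {expected_stock:.1%}")     # 4.5%
print(f"expected bond return:  {expected_bonds:.1%}")     # 5.0%
```

Under those assumed beliefs, the headline "10% gain" works out to an expected return of about 4.5%, slightly below the 5% bond forecast, which is exactly why the point forecast alone would mislead.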

This is why the "no one could have predicted it" meme about the current meltdown is so annoying. Many were commenting on risks to the economy. But few would go out on a limb and say "next year the financial economy will tank colossally," even though it was plain to see there were significant systemic risks, because in the short term the status quo is more likely to prevail than radical change. Until it isn't, of course, and by then it's too late to make predictions.

It's like you want to shake these forecasters and say: well, HOW likely is your prediction, and how likely are the other outcomes? If there is 95% confidence that housing will increase 5% this year, along with a 5% likelihood of no increase, well, OK. If it is 51%, with a 20% probability of a 5% decrease, a 20% probability of a 10% decrease, and a 9% probability of a greater-than-10% decrease, that's substantially different, and it should have an impact on my decision-making.
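A sketch of that comparison (probabilities as given in the comment; the open-ended "greater-than-10% decrease" is assumed here to be 15%):

```python
# Two forecasts sharing the same "most likely" outcome (+5%) but very
# different risk profiles. The ">10% decrease" branch is assumed to be -15%.
confident = [(0.95, 0.05), (0.05, 0.00)]
uncertain = [(0.51, 0.05), (0.20, -0.05), (0.20, -0.10), (0.09, -0.15)]

for name, dist in (("confident", confident), ("uncertain", uncertain)):
    expected = sum(p * r for p, r in dist)
    downside = sum(p for p, r in dist if r < 0)
    print(f"{name}: expected change {expected:+.1%}, "
          f"probability of a decline {downside:.0%}")
```

Both reports might be headlined "housing up 5%," yet the second implies an expected change of roughly -1.8% and a nearly even chance of a decline.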

Have we established that it's even possible to make accurate macroeconomic forecasts, i.e. ones that consistently beat simple heuristic approaches at a reasonable level of statistical significance? If not, then what do we even mean by a 'good' forecast - a lucky one?

Regarding private sector forecasts - there is a startling graph in the PBO's economic and fiscal update comparing past recessions' deviations from peak real GDP with current average private sector forecasts. Bay Street is currently forecasting a downturn only slightly worse than the 2001 slowdown, which seems very optimistic.

I've got the graph posted on my blog.

[edited to make the click-through link - SG]

