[I]f I were making forecasts for a financial group or a private firm, one would expect I would lose my job if I made consistently bad forecasts.
There was a time when the Department of Finance would make its own projections when it brought down a budget, but as its credibility deteriorated during the 1990s (for some reason, deficit forecasts always turned out to be wildly over-optimistic), it decided to use private sector forecasts. This has had the effect of removing one element of controversy at budget time: the government can claim to be using the best information available. After all, if private sector forecasters are paid to generate good forecasts, then that's what they'll produce, right?
Then again, maybe not. There's a substantial literature on this topic; as good a place as any to start is this paper by Allan Gregory and James Yetman (ungated version available here). Here's a paragraph from the literature survey section:
[A] number of papers show that competing professional forecasters provide a somewhat homogeneous product. For example, McNees (1979) and McNees and Ries (1983) argue that the differences between macroeconomic forecasts are small, while Batchelor (1990) demonstrates that the differences are indistinguishable from random noise. Spiro (1989) and Zarnowitz (1984) also show that forecasts are clustered very closely together. As has been pointed out by several authors, forecasting is a competitive industry with few barriers to entry; so inferior forecasters are quickly driven from business. Batchelor and Dua (1990) do find that some forecasters seek to distinguish their forecasts from each other, taking a position of optimist or pessimist within a panel of forecasters, although the magnitude of this effect is very small relative to noise present in the forecasts. Lamont (2002) finds that as forecasters become older and more established, they produce more radical forecasts that are generally less accurate. Campbell and Murphy (1996) and others also point out that forecasts are so similar that often the range of forecasts does not include the actual data when published. This is also true in the panel used here. While Rich, Raymond and Butler (1992) find that forecast dispersion is generally correlated with uncertainty in the actual data, it is imperfectly so. The range of forecasts underestimates the degree of uncertainty facing forecasters, sometimes substantially. It follows that the degree of consensus, or agreement, found in panels of macroeconomic forecasts does not necessarily correspond to the level of information available to forecasters, or indeed the amount of uncertainty facing those forecasters.
It looks to me as though private sector forecasters face the same incentives as do fund managers: it's relative, not absolute, performance that matters. The worst-case scenario for them is not making a bad forecast; it's making a forecast that is significantly worse than their competitors'. If everyone gets it wrong, then there's no penalty: 'What can I say? We all got sideswiped by the recession.' There are gains to being the only one who gets it right, but only the best-established (and therefore the least risk-averse) forecasters will gamble on being the outlier.
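This incentive can be illustrated with a toy simulation (all numbers and the penalty function below are my own hypothetical construction, not from the paper): suppose a forecaster is penalized only for doing *worse* than the consensus. Then matching the consensus guarantees zero penalty, while a forecaster who truthfully reports the unbiased estimate gets penalized whenever the consensus happens to land closer to the outcome.

```python
import random

random.seed(0)

# Hypothetical setup: the true outcome is drawn around 2.0, but the
# consensus forecast sits at 3.0 (systematically over-optimistic).
N_TRIALS = 10_000
CONSENSUS = 3.0
TRUE_MEAN = 2.0

def relative_penalty(my_forecast, peer_forecast, outcome):
    """Penalize a forecaster only for being worse than the peer consensus."""
    my_err = abs(my_forecast - outcome)
    peer_err = abs(peer_forecast - outcome)
    return max(0.0, my_err - peer_err)

herd_penalty = 0.0     # forecaster who simply matches the consensus
outlier_penalty = 0.0  # forecaster who reports the unbiased truth

for _ in range(N_TRIALS):
    outcome = random.gauss(TRUE_MEAN, 1.5)
    herd_penalty += relative_penalty(CONSENSUS, CONSENSUS, outcome)
    outlier_penalty += relative_penalty(TRUE_MEAN, CONSENSUS, outcome)

# Herding can never underperform the herd, so its relative penalty is zero;
# the truthful outlier, though more accurate on average in absolute terms,
# racks up a positive relative penalty whenever the outcome exceeds 2.5.
print(herd_penalty / N_TRIALS)
print(outlier_penalty / N_TRIALS)
```

Under this payoff structure, the more accurate forecast is the riskier one, which is consistent with the clustering documented in the literature quoted above.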
Private sector forecasters may have a greater incentive to provide a forecast that matches those of their competitors than to provide one that is accurate.