Bryan Caplan asks why more academic economists don't use Bayesian methods. It's a question we academic Bayesian economists ask ourselves pretty often.

One part of the answer is that many already are informal Bayesians. For example, the whole DSGE literature would be uninteresting (or even more uninteresting, depending on how you feel about it) if there were not a broad consensus about what the plausible range of certain key parameters is.

But the main answer surely involves hysteresis in the teaching of econometrics. Before 1985 or so, anyone who wanted to use Bayesian methods was limited to simple models (e.g., linear normal regression) for which analytical expressions for certain key integrals are available. Anyone who wanted to apply Bayesian methods to more sophisticated models had to evaluate those integrals using existing numerical techniques; the 'curse of dimensionality' limited Bayesian econometricians to low-dimension problems.

Even if econometricians could be convinced about the soundness of Bayesian methodology (and it's pretty easy to convince them of that; the usual classical story of repeated, infinite samples makes little sense for the analysis of the finite non-experimental data sets that economists are invariably obliged to work with), they could quite sensibly make the point that classical methods were at least able to provide estimates for models more complex than the linear normal model. It didn't make much sense to spend valuable class time on a methodology that could be applied to a proper subset of models that could be estimated by classical methods. And when those students became teachers of econometrics, they would teach what they knew. And they didn't know anything about Bayesian methods.

So when the curse of dimensionality was lifted by the development of Monte Carlo techniques and computers that could run them quickly and cheaply, there were very, very few teachers of econometrics who realised that Bayesian methods could now be applied to a much wider class of models. There are many important cases (multinomial probit, stochastic volatility, and indeed many if not most latent variable models) where classical estimation is much more costly than Bayesian estimation.
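The 'Monte Carlo techniques' in question are samplers such as Metropolis-Hastings and the Gibbs sampler, which draw from a posterior without ever computing its normalising integral. As a purely illustrative sketch (not from the post; all numbers and names here are invented), here is a random-walk Metropolis sampler for the intercept and slope of a linear regression, assuming a flat prior and known error variance:

```python
# Illustrative sketch: random-walk Metropolis for a two-parameter
# linear regression, flat prior, error variance known (sigma = 1).
import math
import random

random.seed(0)

# Simulated data: y = 2 + 3x + noise
n = 200
xs = [random.gauss(0, 1) for _ in range(n)]
ys = [2.0 + 3.0 * x + random.gauss(0, 1) for x in xs]

def log_likelihood(a, b):
    # Gaussian log-likelihood with sigma = 1 (additive constants dropped)
    return -0.5 * sum((y - a - b * x) ** 2 for x, y in zip(xs, ys))

def metropolis(n_draws=8000, step=0.05):
    a, b = 0.0, 0.0
    ll = log_likelihood(a, b)
    draws = []
    for _ in range(n_draws):
        # Propose a small random move in parameter space
        a_new = a + random.gauss(0, step)
        b_new = b + random.gauss(0, step)
        ll_new = log_likelihood(a_new, b_new)
        # Flat prior: the acceptance ratio is just the likelihood ratio
        if math.log(random.random()) < ll_new - ll:
            a, b, ll = a_new, b_new, ll_new
        draws.append((a, b))
    return draws

draws = metropolis()
burn = draws[3000:]  # discard burn-in
a_mean = sum(d[0] for d in burn) / len(burn)
b_mean = sum(d[1] for d in burn) / len(burn)
# The posterior means should land near the true values (2, 3)
```

The point of the sketch is that the loop never touches an integral: the same few lines work unchanged if the parameter vector has fifty components instead of two, which is exactly how MCMC lifted the curse of dimensionality for Bayesian econometrics.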

Econometrics syllabuses are path-dependent: academic econometricians who didn't see Bayesian methods as students don't teach them to their students. But the original reason for marginalising Bayesian methodology was that its applicability was too limited for most practitioners - and this is no longer the case. Economists are familiar with the notion of hysteresis - the persistence of a phenomenon after its cause has been removed - and it would appear that the teaching of econometrics is another good example.

Interesting. Never thought of the 'ocular-metrics' literature (the DSGE literature) that way. Funny how the profession gladly inserts Bayesian updating/learning in a variety of models, including DSGE models, but shuns Bayesian econometrics despite the vocal efforts of high-profile innovators such as Arnold Zellner.

Many have rationalized the preference for classical estimation techniques due to the apparent difficulty in 'objectively' choosing priors. (For finite samples, classical statisticians have re-sampling.)

Must admit that I have always assumed the following: the more complicated the regression technique, the more degrees of freedom presented to the researcher to make the data sing on key. How many papers are published where the key estimated parameters are insignificantly different from zero, or all the principal hypotheses are summarily rejected? (No matter how useful such a published estimation exercise might be to other researchers.)

On the one hand, the probabilistic statements generated by a Bayesian empirical approach are most attractive. On the other hand, some committed Bayesian empirical researchers have a horrible track record of lousy forecasts; fishery ecologists, for example.

Perhaps some of the profession would be more convinced to use Bayesian estimation techniques if pointed to an improved forecasting record or successful policy applications where Bayesian techniques made a difference?

Posted by: westslope | November 15, 2009 at 12:35 PM

This is how I remember MA econometrics from over 30 years ago:

Robin Carter gave us a very thorough grounding in Bayesian vs Classical, the meaning of estimates, estimators, sampling distributions, etc. Then most classes were spent on matrix algebra showing whether or not certain estimators would be unbiased etc. in different cases, and how to fix the problem if they were. (Though he did give us the intuition as well, and I've remembered some of that, even if I've forgotten all the algebra.)

If econometrics classes became Bayesian, would that mean replacing all those weeks of classes of matrix algebra with one class on "Here's how you do a Monte Carlo"? What would the prof do instead?

I'm speaking from ignorance, of course.

Posted by: Nick Rowe | November 15, 2009 at 01:06 PM

Econometric analysis is fine for a one-stage game where the context is time invariant.

Posted by: noname | November 15, 2009 at 01:40 PM

"Bayesian empirical researchers have a horrible track record of lousy forecasts, example, fishery ecologists."

Seems unreasonable to expect regression techniques to predict bifurcation in dynamic systems. Then again, it seems unreasonable to expect to model a non-linear dynamic system using DSGE. That's my no-real-training-jerk-commenting-on-blog take on 'what's wrong with macro'. Incidentally, bifurcation is also what scares the crap out of me about climate change; it's all good until it isn't and you're living on Venus. But I'll own that it could just be that I have a hammer (engineering education) and thus every problem looks like a nail.

Posted by: Patrick | November 15, 2009 at 04:13 PM

My own no-real-training-jerk-commenting-on-blog take is that there has always been a strong streak in economics of striving to eliminate all elements of subjectivity in order to make economics a 'hard' subject like math or physics - and the introduction of a seemingly subjective element (the prior probabilities), via a Bayesian approach, may face resistance for this reason.

Posted by: Declan | November 16, 2009 at 01:26 AM

Good point Declan.

The rhetoric of pseudo-objectivity remains popular. Ultimately economics is a policy science, and indeed many economists argue for its scientific success based on policy achievements and failures. On an intuitive level, it makes solid sense: prior beliefs are important and should be explicitly identified. If I recall correctly, American sociologists made the same point some 30 or 40 years ago in the context of mostly qualitative analysis.

SG: Can you recommend a recent (and ideally accessible) practical guide to Bayesian estimation techniques written for non-Bayesian folks with a classical statistics background? Something that covers issues like estimation techniques in the absence of well-defined priors.

Posted by: westslope | November 17, 2009 at 02:48 PM