Comments

Interesting. Never thought of the Ocular-metrics literature (DSGE literature) that way.

Funny how the profession gladly inserts Bayesian updating/learning into a variety of models, including DSGE models, but shuns Bayesian econometrics despite the vocal efforts of high-profile innovators such as Arnold Zellner.

Many have rationalized the preference for classical estimation techniques by pointing to the apparent difficulty of 'objectively' choosing priors. (For finite samples, classical statisticians have resampling.) A toy sketch of how the choice of prior moves the estimate follows at the end of this comment.

Must admit that I have always assumed the following: the more complicated the regression technique, the more degrees of freedom the researcher has to make the data sing on key. How many papers are published where the key estimated parameters are not significantly different from zero, or where all the principal hypotheses are summarily rejected? (No matter how useful such a published estimation exercise might be to other researchers.)

On the one hand, the probabilistic statements generated by a Bayesian empirical approach are most attractive. On the other hand, some committed Bayesian empirical researchers have a horrible track record of lousy forecasts, for example, fishery ecologists.

Perhaps more of the profession would be persuaded to use Bayesian estimation techniques if someone could point to an improved forecasting record, or to successful policy applications where Bayesian techniques made a difference?
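On the prior-choice point above, a minimal sketch (my own toy numbers, not from the post) using a conjugate normal model: the posterior mean is a precision-weighted average of the prior mean and the sample mean, so two researchers with different priors report different estimates from the same small sample.

```python
import numpy as np

data = np.array([2.1, 1.8, 2.5, 2.2])   # made-up observations
sigma2 = 1.0                             # sampling variance, assumed known

def posterior_mean(prior_mean, prior_var):
    # Conjugate normal-normal update: precisions (inverse variances) add,
    # and the posterior mean weights each source by its precision.
    n = len(data)
    prec_prior = 1.0 / prior_var
    prec_data = n / sigma2
    return (prec_prior * prior_mean + prec_data * data.mean()) / (prec_prior + prec_data)

print(posterior_mean(0.0, 1.0))  # skeptical prior centred at 0: pulled toward 0 (1.72)
print(posterior_mean(5.0, 1.0))  # enthusiastic prior centred at 5: pulled up (2.72)
```

With only four observations the two priors disagree by a full unit; with hundreds of observations the data precision dominates and the two answers converge, which is one way to frame the 'subjective priors' objection.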

This is how I remember MA econometrics from over 30 years ago:

Robin Carter gave us a very thorough grounding in Bayesian vs. classical approaches, the meaning of estimates, estimators, sampling distributions, etc. Then most classes were spent on matrix algebra showing whether or not certain estimators would be unbiased etc. in different cases, and how to fix the problem if they weren't. (Though he did give us the intuition as well, and I've remembered some of that, even if I've forgotten all the algebra.)

If econometrics classes became Bayesian, would that mean replacing all those weeks of matrix algebra classes with one class on "Here's how you do a Monte Carlo" (sketched below)? What would the prof do instead?

I'm speaking from ignorance, of course.
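For what that one Monte Carlo class might cover, here is a minimal sketch, not anything from the post: random-walk Metropolis sampling of the posterior for a simple linear regression, with flat priors, known noise variance, and made-up data and tuning constants.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data from y = 1.0 + 2.0*x + noise (hypothetical "true" values).
x = rng.uniform(0, 10, size=100)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, size=100)

def log_posterior(a, b):
    # Flat priors on (a, b); Gaussian likelihood with known sigma = 1,
    # so the log posterior is just minus half the sum of squared residuals.
    resid = y - (a + b * x)
    return -0.5 * np.sum(resid ** 2)

samples = []
a, b = 0.0, 0.0                            # arbitrary starting point
lp = log_posterior(a, b)
for _ in range(20000):
    a_new = a + rng.normal(0, 0.1)         # random-walk proposals
    b_new = b + rng.normal(0, 0.02)
    lp_new = log_posterior(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:  # Metropolis accept/reject
        a, b, lp = a_new, b_new, lp_new
    samples.append((a, b))

burned = np.array(samples[5000:])          # discard burn-in draws
print("posterior mean a, b:", burned.mean(axis=0))
print("posterior sd   a, b:", burned.std(axis=0))
```

The posterior means land near the true (1.0, 2.0), and the posterior standard deviations give the uncertainty statements directly, which is the pedagogical trade the question is asking about: one sampling recipe in place of weeks of estimator algebra.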

Econometric analysis is fine for a one-stage game where the context is time-invariant.

"Bayesian empirical researchers have a horrible track record of lousy forecasts, example, fishery ecologists."

Seems unreasonable to expect regression techniques to predict bifurcation in dynamic systems. Then again, it seems unreasonable to expect to model a non-linear dynamic system using DSGE. That's my no-real-training-jerk-commenting-on-blog take on 'what's wrong with macro'. Incidentally, bifurcation is also what scares the crap out of me about climate change; it's all good until it isn't and you're living on Venus. But I'll own that it could just be that I have a hammer (an engineering education) and thus every problem looks like a nail.
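To make the bifurcation worry concrete, a toy sketch (the logistic map is my stand-in example, not anything from the post): a system that settles to a steady state for one parameter value flips to oscillation just past a critical value, so any model fitted only to pre-bifurcation data gives no warning of the regime change.

```python
# Logistic map x[t+1] = r * x[t] * (1 - x[t]): stable fixed point for
# r < 3, period-2 cycle just past r = 3.
def long_run(r, x0=0.5, n=1000, keep=8):
    x = x0
    for _ in range(n):                 # burn off the transient
        x = r * x * (1 - x)
    tail = []
    for _ in range(keep):              # record long-run behaviour
        x = r * x * (1 - x)
        tail.append(round(x, 4))
    return tail

print("r=2.9 :", long_run(2.9))  # converges to a single value (~0.6552)
print("r=3.2 :", long_run(3.2))  # alternates between two values (~0.513, ~0.799)
```

A regression fitted to the smooth r=2.9 behaviour extrapolates smoothly; it has no way to anticipate the qualitative break at r=3, which is the point about both macro models and climate tipping points.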

My own no-real-training-jerk-commenting-on-blog take is that there has always been a strong streak in economics of striving to eliminate all elements of subjectivity in order to make economics a 'hard' subject like math or physics - and the introduction of a seemingly subjective element (the prior probabilities), via a Bayesian approach, may face resistance for this reason.

Good point, Declan.

The rhetoric of pseudo-objectivity remains popular. Ultimately, economics is a policy science, and indeed many economists judge scientific success by policy achievements and failures. On an intuitive level, it makes solid sense: prior beliefs are important and should be explicitly identified. If I recall correctly, American sociologists made the same point some 30 or 40 years ago in the context of mostly qualitative analysis.

SG: Can you recommend a recent (and ideally accessible) practical guide to Bayesian estimation techniques written for non-Bayesians with a classical statistics background? Something that covers issues like estimation in the absence of well-defined priors.
