
Comments


Of course the idea that because i = p + m - g in equilibrium, you can just set i to zero, assume (m - g) is constant, and presto, get p = minus that constant, without considering either (i) where you started from, or (ii) how you get there from here, is just nuts.

And the fact that Steve, and now you(?), and Jesus Fernandez-Villaverde on Mark Thoma's blog, agree with him definitely confirms it's not random. There are real, big systemic problems with the economics that some people are learning (or not learning). And that Steve should be surprised that others find this controversial is itself really surprising. Didn't he know what everyone else thinks, even if he does disagree?

Academic economists get rewarded for publishing papers. You don't get rewarded for understanding how the economy works, or how your predecessors like Patinkin and Leijonhufvud thought it worked. You get rewarded for novelty, and especially for mathematical rigor. Economic modeling in this world is just an exercise in working out the logical implications of a set of assumptions. It doesn't really matter whether your theory makes correct predictions, so long as it's elegant enough. This is the arrogance of modern macroeconomics, a discipline that never admits in public how little it really knows.

We've known for a long time how to evaluate economic theories. You start by fitting a good atheoretic statistical model, like Litterman's Bayesian Vector Autoregression (BVAR), to the variables of interest. The economic theory you want to evaluate implies that certain restrictions on the BVAR's coefficients and residual covariances should hold. Estimating a BVAR is an optimization problem, and you can do the optimizing subject to the restrictions implied by the theory. This gives you two estimated models, and you can do likelihood ratio tests to see whether the data reject the theoretical restrictions or not. If they do, you're done: the theory is obviously wrong. But if the data don't reject the theory, you aren't done yet. You should compare the out-of-sample forecasting ability of the theory-restricted model with that of the atheoretic BVAR. If your theory doesn't lead to substantial improvements in the forecasts, then it really isn't very informative about the way the world works.
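
For concreteness, here's a minimal sketch in Python of the likelihood-ratio step, applied to a single equation rather than a full BVAR; the simulated data, the zero restrictions, and the Gaussian-error assumption are all illustrative choices, not anyone's actual model:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated data: only the first two of five regressors matter.
n, k = 120, 5
X = rng.normal(size=(n, k))
y = 0.8 * X[:, 0] + 0.4 * X[:, 1] + rng.normal(size=n)

def gaussian_loglik(y, X):
    # OLS fit, then the concentrated Gaussian log-likelihood.
    b, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ b
    sigma2 = resid @ resid / len(y)
    return -0.5 * len(y) * (np.log(2 * np.pi * sigma2) + 1)

ll_u = gaussian_loglik(y, X)          # unrestricted: all five regressors
ll_r = gaussian_loglik(y, X[:, :2])   # restricted: "theory" says b3 = b4 = b5 = 0

lr = 2 * (ll_u - ll_r)                # likelihood-ratio statistic
p = stats.chi2.sf(lr, df=3)           # three restrictions
print(f"LR = {lr:.2f}, p = {p:.3f}")  # a large p means the data don't reject the theory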

I have not been active in academic economics for many years, but my impression from afar is that very few models pass the first hurdle of not being obviously wrong. The ones that do are of the New Keynesian type. This is not really very surprising, because the New Keynesians have always paid a lot of attention to empirical data in coming up with their theories. The RBC crowd starts more from first principles, and their models are violently rejected by the data.

However, I also don't believe that even the New Keynesian models forecast out of sample much better than a BVAR does. In that sense, none of us really knows much about how the economy really works.

But there are a few empirical regularities, a.k.a. stylized facts, that we observe even if our attempts at modeling them are lacking. One is that Friedman is correct: inflation is always and everywhere a monetary phenomenon. We know that central banks can create inflation or vanquish it. We don't need a theory to explain this, because we've seen them do it, repeatedly.

What we need in monetary policy are simple rules that are robust to uncertainty about how the economy actually works. That's always been the appeal of Taylor rules, and it applies even more to the Sumnerian policy of targeting a path for expected NGDP.

God! I’m up-in-the-air in a flying saucer with giant aliens who can quantify everything and qualify nothing. They look human except they’ve got odd eyes – the right one is huge and alight like a bright light; the left one is tiny – and zippered shut. I wonder what would happen if they unzipped it. When they did, though myopic, they got a "picture" of me, an earthling life form.

“Little”+economics is basically grounded by the selfish interests and utility of big business wanting to get ever bigger; “Large”+economics is mainly up-in-the-air about the selfish interests of the biggest government wanting to get ever bigger. Economics professors train (not an education to me when the basic assumptions are faulty) students to work for them, and not for people who have healthy self-interest (ego + some empathy) in the marketplace.

Important comment by Jeff Hallman above. Confess I had never heard of a BVAR, but I get the gist. Google tells me Jeff is a serious econometrician (that I needed Google says more about me than Jeff).

While always rough around the edges and at times wrong to the core, it was once great to have economics professors try to knock me off my high horse that I keep rocking side to side. Indeed, after the debate with Gintis regarding the principle of “preferences”, one professor asked: “What is economics' biggest problem?”

I thought about it, then it struck me out of the blue: paper economists, business & government, on financial markets trying to ski up the slope. When only looking down at paper and up at the ideal of exponential growth, they're oblivious to the bubbles their clients are blowing, which always burst. How to tell them? So I said: A pond lily doubles its surface daily and on the 30th day covers the pond. On what day did it cover half the pond?

Thanks for noticing, Nick. I wouldn't refer to myself as a serious econometrician these days, mostly I do programming. But I used to be.

On the BVAR:

It is well known in economic forecasting circles that simple models with few parameters tend to forecast better out of sample than more elaborate models with many parameters. Suppose you simulate a model like:

y(t) = a + b1*x1(t) + b2*x2(t) + ... + bk*xk(t) + e(t) (1)

where e(t) and xi(t) are independent and identically distributed, and with 1 > b1 > b2 > ... > bk > 0. If the number of observations you have is large relative to k, then estimating a model that includes all k of the xi(t) will give you good out-of-sample forecasts. But if you don't have very many observations, you'll find that dropping some of the xi variables with small coefficients from the regression improves the model's forecasting ability. The variance of the estimated coefficients is smaller if you estimate fewer of them, and for prediction purposes, a precisely-estimated incorrect model is often better than an imprecisely-estimated correct model.
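
A quick simulation of that claim (the geometric coefficient pattern bi = 0.5^i, the sample sizes, and keeping only the three largest coefficients are all just illustrative choices):

import numpy as np

rng = np.random.default_rng(1)
k = 10
b = 0.5 ** np.arange(1, k + 1)   # 1 > b1 > b2 > ... > bk > 0

def oos_mse(n_train, n_keep, reps=5000):
    # Mean squared one-step forecast error using only the first n_keep regressors.
    errs = []
    for _ in range(reps):
        X = rng.normal(size=(n_train + 1, k))
        y = X @ b + rng.normal(size=n_train + 1)
        bhat, *_ = np.linalg.lstsq(X[:-1, :n_keep], y[:-1], rcond=None)
        errs.append((y[-1] - X[-1, :n_keep] @ bhat) ** 2)
    return np.mean(errs)

print("all 10 regressors:", oos_mse(15, 10))  # many imprecisely-estimated coefficients
print("largest 3 only:  ", oos_mse(15, 3))    # biased, but estimated precisely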

A typical conventional vector autoregression (VAR) in macro has 4 lags of six variables. Throw in a constant, and this means each of the six equations has 25 parameters to estimate. You also have to estimate the 21 parameters of the symmetric 6 by 6 covariance matrix. You typically only have 20 or 25 years of quarterly data to work with, which means you're estimating 171 parameters with only 480 to 600 observations. That doesn't sound too bad until you realize that there's likely to be considerable collinearity amongst your six variables. You are going to end up with estimated standard errors on your coefficients that are so large as to render most of them meaningless, and the out-of-sample forecasts will be quite poor.

Litterman's BVAR is a form of ridge regression, an old technique used by statisticians to reduce the effective number of parameters estimated by biasing the estimates in a particular direction. It is one of a number of so-called "shrinkage" estimators. In traditional ridge regression, the coefficients are shrunk (biased) towards zero. The Litterman prior biases the coefficients towards the "six independent random walks" model. However, you can use the same technique to bias coefficients in some other direction. As a grad student many years ago, I worked for a while on shrinking a VAR towards a cointegration prior, but I never really finished it. One day somebody ought to pursue it.
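
To see the "some other direction" idea in one formula: ordinary ridge minimizes ||y - Xb||^2 + d*||b||^2, and replacing the penalty with d*||b - b0||^2 shrinks towards an arbitrary prior mean b0 instead of zero. Setting b0 to the random-walk coefficients (one on the variable's own first lag, zero elsewhere) mimics the direction of the Litterman prior, though only loosely; the real Litterman prior also scales the amount of shrinkage by lag and by variable. A minimal sketch:

import numpy as np

def shrunk_toward(X, y, d, b0):
    # Generalized ridge: minimizes ||y - Xb||^2 + d*||b - b0||^2,
    # whose solution is inv(X'X + dI)(X'y + d*b0).
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + d * np.eye(p), X.T @ y + d * b0)

# b0 = 0 recovers ordinary ridge; a random-walk b0 shrinks a VAR
# equation towards y(t) = y(t-1) + e(t).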

At any rate, the fact that BVARs with the Litterman prior do about as well at forecasting most macroeconomic series as the big econometric models that used to be popular is one reason those big models have fallen out of favor. What's important in the context of this discussion is that BVARs contain zero economic theory content, and yet they perform about as well as models with lots of built-in economic theory. It's true that BVARs are lousy at predicting things like turning points or evaluating the effect of policy changes. But then again, that's also empirically true of the theory-based models.

Jeff: If I understand this correctly, then if there were zero collinearity between the 6 X variables (just suppose, I know it's almost impossible), ridge would not help the forecasts (or would it make them worse?), and a BVAR would do the same?

But in general, given that there almost always is some collinearity, a BVAR is the best way we (currently) know to forecast atheoretically. So it's the benchmark against which we test theoretical restrictions.

And we are talking about conditional forecasts, I presume? Future Y, conditional on future X.

Nick: The usefulness of a shrinkage estimator depends on how much information you have to work with. Collinearity reduces the amount of information in the data because some columns of X are nearly the same as linear combinations of other columns. But you may have insufficient information for other reasons, like not having enough observations. If you want to estimate 25 parameters but only have 24 observations, you can't use the OLS estimator

b = inv(X'X)X'y

because X'X is not invertible. But you can compute the ridge estimator

r = inv(X'X + dI)X'y

if d > 0. Note that the OLS estimator is just the special case d = 0.

The covariance matrix of the ridge estimator is (sigma^2)*inv(X'X + dI), where (sigma^2) is the variance of the error term. The diagonal elements are the variances of the individual coefficients. If X'X is singular, or nearly so, then small d yields coefficient estimates with large variances. If you use a larger d, you reduce the variance of the estimated coefficients by shoving them closer to zero.
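
A small numpy illustration of both points, with the dimensions from the example above (sigma^2 is taken to be 1 because the simulated errors are standard normal):

import numpy as np

rng = np.random.default_rng(2)
n, p = 24, 25                      # 24 observations, 25 parameters
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

XtX = X.T @ X
print(np.linalg.matrix_rank(XtX))  # at most 24 < 25: X'X is singular, so OLS fails

for d in (0.1, 1.0, 10.0):
    r = np.linalg.solve(XtX + d * np.eye(p), X.T @ y)  # ridge estimator
    v = np.diag(np.linalg.inv(XtX + d * np.eye(p)))    # coefficient variances
    print(f"d = {d:4.1f}  |r| = {np.linalg.norm(r):.3f}  mean variance = {v.mean():.3f}")

Larger d shoves the coefficients, and their variances, towards zero.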

I wouldn't say that a BVAR is the best we know how to do, because, in general, you can only make statements like that when you know what process generated the data you're working with. In macro, we don't. Rather, I'd say that Litterman's BVAR has been tried on a pretty fair number of macroeconomic data sets and has worked pretty well with a lot of them. That may seem kind of weaselly, but in fact it's more than you can say of most techniques.

As for your last question, no. VARs and BVARs are usually used to forecast everything at once.
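
In code, the everything-at-once forecast looks like this; a sketch using statsmodels' plain (non-Bayesian) VAR on simulated stand-in data, since a BVAR adds the prior but forecasts the same way:

import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(3)
data = np.cumsum(rng.normal(size=(100, 3)), axis=0)  # 100 quarters, 3 series

res = VAR(data).fit(4)                 # VAR with 4 lags
fc = res.forecast(data[-4:], steps=8)  # joint 8-step-ahead path for all 3 series
print(fc.shape)                        # (8, 3): no future X's are supplied anywhere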

Must be hard to take someone seriously who rides in on a high horse (I look like Don Quixote chasing giant “windmills”, eh? :) and comes onto the vast field of economics, rides right to the center of your profession, and stakes a claim. The stake would be right through the heart, except standard economics has no heart – and works with half a mind, the half centered by ego (I-ness presupposes a separation between self and object, I surmise); mainly using mechanical models and so much math centered by 0: all things a robot can do or potentially do. You're not robots, but you do “zip” the left eye to be professional, objective, work around 0 - and no doubt un-zip in your personal life.

So, what evidence can I bring forth to support the claim that ego centers objectivity? Recall the question here earlier about “sensing” and “intuition”, “feeling” and “thinking”. This is a “whole” used in Analytical Psychology: Jung's typology relative to mind functioning. Thinking and Feeling are formally absolute opposites (this assumes that when you're thinking, there's no feeling; when you're sensing, it's impossible to intuit - and vice versa for the opposing pairs). Jung put Ego (self-interest) right in the middle of his mind functions. When psychologists in England asked me to do action research with them there, I revised this wonderful typology and put Ego as a center between Sensing + Thinking – and what was put between Feeling + Intuition? Yup. Empathy. Worked like a charm, though it's actually what the ancients called a philosopher's stone, useful for grounding a philosophy in experience.

The same revision was done in the school of Individual Psychology too: it is what I'm doing for you in mainstream economics (and before in forestry, agricultural and ecological economics), in education and philosophy: the latter being a resurrection of the 4Elements, but not literally: as a root metaphor to develop analogues, do heuristics and develop rules of thumb for action research in the fields mentioned. And they are all integrated using the root metaphor. Notice Fire + Water is to Thinking + Feeling as Air + Earth is to Intuition + Sensing. Individual psychologists use 4Attitude Types: Controlling; Getting; Pleasing; Avoiding. I simply rearranged them as opposites: Pleasing and Getting; Controlling and Avoiding, with the elemental root metaphor. The analytical and individual psychologists had never seen anything like it – and loved it. Alfred Adler and Jung, who both abandoned Freud early, always liked and respected each other's work, though they went in different directions developing their schools – and now some of their disciples can and do work together with an integrated theory for common practice.

I have vast experience and am experiential in resolving problems, which is what action research is mostly about. But my simple math is the complete opposite of your sophisticated maths. 0 has no existence in experience – it's all in your mind. Look!!! When have you ever seen nothing, really? No, you see something. I see one apple, two pigs, but not nothing; and there is no such thing in reality as a negative number of something. An apple can be absent, but you can't have anything less than 0, like –2 apples, in reality, other than in paper economics, as I see it. But like I said, that's in our minds, in imaginative mathematical intuition, and an incredible fiction to do economic science (though watch out for exponentials; they'll explode in your face). Still, wipe out all the “paper” and the real economy still stands, albeit absolutely still (for one moment at least, to reboot the system) with the Solon solution.

Like the ancients (who didn't know 0), my math centres on One and works with integers up to four parts to form wholes. Four-part wholes were the best, according to Plato. And I simply work with fractions… halves (½ self-interest + ½ utility) or thirds in analysis to establish rules of thumb. Look at your hand: touch forefinger and thumb. It forms a whole, a circle, with three independent fingers. The best wholes are four-folded (i.e. the 4Elements); and the best movement is three “fingers” moving together with one finger still, forming a whole with the thumb.

I'll stop here and apologize for the length of this post. Still, it is on topic, asserting that none should be forced to take an intro economics course if its assumptions are off base. And the more I look at the problem, the more of that truth comes to light. So I will do at least one more post to clarify some ambiguity: especially the statement “quantify everything and qualify nothing”, and Gresham's Law, where the bad of selfishness drives out healthy self-interest, creating a narcissistic culture (read Twenge & Campbell, 2009: The Narcissism Epidemic: Living in the Age of Entitlement).

Years ago I was in an Economics graduate program at a major US university (I never did anything with Economics), and one of the things I learned was that many individuals knew nothing of Economics beyond the math. They could derive, integrate and solve rings around me, but exhibited little understanding of how the math relates to real life. I tend to think the people at U of Chicago are very weak at real-world application, as exhibited by the ignorant statements of several of their faculty and the above-noted Fed President.

Has Greg Mankiw been reading this blog?

http://www.nytimes.com/2010/09/05/business/economy/05view.html?_r=1&scp=1&sq=a+course+load+for+the+game+of+life&st=cse
