
Comments


"We did in fact get a recession and not a boom when central banks hit the ZLB"

How is the ZLB on reserves relevant if most of the money supply isn't reserves? Broader-money lending rates are nowhere near zero, so it could be said that we are not at the ZLB.

Mike: if currency pays 0% nominal interest, that puts a floor on other nominal interest rates (except for the hassle of safety deposit boxes etc.). Now true, not all nominal interest rates are at or near 0%, especially longer term rates. But the NK model assumes the central bank sets the very short term rate on very safe loans.

I don't get this hankering for determinacy. Show me a model in which the price level goes to some definite level after billions of years have elapsed and I'll show you a model which is no bloody use to anyone.

AFAICR, one reason Taylor-style interest-rate rules became popular was that they seemed to describe actual central bank behaviour pretty accurately. Money-growth rules didn't, with the exception of the Bundesbank now and then. If such rules imply that the price level in 2063 is indeterminate, then so be it.

As for putting money in the NK model, it's already there. The prices are expressed in terms of money and the loans are denominated in money. In Gali's version at least, you can even put money in a (separable) utility function and nothing much changes.

I understand what you say, but I want to add something that I believe the models are missing. How is the rate on reserves going to be relevant to broader economic measures like Y or P if most transactions in the economy don't use currency (base money)? The interest rate on broader money must be relevant, not just the rate on reserves.

Kevin: "I don't get this hankering for determinacy. Show me a model in which the price level goes to some definite level after billions of years have elapsed and I'll show you a model which is no bloody use to anyone."

I've shown you a model in which the level of output and rate of inflation *right now* can be anything whatever. I've shown you a model which is no bloody use to anyone.

And even if you simply put money in a separable utility function, at least you've got a Pigou effect so P* is defined (as long as they target M and not a nominal interest rate). [Edit: so your model can actually say what today's output and inflation will be.] That's a helluva lot better than nothing.

I think the claim that "New Keynesian macroeconomists have just assumed [the problem of indeterminacy] away, without us realising they were doing so" is wrong. There is a sizable literature on indeterminacy in macro models, and in the NK model in particular. A major reason the Taylor rule became popular in monetary theory is that, in addition to being a reasonably good empirical description of the Fed's behavior, the Taylor principle ensures the determinacy of (locally bounded) equilibrium in the NK model.
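
For concreteness, the textbook three-equation model behind that claim (a sketch in roughly Gali's notation; the parameter names are the standard ones, nothing specific to this post):

x_t = E_t[x_{t+1}] - (1/\sigma)(i_t - E_t[\pi_{t+1}] - r^n_t)   [Euler/IS curve]
\pi_t = \beta E_t[\pi_{t+1}] + \kappa x_t   [NK Phillips curve]
i_t = r^* + \pi^* + \phi_\pi(\pi_t - \pi^*) + \phi_x x_t   [Taylor rule]

The standard result (e.g. Bullard and Mitra 2002) is that the locally bounded equilibrium is unique when \kappa(\phi_\pi - 1) + (1 - \beta)\phi_x > 0, which collapses to the familiar \phi_\pi > 1 when \phi_x = 0.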

I agree with the general point that many macroeconomists want to interpret NK models in terms of old-fashioned hydraulic Keynesian intuition and that this is wrong. In dynamic rational expectations models, the name of the game is determining which entire sequences of history-contingent plans are consistent with equilibrium and which are not. This kind of reasoning is fundamentally different from something like 'if inflation rises, let's raise real interest rates to slow down the economy.'

Both your critique and Cochrane's critique of NK models are best understood in terms of equilibrium selection.

I am actually not even sure what your concern really is. Is it that the equilibrium level of output is indeterminate? Or is it that the model always converges back to the steady state in the limit? If it's the former, then that is just the same as Cochrane's critique; I will say more about it below. If it's the latter, well, what do you suggest as an alternative? Explosive dynamics?

You say that NK modelers "just [assume] that Y(t) asymptotes towards Y*(t) as time goes to infinity, even though there is nothing in the model that says it should." But they do not 'just' assume this. The outcome is the result of judgments about the kinds of equilibria that ought to be admissible. Basically, they make the following sequence of modeling choices:

#1. The US economy has been remarkably stable over the past 150 years, so we will rule out equilibria in which some variables explode in the limit.
#2. We think the Fed has a long-run inflation target.
#3. We think the Fed commits to setting the nominal interest rate according to a Taylor rule: the rate is increased when inflation is above the long-run target. We think the Fed moves the interest rate more than one-for-one in response to changes in the inflation gap.
#4. As monetary economists, we don't want to get into the business of modeling the political process. We therefore assume a passive fiscal policy.

These choices imply that in the long run, output will tend to converge toward its steady-state value under the Fed's target inflation rate. (This will not be equal to the flexible-price steady-state output level unless the inflation target is zero. For this reason, most NK models imply that the optimal long-run inflation target is close to zero. In practice, it is indeed common to assume that the long-run target is zero. But it is not necessary to assume that.)

So if you think that the tendency of output to converge back to the steady state is a problem, I guess you think the real-world economy is more appropriately described by an equilibrium in which variables diverge to infinity in the limit? (i.e. You want to drop modeling choice #1?)

Cochrane's critique is very deep, I think. Suppose you accept that explosive equilibria should be ruled out. The model still has multiple rational expectations equilibria, in general. The question then becomes: should we allow the Fed to select a particular equilibrium by committing to a policy that makes all the other potential equilibria explosive (and hence rules them out)? This is how the Taylor principle achieves determinacy, and it is deeply problematic in at least two ways.

First, it appears to require the Fed to commit to out-of-equilibrium policy actions that are not credible. The Fed has to be committed to blow up the economy unless the private sector follows the unique equilibrium the Fed wants. Why should anyone believe that the Fed would do this?

Second, the transversality condition for real debt is an optimality condition for the household, and we typically do not allow the government to threaten to violate household optimality conditions out of equilibrium. If we allow this, why not just allow the government to rule out undesirable equilibria by threatening to, say, introduce an arbitrage opportunity (which cannot hold in equilibrium) if any of those undesirable equilibrium paths occur? That would be just as legitimate.
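
To see the selection mechanism in its starkest form, strip out the frictions entirely (a sketch of the frictionless example Cochrane uses): the Fisher equation gives i_t = r + E_t[\pi_{t+1}], and the Taylor rule gives i_t = r + \pi^* + \phi(\pi_t - \pi^*). Combining the two:

E_t[\pi_{t+1}] - \pi^* = \phi(\pi_t - \pi^*)

With \phi > 1, any \pi_0 \neq \pi^* puts expected inflation on an explosive path, so if explosive paths are ruled out, \pi_t = \pi^* for all t is the unique equilibrium. The 'determinacy' is achieved entirely by the threat to make every other path blow up.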

So my reading of Cochrane is that he regards the standard NK equilibrium selection criteria not merely as arbitrary, but indeed as fundamentally non-credible and illogical. I think there is something to that critique, and it's a problem. I don't think that merely introducing money explicitly in the model will solve the problem; there are plenty of NK models with money and they often feature indeterminacy of equilibrium, or achieve determinacy by means of policy specifications that have the same credibility problem described above.

Learnability might offer a limited solution. It is known that in simple NK models, an 'expectations-based' monetary policy rule that satisfies the Taylor principle achieves equilibrium determinacy, is learnable, and is the optimal credible monetary policy after all histories and for all possible values of private sector expectations. But results like that tend to be very sensitive to small changes in the model.

We could also try to grapple with expectations more directly, as Roger Farmer has been advocating. This would be a tough pill for many macroeconomists to swallow. One of the most intoxicating things about rational expectations was that it promised to make free parameters describing agents' expectations disappear from the model. Models with multiple rational expectations equilibria (i.e. most macro models) do not fulfill this promise, since picking an equilibrium amounts to picking particular expectations. Our best bet may be to impose some parametric structure on expectations and take the model to the data to select equilibria.

You can also get around some of these problems if you are willing to buy into the fiscal theory of the price level. See Cochrane's 2011 JPE paper for more on all this.

Anyway, these are obviously big, important issues in macroeconomic theory.

Nick, I'm still not sure where you get the output indeterminacy issue. If I solve the New Keynesian model under a very persistent but trend-stationary aggregate demand shock (take an AR(1) for the sake of argument, with an autoregressive coefficient of 0.999), the solution will be stationary: equivalently, the output gap will eventually converge to zero. Hence the long-run output gap is zero.

Now suppose there is a nonstationary/permanent AD shock (e.g. a long-run tightening of access to credit that increases households' precautionary saving for the foreseeable future). This is a common experiment when studying the steady-state effects of changes in credit constraints or the amount of uninsured unemployment risk, for example. Now, I haven't solved this case in the sticky-price model, but intuitively, after a transition phase the economy settles to a new long-run equilibrium. In that new long-run equilibrium, can you have a permanently negative output gap? Not if people in the economy expect inflation to converge to the inflation target: in that case the New Keynesian optimal pricing equation (the New Keynesian Phillips curve) says that when inflation equals target inflation, the output gap must be zero. Intuitively, once we've converged to a new long-run equilibrium, price rigidity does not matter, because the optimal flexible price matches the actual price with no adjustment (strictly speaking this assumes an approximately zero inflation target, or that, if long-run inflation is significantly above zero, even sticky-price firms frequently do a mechanical reindexation of their price to long-run inflation). So I think if Michael Woodford could respond, he'd say he sets the long-run output gap to zero because he only looks at equilibria where, in the long run, people think inflation converges to some target level (e.g. the Bank of Canada's inflation target).
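
In symbols (a sketch in standard notation): the New Keynesian Phillips curve is \pi_t = \beta E_t[\pi_{t+1}] + \kappa x_t, so any steady state satisfies (1 - \beta)\pi = \kappa x. With a zero inflation target, or with full indexation (which makes \pi the deviation from trend inflation), inflation at target forces x = 0: the long-run zero-output-gap claim above.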

Nick: my comment about Cochrane applies only to the thread on comparative advantage. He may be on to something here.

Alexander: thanks for your detailed and well-informed comment. I'm only going to respond to part of it now.

I didn't really discuss this in the post, but I read what John Cochrane had to say (in his linked paper) about the ability of the Taylor Principle (I'm going to go Canadian nationalist and insist it's really the Howitt/Taylor Principle) to select an equilibrium, and I think he's right. The H/TP is like the central bank saying "If you don't all follow the equilibrium path I want, I will act in some crazy fashion and make sure no equilibrium path exists!" (I think that's what John Cochrane is saying.) I just don't find it plausible that a threat like that, even if 100% credible, could work at the level of atomistic individual agents to push them back onto the equilibrium path and thus prevent them straying off it in the first place.

Plus: If the H/TP *can* keep them on one equilibrium path, then it would work just as well to keep them on John Cochrane's equilibrium path as on the "standard" equilibrium path. (That's what I was alluding to when I said "Both solutions would be equally stable or unstable in the sense of staying or not staying on that path if the central bank threatened to respond if they strayed from the equilibrium path.")

Plus, IIRC, when Howitt (and Taylor?) introduced the H/TP, it was in the context of a model with a more standard Old Keynesian IS curve, (and a Phillips Curve where inflation couldn't just jump like it can in the Calvo model?), where raising the nominal interest rate more than inflation deviations from target really would increase real interest rates and reduce the *level* of output and reduce the level of inflation. So I *think* that when we use the H/TP to resolve the indeterminacy we are again implicitly relying on Old Keynesian intuitions. In an NK model, there is nothing that prevents inflation jumping in response to a threat to raise nominal interest rates.

"I am actually not even sure what your concern really is. Is it that the equilibrium level of output is indeterminate? Or is it that the model always converges back to the steady state in the limit? If it's the former, then that is just the same as Cochrane's critique;..."

My concern is that even if we assume the central bank can set a real interest rate (assume the overnight rate is fully indexed to inflation, for example), and even if the central bank always sets exactly the right real interest rate, and will do so forever, the current level of Y is indeterminate unless we just assume Y always converges to Y*. (Forget the Phillips Curve, and just look at the Euler IS equation.) And that assumption cannot be justified. This is different from John Cochrane's critique, because he does just assume Y converges to Y* in the limit.
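
In symbols (a sketch using the standard log-linearized notation): the New Keynesian Euler/IS equation is x_t = E_t[x_{t+1}] - (1/\sigma)(r_t - r^*_t). Set r_t = r^*_t for all t and it collapses to x_t = E_t[x_{t+1}]. Any constant path x_t = c satisfies that equation, for any c whatsoever; nothing in the equation pins down c = 0 unless the terminal condition x_t -> 0 is appended by hand.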

"But they do not 'just' assume this. The outcome is the result of judgments about the kinds of equilibria that ought to be admissible. Basically, they make the following sequence of modeling choices:..."

This is the way I read it: monetary economies in the real world seem to eventually converge back towards something that looks like Y*, without exploding or imploding (unless central banks do something really Zimbabwean stupid). But NK models (unlike monetarist or some Old Keynesian models) give us no reason whatsoever why this should be true. So it is not OK just to impose this as an additional assumption on the model to select an equilibrium. This is something the model should be explaining. A satisfactory model should explain why monetary economies normally do not explode or implode.

I will reflect more on your comment. Thanks again.

daniels: I disagree. If you are getting those results, I think you are simply picking an equilibrium path that converges, and ignoring other equilibrium paths. I am saying that even if there are no intrinsically relevant shocks, and even if the central bank sets the real interest rate exactly right always, there is nothing to prevent a sunspot causing Y to be permanently (say) equal to 0.5Y*. 50% unemployment permanently, even if the central bank does everything right and there are no shocks. See my post I linked to above.

Another great post Nick, I'm glad you followed up on Cochrane's work.

In studying DSGE models (Gali, Woodford, Wickens), it's puzzled me that GE always seems to be assumed or even axiomatic, as if the doubts of Arrow, Hahn, Radner etc. about existence, stability, uniqueness and so on could be safely ignored. So the model seems to be about the time taken for the economy to get back to its rightful trend path if it weren't being prevented by 'frictions'. So perhaps it is more than an assumption implicit in the NK model; it is an implicit proof of GE. But I've never seen it explicitly stated as such. What am I missing?

If you eliminate equilibria that are explosive, and you allow firms to endogenously choose some mechanical indexation in response to long-run inflation significantly different from zero, then I don't see what there is to disagree about. The New Keynesian Phillips curve says that in the long run the output gap is zero. See any paper with a New Keynesian Phillips curve where sticky-price firms can still index to inflation. Now, if there is no indexation by sticky-price firms, the New Keynesian Phillips curve actually features a long-run trade-off between output and inflation, because the coefficient on expected inflation is below 1 (a position which I think Tobin was defending against Friedman's long-run neutrality position, going back to the initial debates about expectations in the Phillips curve). Again, just taking the Phillips curve at steady state, you can sustain a permanently positive (negative) output gap if you have inflation (deflation). Maybe that's what you're referring to? For sure, you need to use your Phillips curve as well to close the model; you can't just look at the IS curve and monetary policy. But the slope of the trade-off is quite steep for typical parameter values, so you'd need quite high inflation (deflation) to get a permanent boom (bust). For typical deflation levels like the one observed in Japan, you wouldn't be able to explain a big negative output gap this way.
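
To put numbers on that slope (illustrative values of mine, not daniels'): the steady-state Phillips curve without indexation gives \pi = \kappa x/(1 - \beta). With a quarterly \beta = 0.99 and \kappa = 0.1, sustaining a -5% output gap forever would require \pi = (0.1)(-0.05)/(0.01) = -0.5, i.e. 50% deflation per quarter, which is why Japan-sized deflation cannot account for a big permanent negative gap this way.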

HJC: the initial stability debates have been ignored because the discussion has moved on to other setups for proving convergence of various trading arrangements with market power and other frictions to perfect competition as trading frictions vanish (look up the literature on search and matching frictions), or because, once firms have market power as in New Keynesian models, the relevant debates are more about convergence to, and stability of, Nash equilibrium.

daniels: Thanks very much, I'll take a look. (If there is any particular paper/book you'd recommend, that would be appreciated.)

Nick, I can't see that it's any great help to know that P* is defined as long as they target M, since we know that in reality they don't do that. It's like having a model of what would happen if we used chariots for transport.

Alexander obviously knows this stuff better than I do. His comment tends to confirm my impression that the problems that are bothering you are problems with RE models generally. Why pick on NK particularly?

I am inclined to believe 4 is true but not necessary, and that with 6 would come sufficiency - a matter of incompleteness. Indeterminate in reality, without the response that 6 would recommend.

It's 4. The aversion to multiple-equilibria models is some kind of a modern macro fetish. They're seen as "not having predictive power". But if the real world has multiple equilibria, then it's silly to force the models to have only one. And multiple-equilibria models are, or at least used to be, a staple of development economics and at least some parts of growth theory. And even international macro, at least when it comes to exchange rate crises. And in those areas no one rejects those models because they lack predictive power - it's seen as a feature, not a bug. So in terms of NKMs... it does look like essentially a "cultural" hang-up.

a question that will probably just reveal my lack of understanding: if you have a model in which some variable is indeterminate, does assuming some initial condition resolve the problem?

Suppose the price level is indeterminate in your model, which I currently understand as meaning that your model consists of a system of equations which might have unique solutions for various quantities but has an infinity of solutions for the price level. But if you were to say, OK, but for whatever reason we find ourselves with a particular price level, would that be sufficient to pin down the price level from thereon out?

I have always been puzzled by this. The price level exists - it is out there. It might very well be the case that we could equally have (an infinity of) alternative price levels - in theory any price level would be consistent with everything else we observe today (i.e. output, inflation expectations, interest rates etc.) but for whatever reason we have the price level we have. Why should I worry if my model also has the feature that any price level would be consistent with everything else we observe today?

daniels: "Maybe that's what you're referring to?"

No. Like you, and (almost?) everyone else, I think of that minor non-super-neutrality in the Calvo model as just a minor annoying glitch that's some sort of artefact of how we set the model up, and I ignored it. Assume beta=1, assume crude indexation as you suggest, or assume an economy that grows at roughly the equilibrium interest rate (I think that works too), or assume whatever, just to get it out of the way.

Take that Calvo Phillips Curve (with that annoying glitch assumed away), or any vaguely sensible Phillips Curve, and add an **Old Keynesian IS curve**, with the central bank setting r (or nominal i) and you have a model that makes sense, and which eliminates the indeterminacy of the level of Y that I was complaining about. There is no *automatic* tendency to Y* ("full employment") in that model, but if the central bank sets r correctly, it will go to Y* and stay there. And if the central bank sets r incorrectly for too long, you eventually get either explosive inflation (if r is too low) or explosive deflation (if r is too high). Or add an Old Keynesian ISLM model, and assume the central bank sets M, and you have a model that makes sense, and which will automatically go to Y* eventually (unless the central bank does something really stupid with M(t)). In both those cases you can solve for Y(t) if you know what the shocks are and what the central bank is doing. But with the New Keynesian IS curve you cannot solve for Y(t) unless you assume that Y(t) eventually converges on Y*. And there is no reason to assume it will.

Or, put it this way. You say "The New Keynesian Phillips curve says in the long run the output gap is zero." I say No it doesn't. The NK Phillips Curve says that *if* the output gap is not zero in the long run (or on average) then inflation must explode (or implode). For example, if you take the Old Keynesian IS curve, and assume the central bank always sets r too high, the NK Phillips Curve does *not* say that the economy will go to Y* despite this. Instead it tells you the economy will always have Y < Y* and there will be explosive deflation.
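
(To spell that out: with the beta glitch assumed away, iterate \pi_t = E_t[\pi_{t+1}] + \kappa x_t forward to get \pi_t = E_t[\pi_{t+T}] + \kappa \sum_{j=0}^{T-1} E_t[x_{t+j}]. If the gap is stuck at some xbar < 0 forever, then either long-run inflation expectations are anchored and current inflation is driven without bound toward minus infinity as T grows, or current inflation is finite and the expected future inflation path diverges. Either way, a permanently nonzero gap is inconsistent with a bounded inflation path.)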

Luis enrique: "a question that will probably just reveal my lack of understanding: if you have a model in which some variable is indeterminate, does assuming some initial condition resolve the problem?"

In this case, yes. And that's what John Cochrane did, when he assumed initial Y=Y* immediately after the shock hits. Or we can assume some terminal value for Y, like Y=Y* immediately after the shock ends, which is what the standard solution does. But in both cases those are simply assumptions, with nothing to say that one assumption is better than another. Because the model says that Y can jump up or down instantly, if there's news. In this model, P (the price level) cannot jump up or down, so P is pinned down by history. But p (the inflation rate) can jump up or down, so p is not pinned down by history.

Alexander: "We could also try to grapple with expectations more directly, as Roger Farmer has been advocating."

This is how I interpret Roger's work. He takes a fairly standard model, deletes one equation (the labour market Nash bargaining solution for the real wage), so he now has a model that is one equation short of a solution, and then he adds an extra equation back in for the stock market, and calls it a "belief function". I am very leery of that approach. It's the labour market that has the indeterminacy, so why not replace the Nash Bargaining solution for W/P with something else that tells you what is going on in the labour market. Nominal wages being determined by custom, or whatever, that creates a Schelling focal point, sounds more sensible to me. Why look to the stock market for an extra equation? Why not the market for peanuts, or anything?

Ever since Adam Smith, we have puzzled over market economies as a self-ordering system. Old Keynesian macroeconomists quite rightly asked: what is it, if anything, that would lead the economy to full employment? To simply assume, as New Keynesians do, that the economy eventually converges on full employment and finite inflation, even though there is nothing in their model to say it should, is to throw away 230 years of economics. It would be like biologists throwing away Darwin, and saying "somebody up there must just like beetles, and bunny rabbits, and us!"

I am quite happy to let the data determine the parameter values for us. I am not happy to let the data choose the equilibrium for us (unless we really do believe it's a pure coordination game like which side of the road we drive on, or whether or not there's a bank run, and the data say that both equilibria do in fact occur).

Nick said: "The NK Phillips Curve says that *if* the output gap is not zero in the long run (or on average) then inflation must explode (or implode)."

Okay, I now understand what you're claiming and I agree that it is correct. The Fed's choice of the long-run inflation target pins down the long-run steady state of the model, and the (Howitt-)Taylor principle together with the arbitrary decision to rule out unstable inflation paths pins down a unique equilibrium solution that converges back to that steady state in the limit.

This is what John Cochrane's 2011 JPE paper was about. He argued that since all of the economy's optimality conditions are satisfied along the explosive inflation paths, there is no economic rationale for excluding those paths as equilibria. This is the same argument you're making, and it's correct.

(A clarification: in my last comment, I gave the impression that the Taylor principle rules out undesired equilibria by making them violate the transversality condition. That's wrong. The transversality condition only rules out explosions in real debt; nominal explosions can be consistent with the transversality condition. Thus, using the Taylor principle to achieve 'determinacy' does indeed require imposing an additional ad hoc terminal condition over and above the transversality condition. In the literature, some scholars have suggested equilibrium selection policies that do 'achieve determinacy' by committing the Fed to violate household optimality conditions out of equilibrium. Those proposals are obviously nonsensical.)

Nick said: "This is how I interpret Roger's work..."

Check out the new NBER working paper by Farmer and Khramov, "Solving and Estimating Indeterminate DSGE Models," and see if you think it has any bearing on this discussion. (Actually, their method does not address the issue of explosive equilibria. In that sense it is less relevant than I initially thought, though the basic idea -- letting the data tell us what the right theory of expectations is, conditional on the requirement that those expectations be rational -- is there. It is directly relevant for the indeterminacy discussed in Cochrane's new paper, where there are multiple locally bounded equilibria.)

In general, I too am leery of the approach. However, a defender of Farmer might say that at least he is not obfuscating the issue. We have a model that is good in many respects but features equilibrium indeterminacy. What should we do? We can either select a unique equilibrium by engaging in trickery, which is the standard NK approach; or we can say "The model has many equilibria, and in reality we can only be in one of them. Which one is supported by the data?" That's Farmer's preferred approach, as I understand it.

notsneaky said: "[I]f the real world has multiple equilibria then it's silly to force the models to have only one."

A rational expectations equilibrium is a pretty robust thing. It imposes on all agents a consistency of plans and of expectations for all contingencies from today into the infinite future. If we are living in a rational expectations equilibrium, then we will only ever observe the playing-out of that particular equilibrium. Even if the 'true model of the world' has multiple equilibria, that information is useless to us because we are living inside one particular equilibrium. Once one equilibrium is selected, then that's it. We will never observe another one.

That's why it's hard to see equilibrium multiplicity as a feature rather than a bug. If you take rational expectations seriously, then equilibrium multiplicity is not useful. For example, you cannot explain real-world events by appealing to 'a jump from one equilibrium to another one.' That just doesn't make sense. If agents knew that there were some process by which such a jump could happen, they would form rational expectations about it and plan accordingly, and then you would end up with a completely different model! In the rational expectations paradigm, all observed events must be interpreted as part of a single equilibrium path. That's why multiplicity is annoying.

Alexander: "This is what John Cochrane's 2011 JPE paper was about. He argued that since all of the economy's optimality conditions are satisfied along the explosive inflation paths, there is no economic rationale for excluding those paths as equilibria. This is the same argument you're making, and it's correct."

I am both happy and sad to hear that. Sad because I thought I was saying something new. Oh, well, like my car, it was new to me. And I probably wouldn't have understood his paper anyway.

"We can either select a unique equilibrium by engaging in trickery, which is the standard NK approach; or we can say "The model has many equilibria, and in reality we can only be in one of them. Which one is supported by the data?" That's Farmer's preferred approach, as I understand it."

Then I prefer that approach too, though I still don't really like it.

BTW, referring back to your earlier comment: using the Fiscal Theory of the Price Level to pin down P* is very much like (as we would have said in the olden days) using the Pigou effect to pin down P*. Except the real/nominal rate of interest paid on central bank money is nearly always less than the real/nominal growth rate of the economy, so that government-issued money is net wealth in a way that government-issued bonds may or may not be. Like in Samuelson 1958, where "money" would pay a rate of interest less than the growth rate if the real stock of money were smaller than equilibrium.

Kevin: "Nick, I can't see that it's any great help to know that P* is defined as long as they target M, since we know that in reality they don't do that. It's like having a model of what would happen if we used chariots for transport."

Maybe the model is wrong. Or maybe it really does put us in a world where inflation and output are indeterminate, and we were lucky up to 2008, then unlucky from 2008 to today, so we ought to switch from r to M so we don't have to rely on sheer luck in future.

Wait, where did this comment come from?:
(it's awesome if it says what I think it says, equilibriums are a joke like the immaculate conception)

"Ever since Adam Smith, we have puzzled over market economies as a self-ordering system. Old Keynesian macroeconomists quite rightly asked: what is it, if anything, that would lead the economy to full employment? To simply assume, as New Keynesians do, that the economy eventually converges on full employment and finite inflation, even though there is nothing in their model to say it should, is to throw away 230 years of economics. It would be like biologists throwing away Darwin, and saying "somebody up there must just like beetles, and bunny rabbits, and us!"

I am quite happy to let the data determine the parameter values for us. I am not happy to let the data choose the equilibrium for us (unless we really do believe it's a pure coordination game like which side of the road we drive on, or whether or not there's a bank run, and the data say that both equilibria do in fact occur)."

Nick,

Can your model not have both r and M and still work? Why must it be one or the other?

Dan: what it says is that *this particular* equilibrium in the *New Keynesian* model is a joke. Like if an Old Keynesian said "well, my model shows that even if there is no shock, all levels of unemployment between 0% and 100% are an equilibrium, but I'm just going to assume the equilibrium is 7%, because 7% unemployment keeps inflation stable, and that the economy always goes back to 7% unemployment all by itself, somehow."

Frank: No.

Nick

I don't quite get what the trouble here is.

You're saying that the NK Phillips curve leads to hyper-inflation or hyper-deflation, unless the CB gets it right.

And then NKs eliminate either explosive scenario w/o even acknowledging it.

So, isomorphically, NKs eliminate CB failure in the long run. In the long run, money is neutral, because monetary policy is right.

So, what's the trouble here?

Ritwik: "You're saying that the NK Philips curve leads to hyper-inflation or hyper-deflation, unless the CB gets it right."

NO! That would be true for the **Old** Keynesian IS curve. I'm saying the NK model, with the **New** Keynesian IS curve, can lead to hyperinflation or hyperdeflation **even if** the CB gets it right.

Let's go at it from Alexander's excellent comment.

"If we allow this, why not just allow the government to rule out undesirable equilibria by threatening to, say, introduce an arbitrage opportunity (which cannot hold in equilibrium) if any of those undesirable equilibrium paths occur? That would be just as legitimate"

Sure. A 'monetarist' interpretation of the NK model says that in a recession, the price of 'money' is too high. That's not just a characteristic of the recession. It IS the recession. So, the CB follows a policy where everyone *shorting* money indeed has an arbitrage opportunity.

What does 'shorting' money mean? Buying stuff, in monetarist terms. Buying risk assets, stimulating investment, in Keynesian-Wicksellian terms. Or a combination of both. Doesn't really matter.

Ritiwk: "A 'monetarist' interpretation of the NK model says that in a recession, the price of 'money' is too high."

That is a common interpretation, but it is a *wrong* interpretation. (And it is not a "monetarist" interpretation.)

Wunderbar, so we're all in agreement regarding what we're arguing about here. The New Keynesians like Mike Woodford close their model by assuming agents expect stable long-run inflation. Nick Rowe and John Cochrane have doubts about whether this is a reasonable way to choose an equilibrium. Is this surprising in some sense? Modern macroeconomic analysis is all about how expectations matter in a fundamental way. So yes, assumptions about expectations may be critical to closing/solving the model you use to think about reality. Old Keynesian models avoid this by ignoring expectations (I'm not even sure the IS/LM model I learned in undergrad properly distinguished current from expected inflation) or by mixing them up with adjustment costs and other forms of inertia in behaviour. I personally think choosing the solution of the model in which agents expect stable long-run inflation is quite reasonable. It does seem to fit with inflation surveys and with the Japanese experience of the last 20 years. But I'm sure Nick can come up with a reason why this is not decisive. Macro data are often indecisive.

Re multiplicity as a bug: it isn't a bug if you model an RE equilibrium with sunspot shocks causing you to switch between equilibria, so that agents explicitly take into account the possibility of switching between equilibria. Mertens and Ravn's (2013) analysis of a sunspot equilibrium at the zero lower bound is a good example.

Of course, modern macro theory emphasizes that a prolonged recession or depression can be caused by real shocks and frictions even in a world where the price level is flexible and the Phillips curve is not a structural relation. There are several real business cycle theories of financial recessions/crises or animal spirits, all of which cast doubt on the reliability of Phillips curve reasoning. From the perspective of those theories, the whole debate is badly targeted. You can have long-lasting recessions or depressions even with flexible prices. No need to appeal to long-lasting pricing errors by firms.

Nick

Understood. Now, the NK model makes even more sense!

Why would Y*(t) *ever* be different from Y(t) as t tends to infinity? I'd think that that's an absolutely normal constraint to place on your model. Indeed, should one not *define* Y*(infinity) to be Y(infinity)? Is there any first-principles definition of Y*(infinity) that makes more sense?

The Wicksellian cumulative process, held together only by history and price-stickiness, seems pitiful by comparison!

Nick

That's why I put the 'monetarist' in quotes. I know you disagree. I wanted to stay on topic and not get into another 'bonds' vs 'money' discussion, so was simply re-phrasing Alexander's observation to show that though an NK model may not say so explicitly, a CB has a benevolent function in the model through an isomorphism of the precise characteristic he noted.

I think the spam filter ate my last comment.

Sorry, am commenting again, but am a little excited.

What is the potential output of an economy destroyed by an asteroid attack or something similar? What is the potential output of an economy that has hit a zero growth steady state?

In both cases, Y* = Y. Asymptotically, there are no other cases. Ergo, Y*(t) = Y(t) as t --> infinity. Solve recursively for current time, using relations of the first time derivative.

Makes perfect sense to me!

A history-of-economic-thought point:

The New Keynesian tradition doesn't reject all the explosive paths because it's some feature of Keynesianism. It rejects all the explosive paths because the monetarists, and then the real business cyclists, rejected all the explosive paths by assumption. New Keynesian theory grew out of RBC, so naturally it rejects them too.

The Old Keynesians did not reject all the paths by assumption. It's why the generation of Keynesian-synthesis American economists - the Cowles Commission people - were so interested in the stability of general equilibrium. And the gen-eq project did not tell us that Y will converge to Y* through relative price adjustment. In fact, it told us the exact opposite: that Y will not generally converge under conventional assumptions. As you've noted before, Nick, the Old Keynesians really did believe that absolute and relative prices didn't stably adjust. They believed in wage-price spirals. They believed in across-the-board wage and price controls. In short, they did believe that the untamed price mechanism could drive the equilibrium into a ditch.

Cochrane is now saying "think about Y. Think about Y's long-term path" because, to be blunt, there's a brief political climate for arguing that Y's long-term path is being damaged by the microeconomic, gen-eq, Okun-gap-sized evils of the Obama administration. This scores him easy victories because, well, there was really never a rigorous reason to assume that Y converges to Y*. We just had a three-decade-long détente between the left-wing and right-wing macroeconomists, where everyone agreed to assume that the Harberger triangles would not add up, just so we could move past the navel-gazing macro theory disputes of the 1970s.

But I think Cochrane has forgotten this - forgotten why we nonetheless began assuming Y*: what kind of politics emerges when you start telling laymen that even you don't really think that markets self-equilibrate in the long run, so they may be unemployed forever. Hint: they don't start deferring to Hayek.

So I guess the Keynesians have gone from "In the long run, we're all dead" to "In the long run, we're all in equilibrium". Or "In the long run, we're all at Y*" This is great news, indeed.

Ritwik: "Indeed, should one not *define* Y*(infinity) to be Y(infinity)?"

Sure, if Y(infinity) is unique. If I'm reading Nick correctly he's saying it's not unique. (Me, I'd say it depends, but I can't offer theorems showing exactly what it depends on.)

Nick: "Maybe the model is wrong."

No "maybe" about it. In the real world new people are born. So we don't have Ricardian equivalence and we do have the Pigou effect you want, as well as lots of other messy complications.

Also, I second David's point. NK models were a Keynesian concession to Lucasians so it's a bit much for the likes of Cochrane, who seems to be a Lucasian of sorts, to be complaining about arbitrary dodges for dealing with multiple equilibria -- motes and beams, guys.

Kevin: or NK macro is a Lucasian concession to keynesians.

Or: it's not New Keynesian macro that's the problem. It's Neo Wicksellian macro. Monetary economics without money. The units are all wrong for starters. It's like buying and selling everything for metres, where nobody defines what the "metre" means, and there's not even custom to help define it. It's just asking for indeterminacy.

Kevin

Actually, uniqueness is not required.

For any given Y(infinity), Y*(infinity) is, by definition, Y(infinity).

What is Y(infinity) anyway? Either 0 (asteroid destruction) or a zero-growth steady state. The condition holds in either case and, a fortiori, holds for the entire infinitude of forward-looking models that assign probability p to the first case and 1-p to the second.

If Nick, or anyone else, can give me a better/more intuitive definition of Y*(infinity), I'll reconsider.

Ritwik: ask Gali, or one of the Neo-Wicksellians, who must mean *something* by Y*, because they assume that Y(T)-Y*(T) approaches zero in the limit as T approaches infinity. (But actually, they could get around your case quite easily, by replacing Y* with E(Y*), and it works fine for them if Y gets to Y* next year, or in 20 years, or 200 years, or whatever, as long as they can *just assume* that it does get there at *some* future date.)

Back much later.

As a corollary point, consider Nick's point:

"Define P* as what the price level would be, Y* as what real output would be, and r* as what the real interest rate would be, if all prices were perfectly flexible and there were no nominal rigidities.

Monetarist models, and Old Keynesian models like ISLM if the central bank targets a nominal variable like the money supply, have a well-defined P* and Y* and r*. If prices are sticky, and the money supply is too low, actual P will be above P* and actual Y will be below Y*"

I disagree. IS/LM and monetarist models talk in terms of Y* and P*, but that does not mean that they have a 'well-defined' Y* and P*. In fact, an IS/LM or monetarist model's implied definition of Y* is circular, or boot-strapped: it is that Y which would hold if the CB didn't screw up. But what *precisely* is that Y? No one knows. Just take a recent episode where the consensus of economists believes the CB didn't screw up, and presume that that Y is Y*.

This doesn't seem any more of a *determinate* equilibrium to me than assuming that Y(infinity) = Y*(infinity) and cascading it recursively from there. Nick, old-school Keynesians, and old-school Monetarists like history. New Keynesians like the future. One can make a choice between the two, but there's no difference in the uniqueness or determinacy of equilibrium here.

Nick

Precisely. I'm not challenging Gali. I'm agreeing with him, and indeed, providing him a qualitative justification for his assumption, should he ever choose to seek one from a no-name commenter in the blogosphere!

E(Y*) is not a 'get-around'. It's precisely what I'm thinking as well. Yes, Y gets to E(Y*) at *some* future date. If you want to admit a class of models where under some circumstances it never gets there, shouldn't *you* have a view on what E(Y*)(infinity) could be, which is independent of Y(infinity)?

All models need an anchor to bootstrap to.

IS/LM and monetarist models implicitly boot-strap Y*(current) to Y(current) during their best estimate of a recent historical CB/gov't success. That still doesn't help them solve the model, except in the most trivial static sense, and so they need implicit bootstraps of y* and y as well, as cued in any of the several versions of the statement we've all heard ad infinitum: 'the trend growth of US NGDP is 5% per annum'. (Is it, really? Does the unsophisticated act of extending the recent past into the future make the present equilibrium in my model logically unique or determinate?)

NK models implicitly boot-strap Y*(infinity) to Y(infinity), or perhaps Y*(200) to Y(200). The NK boot-strap seems far more logical to me.

If I'm understanding you, you aren't saying the NK models are incompletely specified. A model always has constraints in addition to the Euler equations. You seem to be criticizing the particular constraints necessary to stabilize the model on aesthetic grounds and saying that a better model would require fewer or more justifiable constraints.

Can't you make the counter-argument that it is only reasonable to compare the behavior of complete models?

This discussion is proving very helpful to me. I hope the same is true for others.

I am beginning to grasp the deep conceptual unity among a bunch of seemingly different issues. The fundamental point is: how we specify fiscal policy determines whether we achieve nominal determinacy.

- In the standard NK model, the monetary authority fixes the nominal interest rate. Some level of nominal government debt is inherited from the past. Fiscal policy is then specified as follows: whatever the expected path of the price level, the government will adjust the path of lump sum taxes so that real government debt equals the expected present value of future surpluses. Nominal indeterminacy comes from the fact that the fiscal authority will accommodate any price level sequence; given this accommodation, the rest of the model does not provide enough restrictions to pin down the price level uniquely.

- In the fiscal theory of the price level (FTPL), the government does not offer this accommodation. Government liabilities appear as net wealth to households, since tax policy does not have an offsetting effect. Since the fiscal authority's behavior does not ensure that the government's intertemporal budget constraint is satisfied, the price level must adjust to ensure it. This pins down the price level path. (It also has implications for nominal interest rates; if there is an independent monetary authority, it must accommodate fiscal policy in order for an equilibrium to exist.)

- I believe that Pigou effects can also be understood in this way. Money is just another government liability in these models; whether government liabilities are regarded as net wealth by households depends on whether the fiscal authority is expected to offset the wealth effect via an adjustment in taxes. Whether money is dominated in rate of return by bonds does not, by itself, change this fact. Sargent and Smith (1987, AER) provide an example of a monetary OLG model in which financial frictions make bonds dominate money in rate of return and yet many price level sequences are consistent with the same real equilibrium allocation as long as fiscal policy adjusts appropriately.
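
The common thread can be put in one equation (a sketch, in standard notation): the government's intertemporal condition is

B_{t-1}/P_t = E_t \sum_{j=0}^{\infty} m_{t,t+j} s_{t+j}

where B_{t-1} is nominal government liabilities, P_t is the price level, s_{t+j} are real primary surpluses, and m_{t,t+j} is the real discount factor. Under passive fiscal policy (the standard NK case), the surplus sequence adjusts so that the equation holds for any P_t, so it places no restriction on the price level. Under an active fiscal policy (FTPL), the surpluses are set independently, and P_t must adjust to satisfy the equation.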

What does this mean?
1. It is fiscal policy that determines the admissible price level sequences. If we specify fiscal policy in such a way that it will adjust to accommodate any price level path, it is no surprise that we get indeterminacy! This is what the standard NK model does.

2. In the basic NK model, the fiscal policy adjustment required to offset the wealth effect of a change in government liabilities is very simple: just adjust the lump sum tax to balance the budget. In the more complicated model of Sargent and Smith (1987, AER), the fiscal policy adjustment necessary to offset wealth effects is much more complex and involves maintaining a constant wealth distribution across different types of agents. It may be unrealistic to think that a real-world government would conduct such complicated fiscal adjustments to offset the wealth effects of changes in its liability structure. But is it any more realistic to make that assumption in the superficially simpler New Keynesian framework?

3. Not all models have the property that it is possible for the government to offset wealth effects. Sargent and Smith (1987, p. 91) say, "To obtain irrelevance theorems seems to require that the structure which generates a demand for government currency be one that impinges differently on different classes of agents." In representative agent models in which the distribution of wealth is irrelevant, it is probably always possible, though I haven't thought that through.

So the take-away point is: we need a better model of fiscal policy-making. We can't just dismiss it as 'politics.' Its effects on equilibria are too substantial.

Does that all sound right?

I don't think any of this stuff is new at all. (e.g. I believe Chris Sims has been calling for a more serious treatment of fiscal policy in DSGE models for years, essentially for these reasons.) Nevertheless, it is useful to me to learn it now.

Alexander.

If "In representative agent models in which the distribution of wealth is irrelevant, it is probably always possible" {"for the government to offset wealth effects"], isn't this an argument either for the inaccuracy of such models or actual irrelevance in the economy?

Given the current controversy over distribution, this would be nice to be clear about. If irrelevance is inherent in the model, then the model can't be used to demonstrate irrelevance in the economy.

Alexander: a preliminary thought:

"- In the fiscal theory of the price level (FTPL), the government does not offer this accommodation. Government liabilities appear as net wealth to households, since tax policy does not have an offsetting effect. Since the fiscal authority's behavior does not ensure that the government's intertemporal budget constraint is satisfied, the price level must adjust to ensure it. This pins down the price level path."

Remember Willem Buiter's critique of FTPL? It went something like this: "Suppose Mrs Smith has a debt of $100, but can only afford to repay $50 in real terms. Therefore the price level will double."

If the government does not print money, because M is not in the model, I don't see how the government is different from Mrs Smith.

Plus, I can imagine a world in which shells are the medium of account, and there's a fixed stock of shells and shells are not a creature of government, and there's a Pigou effect that pins down the price level. (Maybe shells are worn as jewelry and utility is a function of the real stock of shells worn S/P.) In such a world, if the rest of the model was exactly like an NK model, we wouldn't need a central bank setting i. The market would determine i and r, and the fixed nominal stock of shells would make P determinate.
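
(A sketch of how the shells would do it, using the standard money-in-the-utility-function first-order condition: if period utility is u(C_t) + v(S_t/P_t), optimal shell-holding requires v'(S/P)/u'(C) = i/(1+i). With the nominal stock of shells fixed at some Sbar, and C and i determined on the real side, that condition pins down real balances Sbar/P and hence a unique P, with no interest-rate rule required.)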

In other words, if we can use something like shells (or government bonds) to determine the price level, the central bank disappears from the model.

Alexander, it is perfectly possible to construct a perfect foresight economic model with multiple equilibria, of one sort or another. And these are multiple equilibria which have empirical meaning. And rational expectations are less restrictive than perfect foresight so if we can have them in PF models we should certainly be able to have them in RE models. If not... well, that's the failure of the modeler, not the ideas.

Specifically:

"A rational expectations equilibrium is a pretty robust thing."

Robust in what sense? What does that mean?

"It imposes on all agents a consistency of plans and of expectations for all contingencies from today into the infinite future."

Yes, I know what rational expectations are.

"If we are living in a rational expectations equilibrium, then we will only ever observe the playing-out of that particular equilibrium."

No. This is where you're stuffing your rabbit into the hat. You are assuming that rational expectations are equivalent to uniqueness. But that's stronger than just rational expectations by themselves. See somebody else's comment above. Or the whole sunspot literature.

"Even if the 'true model of the world' has multiple equilibria, that information is useless to us because we are living inside one particular equilibrium."

No, if that is the true model then we might very well be living in a world which switches between various equilibria (again, you're assuming your conclusion). And sure, that *might* mean we can't tell Frank from Barney, but then the conclusion is that... we can't tell Frank from Barney. At that point, having been humbled by the universe, we surrender. We don't make up some nonsense model whose only virtue is that it has determinacy, however ridiculous it is, because it makes us feel good about ourselves (that's sort of the whole RBC agenda, in retrospect).

Now, it might be true that the information is useless to us because we can never tell which equilibrium we are at, or whether we are observing adjustment to some equilibrium, or even some wacky (even temporary) out-of-equilibrium paths... but that's a different matter altogether.

"Once one equilibrium is selected, then that's it. We will never observe another one."

In some omniscient-observer-at-the-beginning-of-time sense, yes. In terms of some poor schmuck economist trying to make sense of the data, no. Keep on pluggin'.

Here is the part I get confused about:

In a purely accounting sense it's investment that drives the business cycle, not consumption. But here we are talking about models where Y=C. Now, I understand the Old Keynesian rationale for looking at consumption - the multiplier. So you can reconcile small changes in consumption with large swings in investment via 1/(1-MPC) and all that (adding in some animal spirits), and hence large swings in output. But. The NK models, one way or another, have the PIH and Ricardian Equivalence in 'em. So that multiplier has to be small. And that leaves little room for C.

Or slice it another way. If you ran up to any famous economist who's walking down the street and slapped them upside the head and said "Hey! Tell me! How elastic is consumption with respect to the interest rate?" (I do this all the time, mostly to homeless people on my block, before I transfer a dollar or two of my purchasing power to 'em), their off-the-cuff reaction would be "uh... it's inelastic, leave me alone you crazy person!". And then you say "so how come the Fed cuts interest rates to fight recessions?" I can attest with 100% reliability that even that guy who sleeps under the bench outside my neighbors' doorstep thinks it has to do with firms' investment rather than consumption.

So why are we talking about Y=C models? Maybe it's the I(r) function which is the key. You know, the these days homeless accelerator model?

Thanks Nick. I enjoyed reading the back and forth in the comments on this one.

I offer the follow-up:

http://research.stlouisfed.org/wp/2008/2008-013.pdf

notsneaky: one response to your second comment:

Yes, in one sense, the multiplier in NK models is very small, because of the PIH. But in another sense it's very large -- infinite in fact.

Start in one of my ugly equilibria where C(t) = 0.5Y* for all t. Hold r(t) = r* for all t. Now give each agent $1. The individual agent examines his transversality condition, and decides to consume C(t) = 0.5Y* + r·$1/P(t) each period. And r·$1/P is a very small number. But then he realises that every other agent got $1 too, and will be planning to do the same. So he revises up his own permanent income by r·$1/P. Next he revises up his consumption by the same r·$1/P. Then he realises again that every other agent will do the same... So C(t) becomes infinite, and the economy explodes into hyperinflation, until $1 becomes zero in real terms.
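
A toy numerical version of that iteration (a sketch in Python; the numbers r = 5% and P = 1 are made up, and each 'round' is one of the successive revisions described above):

# Each round, agents consume the annuity value of the latest revision to
# their permanent income, then realize everyone else did the same, which
# raises their income and triggers the next revision. With MPC = 1 out of
# permanent income, the revisions never shrink and the sum diverges.

r = 0.05               # real interest rate (illustrative)
P = 1.0                # price level, normalized
annuity = r * 1.0 / P  # first-round consumption increase: r * $1 / P

for mpc in (0.6, 1.0):     # MPC out of permanent income
    extra_c = 0.0          # cumulative revision to consumption
    step = annuity
    for _ in range(50):
        extra_c += step
        step = mpc * step  # next round's revision
    print(f"MPC = {mpc}: extra consumption after 50 rounds = {extra_c:.4f}")

# MPC = 0.6 converges toward annuity/(1 - 0.6) = 0.125; MPC = 1.0 grows
# without bound (linearly here), which is the 'infinite multiplier'.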

Jon: thanks. That's an interesting paper.

I can offer a possible resolution, at least for the part about the boom/bust indeterminacy at the ZLB; it does involve a different model -- one that is sort of an analytic continuation between IS-LM and the quantity theory.

http://informationtransfereconomics.blogspot.com/2013/10/resolving-neo-wicksellian.html

The difference between the two cases is whether you're currently in an economy that is well described by the QTM (in which case interest rates drop and you get the boom) vs. an IS-LM model (in which case interest rates drop and you get the bust).

