
Comments


Is it possible that you are using a simpler model than NKers actually use, and that NKers using extended models would be giving policy advice for different reasons? I ask (and I feel I am chasing you across blogs, sorry about that) because Robert Waldmann writes:

"New Keynesians have no presumption as to the sign of the effect of real interest rates on the level of consumption. This is because they have some respect for data and there is essentially no evidence that real interest rates affect consumption."

and also:

"fiscal stimulus via (temporarily) higher spending works in the same way [Old Keynesian] — by increasing aggregate demand directly and not just by reducing real interest rates."

which suggests to me he's thinking of an NK model in which the relevant parameters are tuned down, so that the mechanism (which I *think* is the one that you are writing about) whereby the level of consumption (= output in the simple model) is moved around by r(t) has very little power. Meanwhile "higher spending works ... by increasing aggregate demand directly" also suggests the model contains something yours does not, because that doesn't sound consistent with what I think you're claiming (that what matters is what happens to Gdot, and whether that's achieved by increasing G(t) today and holding G(t) tomorrow, or holding G(t) today and reducing G(t) tomorrow, does not matter). If Robert is right that there is a direct stimulative effect on AD of raising G(t) today in NK models, then the model he's thinking of must be more than an Euler equation responding to the path of r(t), meaning that economic stimulus is not just a negative function of Gdot(t) and independent of G(t). At least, that's my guess.

"In Old Keynesian models, with an Old Keynesian IS curve, r*(t) is a positive function of G(t) and is independent of Gdot(t)."

In the OK models I know (most notably the original of the species) you really shouldn't say this. G(t+1), being a component of the vector x(t+1) which represents The State of Long Term Expectation, is exogenous. If that's taken as given, you simply can't change G(t) without also changing Gdot(t). So to describe a variable as being a function of G(t) and independent of Gdot(t) would be simply nonsensical.

It seems to me that Waldmann is describing what NK economists believe about real economies. While Nick and Cochrane are talking about the formal models used by NK economists. The two are not the same.

Saying "I don't really believe consumption is a function of the interest rate" is not a defense of using a model in which consumption is a function of the interest rate.

I think I'm still missing something here. Let's simplify a bit. Suppose we follow the standard Krugman way of doing discrete time NK comparative statics, and say x(2) is the long run steady state value for all state and control variables, including G(2), and that all x(t) = x(2) when t>2. Obviously we can solve for optimal G(2) from the full NK model with government spending in the representative agent's utility function. In this case *assuming that the economy reaches its steady state in period 2* the standard NK prescription at the ZLB is to raise government spending, so G(t+1) -G(t) falls, as it should and as you derived. This is basically what Krugman finds in his brief paper 'Optimal Fiscal Policy in a Liquidity Trap' on his blog.

So then the problem collapses to an earlier one which you identified, about whether NK models have a tendency to reach their steady state values. This is a sticky issue for the NK model. But most of them are comfortable with studying the dynamics when this long run equilibrium does exist because this seems to appropriately describe reality. See, e.g., this Brad DeLong post: http://delong.typepad.com/sdj/2013/10/you-dont-need-a-rigorous-microfoundationeer-to-know-which-way-the-well-to-know-much-of-anything-really.html.

And this, I think, resurrects the NK fiscal-stimulus-in-liquidity-trap proposition

Hi JW

No, I don't think so. He writes about the influence of interest rates on consumption being small in calibrated NK models because that's how you have to pick parameters to fit the data. So he's talking about the formal models used. His post is at Angry Bear.

Although on second thought it would be odd if a model of monetary policy gave no power to the real interest rate, so perhaps the mechanism Nick is talking about is simply relocated elsewhere than consumption, most likely investment. So the question is whether standard NK models contain any other mechanisms than the household Euler equation facing an interest rate path, that would mean NK modellers can advise raising the level of G for other reasons than its effect on Gdot. Perhaps simply allowing the government to run its own sector of production, rather than just having G as a transfer is enough? I am embarrassed not to know the answers to my own questions already.

Ah, I saw your post on John Cochrane's blog and told myself to go check your blog because it seemed that you may have something to add. I know that Krugman reads your blog since he has mentioned you several times, and I hope that at least somebody will respond. Because you seem to be right, and I feel that it is very important to go through this in more detail. I have nothing else to add, just keep up the good work and hang in there.

PS: It is awesome that Cochrane reopened this and he even linked to a paper where Dupor and Li discuss this here: http://johnhcochrane.blogspot.com/2013/10/dupor-and-li-on-missing-inflation-in.html. You may have had more impact on the macro discussion with your blog than you give it credit for. John said some weird stuff here and there, but we may yet see Cochrane pulling off a "Kocherlakota" move and he may start to consistently make sense.

JH: I'm with you.

But suppose a politician said: "To escape the ZLB, we need to start cutting G from now on, and only stop cutting G when the economy lifts off the ZLB."

Would any macroeconomist say that the politician is supported by NK macro? Would any student of NK macro recognise that as a policy option for escaping the ZLB? I don't think they would. Because the NK policy prescriptions are always framed in such a way that nobody sees that it would indeed follow from NK macro. It's all in the spin. That spin may well be unconscious, and I think it is, but it's still spin.

Kevin: it is true I am comparing simple OK models and simple NK models. But I do not remember any OK economists saying that it is Gdot, more than G, that matters for AD. And I especially don't remember any OK economist saying you need to *reduce* Gdot if you want to increase AD.

Your clear characterization of the NK and the OK models is awesome (as usual!). I have a couple of quick comments.

1. Krugman acknowledges that the NK microfounded approach is motivated by the desire to show that EVEN with the assumptions of REH, PIH, etc. fiscal policy can be effective. IMHO, that is an implicit concession that he doesn't actually believe in NK models. "Many people who do such models consider this a useful strategy, but remain open to the possibility that given real-world imperfections the classic story (I think by this he means OK) also has explanatory power — especially since empirical multipliers do seem to be more than 1." And DeLong's views are almost identical. Consequently, I think the reason they do not respond to your criticisms of NK is that they largely agree with those criticisms. They acknowledge that they support NK for 'strategic' reasons and would prefer to revert to OK or some modernized variant that's not standard NK.

2. One possible reason why NK would not recommend continuously decreasing G (even if it has the same impact on Gdot as a temporary increase in G) is that at the ZLB, the government can borrow cheaply, and to the extent that (a preapproved set of) projects can be moved forwards or backwards in time, it makes sense to lower Gdot by raising G(t) and lowering G(t+1).

Side note: in your text, I think you meant to write G(t+1)-G(t) rather than G(t)-G(t+1) in two separate places.

primed: Ga! Two math/typos now fixed! Thanks. (I really should not be allowed to do even simple math.)

primed: thanks!

1. Hmmm. You may have a point there. If they made the weaker claim that this policy is "not inconsistent with NK models" (or something like that) that would maybe get around it. But I can't help thinking that a policy of steadily reducing G while at the ZLB is even more consistent with NK models, especially when we recognise that G isn't really a jump variable. In practice, increasing G means increasing Gdot for a year or so, given the absence of shovel-ready projects (to use the cliché).

I've been thinking about whether these results would change if we made half the agents hand-to-mouthers. Clearly it would make a big difference for tax policy, which would now work. But I don't think it would have any effect on the policy for G, though I'm not 100% sure about that.

2. Yep. There are good micro reasons for increasing G when r is low. And what I previously called the Preponed government spending multiplier (it's 2), which is a mix of increasing G(t) and cutting G(t+1) has a lot to be said for it. (I was still thinking in discrete time when I wrote that old post).

Luis: "is it possible that you are using a simpler model that NKers actually use, and NKers using extended models would be giving policy advice for different reasons?"

I don't think that's the explanation. The NK model Paul Krugman uses (in Robert Waldmann's link) is equally simple.

If C really doesn't depend on r at all, and there is no investment in the model, it is not a NK model. It's the OK Keynesian Cross model. More sophisticated NK models also have an investment-Euler equation, and I think (not 100% sure) that would give the same results as I have here.

1) I don't believe it's valid to break income into permanent and temporary quite the way you do. To quote Simon Wren-Lewis

"To explain, these consumers look at the present value of their expected lifetime income, and the income of their descendents if they care about them (hence infinitely lived). This has two implications. First, temporary shocks to current income will have very little impact on NK consumption (it is a drop in the ocean of lifetime income). The marginal propensity to consume out of that temporary income (mpc) is near zero, so no multiplier on that account. Second, a tax cut today means tax increases tomorrow, leaving the present value of lifetime post-tax income unchanged, so NK consumers just save a tax cut (Ricardian Equivalence), whereas OK consumers spend most of it."

We have good reason to believe this model is wrong.

1) Consumers aren't infinitely long-lived and there are good reasons to discount future income. Besides, "permanent income" is subject to revision. It's bizarre to talk about something continually revised as being always expected to be permanent, and, in fact, permanent is often treated as synonymous with long-term.

2) There's good evidence that some portion of consumers consume out of current rather than permanent income, as in

http://scholar.harvard.edu/files/mankiw/files/permanent_income.pdf

For instance a medical student can't possibly consume based on permanent income. Credit constraints prohibit this.

This would rule out Ricardian equivalence.

Maybe I am missing something, and I have only read two NK models in my life, but what you say does not seem controversial.

Assume that the world ends in ten time periods, that potential Y is 20 in each time period and that, initially, that would be divided into 10 G and 10 C for each time period.

Now, say that we will cut G by one each time period, i.e. G(t)-G(t+1)=1.
Lifetime C increases from 100 to roughly 150, but you still only want people to consume 10 in the first time period and 20 in the last. The r(t) path that makes C(t) consistent with this, i.e. that makes people hold off on their consumption, must be way above the one that would make them consume the same amount each time period.

If the people who in the alternative state would be paid by G now make a smooth transition to the private sector, and potential Y is independent of G, this seems like a pretty natural outcome.

But I am sure that I misunderstand something.
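A minimal arithmetic sketch of that example, under one possible reading (G starting at 10 and cut by 1 each period; the exact timing of the cuts is not specified above):

    # Rough check of the example above: 10 periods, potential Y = 20 each period,
    # G cut by 1 per period starting from 10 (an assumed reading).
    G = [10 - t for t in range(10)]   # 10, 9, ..., 1
    C = [20 - g for g in G]           # 10, 11, ..., 19
    print(sum(C))                     # 145, versus 100 if G stayed at 10 throughout

Depending on exactly when the cuts start, lifetime C comes out around 145-155, roughly the 150 in the comment.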

I think primedprimate correctly characterizes Krugman's views. But I do not think this is a defense of Krugman. The choice by people like Krugman to adopt a mode of analysis they do not believe in for strategic reasons is a moral failure and an intellectual disaster.

"the problem collapses to an earlier one which you identified, about whether NK models have a tendency to reach their steady state values. This is a sticky issue for the NK model. But most of them are comfortable with studying the dynamics when this long run equilibrium does exist because this seems to appropriately describe reality."

But you can't DO that!

Suppose we're trying to understand how planes are able to fly, and I propose that it's due to "altitude inertia." Plane-like objects tend to remain at their current height, in my theory, so an airborne plane will remain airborne. Now you ask the obvious question: How does the plane become airborne in the first place? Would you be satisfied with my answer if I said, "This is a sticky issue for the altitude inertia model. But I'm comfortable with studying the dynamics given that the plane somehow becomes airborne, because planes do take off in reality"?

If there is some fact that is not predicted by your model but is important for the phenomenon your model is supposed to explain, that is a problem for your model. You don't get to say, "Well, we know it's true, so we'll ignore the model on that point."

"But you can't DO that!"

Really? I guess that the usual equations describing supply and demand (as well as pretty much any other micro model) are useless then, since they don't describe how prices and/or quantities would adjust to the new equilibrium.

PS: Also: "The choice by people like Krugman to adopt a mode of analysis they do not believe in for strategic reasons is a moral failure and an intellectual disaster."

Well, either you play by the rules or you exit the game (I chose the latter option). Maybe Krugman could get away with it, to some extent, but playing by your own rules usually is not an option.

Nick: "...I especially don't remember any OK economist saying you need to *reduce* Gdot if you want to increase AD."

Keynes did point out that if fears concerning the growth of government are depressing the animal spirits of investors, then it may be necessary to appease them with cutbacks. Of course his concern was with investment and there's no investment in the basic NK model.

Really, I think you're just muddling a bunch of different issues here:

1) Mathematically, if u is a function of x, which in turn is a function of z, it's just a matter of convenience whether you think of u as a function of z. Care is required but there's no issue of principle involved.

2) In terms of the history of thought, you know better than I do why investment, which had the leading role in OK models, disappeared completely from NK models. That doesn't mean these guys suddenly decided it doesn't matter.

3) The NK model works according to Lucasian rules. It's not really legitimate to say "let's change E[G(t+1)] and see how that affects C(t)." If we're doing it right, {G(t)} is a stochastic process. You've made this point previously yourself, but here you're ignoring it.

Honestly, does anyone think the government could curb a consumer boom by saying "listen up folks, we plan to massively increase G(t+k) for all k>0"? You imply that NK economists must believe that to be the case. I think not.

Nick,

I have been looking at Woodford's Simple Analytics of G Multiplier, and he starts with a neoclassical benchmark in which:

Y=f(H)

output is solely a function of labour.

Y=C+G

output is either private or government consumption. So I think this means G is a produced good, not a transfer.

He then shows how (I think static) equilibrium conditions, equating marginal utilities of consumption and labour with real wages can be expressed in terms of an efficiency wedge. This is the mark-up in case of monopolistic competition but can be interpreted more broadly and originate from various sources.

First he shows dY/dG is a function of utility elasticities with respect to Y, being positive but less than one in the flexible-price case. Then he writes: "A different result can be obtained, however, if the size of the efficiency wedge is endogenous. One of the most obvious sources of endogeneity is delay in the adjustment of wages or prices to changing market conditions" He says if prices and wages do not adjust in full proportion to changes in the marginal rate of substitution between leisure and consumption, you get a multiplier greater than 1. "... the degree to which the efficiency wedge changes depends on the degree to which aggregate demand differs from what it was expected to be when prices and wages were set... we must consider the effects of government purchases on aggregate demand."

I apologise if I am teaching you to suck eggs here, that's not my intention and I expect you know this stuff. And there may be a story about inter-temporal substitution and interest rates hidden in there, which I cannot see. But this looks to me like Waldmann's direct effect of G on AD and is thus potentially another reason why NK modelers might advocate an increase in the level of G that is not to do with the implications for Gdot.

Peter N: Sure, we can easily change the NK model by assuming some "hand-to-mouth" agents, who consume their disposable income each period. That means Ricardian Equivalence is false. So tax cuts now work. But how it affects the results for G is a different question.

nemi: I don't think you are misunderstanding anything.

JW: "But you can't DO that!"

Agreed. (But notice that I am doing it here too! Like the NK modellers, I am ducking that whole indeterminacy question in this post. I am just assuming that if r(t)=r*(t) for all t, then C(t)+G(t)=Y*(t) for all t, when all we really know is that we have satisfied a necessary condition for this. I faked it by only looking at Cdot+Gdot=Y*dot.)

nemi: there's more to it than that. See my previous post and the link where I say permanent income is indeterminate. Using your supply and demand analogy, the NK model has a well-defined "supply" curve (from the Calvo Phillips Curve), but the "demand" curve, even if you put r on the vertical axis, is not well-defined. There's a horizontal IS, and a horizontal LM.

Kevin:

1. Suppose policymakers cannot make G(t) jump. (I'm making that assumption just for illustration, though it seems realistic.) They can only control Gdot(t), which must be finite. You suddenly find yourself at the ZLB. What do you do? OKs say raise Gdot. NKs say cut Gdot.

2. I think they initially ignored I for simplicity. And when they put it back in, it was in an investment-Euler equation that looked rather like the consumption-Euler equation. So I don't think it will affect what I say here (but I'm not 100% sure).

3. I could re-write what I say here in terms of Lucasian rules. Let n be the rate of time-preference proper, and suppose there are shocks to n. Should we adopt a rule where G(t) is a negative function of n(t), or Gdot(t) is a positive function of n(t)? OKs say the first, NKs say the second.

"Honestly, does anyone think the government could curb a consumer boom by saying "listen up folks, we plan to massively increase G(t+k) for all k>0"? You imply that NK economists must believe that to be the case. I think not."

That is exactly what the NK model does say. Maybe nobody believes the NK model. Fine. Stop using it to defend the policies you do think will work. No more cherry-picking.

Luis: with flexible wages and prices we can get supply-side effects of G on Y. For example, if G is useless, an increase in G will increase Y through an income effect on labour supply (people work more hours in total when they are taxed to work 2 hours per day to build the pyramids). That's the first bit of what Woodford is saying. That's not on the agenda here. Then he says that *if* G affects AD, and prices or wages are sticky, there will also be an AD effect. That's what we are talking about here. But we are looking at the relationship between AD and G and Gdot. Is it G or Gdot that affects AD in a NK model? And is it positive or negative? The answer is Gdot, and negative.

Nick,

I am lost, and will have to read his paper again; right now it still looks to me like Woodford is saying G affects AD. Anyway, thanks for responding.

Me: Honestly, does anyone think the government could curb a consumer boom by saying "listen up folks, we plan to massively increase G(t+k) for all k>0"? You imply that NK economists must believe that to be the case. I think not.

Nick: That is exactly what the NK model does say.

If you want to show that, you've got a bit of work to do. My intuition FWIW is that a policy rule which involves changing the growth-rate of government purchases in response to exogenous shocks would have the effect of increasing the volatility of consumption. Now I may be wrong, but I surely don't think it's fair of you to say "they can do the math much better than I can" when the math in question involves writing a policy rule for {G(t)} and deriving the variance of {C(t)}. That stuff's hard, even for guys like Simon Wren-Lewis. Benassy has done that sort of thing for OLG models, but AFAIK he hasn't tackled the NK model from that point of view.

It's not enough to just look at the Euler equation and say that lowering G(t+1) is the same as raising G(t). It's an infinite sequence and we're talking about an immortal household which plans for all eternity.

"Maybe nobody believes the NK model" you say? I should hope not! But it's not about what we believe, it's about how we extract the fangs of the Lucas Critique.

Luis: he will be saying that. And he's wrong. If he looked at his own model carefully, he would see that it is Gdot, not G, that affects AD. And that Gdot affects AD negatively. (And if that's the same paper I think I remember, he makes a real dog's breakfast of it. Because he's looking at the effects of a permanent increase in G, and misses the whole indeterminacy issue. He's got a horizontal IS curve, and a horizontal LM curve, and he thinks a permanent increase in G shifts the horizontal IS curve right, which it does, but which is stupid, and then he goes off on a bizarre tangent in a footnote because the long run Phillips Curve slopes up due to a glitch in the Calvo Phillips Curve. That paper is a nightmare.)

Kevin: remember in the olden days when we used the Keynesian Cross model to solve for G that kept Y at full employment Y*? We assumed Y=Y*, then solved the model for G. Let's do the same for the NK model, because it's easier (and lets us duck the whole indeterminacy issue).

Assume C(t)+G(t)=Y*(t) for all t.

Take the derivative wrt t, to get:

Cdot(t)+Gdot(t)=Y*dot(t)

Cdot(t) is an increasing function of [r(t)-n(t)], by the consumption-Euler equation, where n(t) is the rate of time preference, and r(t) is the real rate of interest.

Solve for Gdot(t). It will be a negative function of [r(t)-n(t)]. (It will also be a positive function of Y*dot(t).)

That means if the central bank sets r(t) too high (maybe because of the ZLB) in some periods, Gdot(t) will need to be lower in those same periods.

I think I've got the math right.
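For anyone who wants to check that solve symbolically, here is a minimal sketch (assuming sympy is available; a > 0 is the slope of the consumption-Euler equation, as above):

    # Minimal symbolic check of the solve above.
    import sympy as sp

    a, r, n, Ystardot, Gdot = sp.symbols('a r n Ystardot Gdot')
    Cdot = a * (r - n)                  # consumption-Euler equation
    eq = sp.Eq(Cdot + Gdot, Ystardot)   # Cdot(t) + Gdot(t) = Y*dot(t)
    print(sp.solve(eq, Gdot)[0])        # an expression equal to Ystardot - a*(r - n)

So Gdot(t) comes out decreasing in [r(t)-n(t)] and increasing in Y*dot(t), as claimed.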

Nick,

quite a claim ... get thyself a mathed-up coauthor and write that paper!

Luis: but math is the problem! I can't do math. I got a D in A-level math. I barely passed the math course for MA economics, and I have forgotten most of what I did learn there. I do not know what "real analysis" even means. So why is it some guy like me who has to figure this stuff out? Because all the smart kids are blinded by the math!

I don't need to write a paper. These last two blog posts say what needs to be said. (I might do a third, saying it in pictures.)

Nick,

I was suggesting outsourcing the maths to a co-author. I guess your career concerns may differ, but some young clever econ-maths whiz would surely love to get a "Woodford doesn't understand his own models" paper published.

P.S. Ha! I got a C in A-level maths. I feel tremendously superior.

Nick:"there's more to it than that ... the NK model has a well-defined "supply" curve, but the "demand" curve, is not well-defined."

I understand that you claim that the equilibrium is undefined, but it does provide the dynamics which would take us to the assumed equilibrium.
With standard supply and demand curves, the equilibrium is defined, but they do not provide the dynamics which would take us there.
My point is that every economic model that I have ever seen simply assumes 99% of the outcome/process/whatever that it is supposed to show. That is the standard, which you can complain about (and which I have complained a lot about) as a general issue, but it seems unfair to bring it up as a point against a particular model.

Kevin: "Honestly, does anyone think the government could curb a consumer boom by saying "listen up folks, we plan to massively increase G(t+k) for all k>0"?""

Obviously it's not realistic in our world of agents who are irrational, short-sighted, mortal, liquidity-constrained, rightly distrustful of government commitments, and so on. But in a world with infinitely-lived, non-liquidity-constrained, perfectly rational agents and perfectly credible government commitment (and where the government's long-run borrowing rate is at least equal to the growth rate), I think it would work. This is basically a tax increase. The government may not be collecting the taxes now, but rational, infinitely-lived, non-liquidity-constrained agents will respond immediately to the recognition that their permanent disposable income is being reduced. (I'm assuming that C and G are not substitutes in the utility function, and of course I'm making the usual assumption that we are near full employment in the long run.) If the infinitely summed present value of G is going up, then some tax is going to have to go up, too, eventually. I don't think you need to bring in r* to make this point.

Now that I think about it, I don't think Nick's point here is as profound as it seems at first. Could you raise AD today by cutting future government spending? In the idealized world, of course. Cutting future government spending is just like a tax cut if C and G are not substitutes and the economy tends toward full-employment. You can state the case in terms of r*, but I don't think we need Wicksell or Woodford for this: Friedman and Barro have already made the case in Old Keynesian terms. Of course New Keynesian models have the same implication. Surely Woodford realizes this?

To say that Gdot is negative is to say that current G exceeds permanent T. Take PIH and Ricardian equivalence and stuff them into an OK model. Done.

(more or less repeating what I said in the last comment, I guess, but...)

I don't think the phenomenon Nick discusses here is a difference between OK and NK models. Rather it is a difference between models with Ricardian equivalence (along with a binding government budget constraint) and those without. Many NK models break Ricardian equivalence (most typically by adding liquidity constraints), and in that case we're back to dC/dG>0 regardless of Gdot. And if you add Ricardian equivalence to an OK model, then we get dC/dGdot<0 regardless of G.

That derivative gets slipped and that sign reversed a lot in presentations of the NK model. Same thing goes on with consumption.

Andy, I largely agree with you, especially about the fact that it's mostly Ricardian equivalence (with PIH or similar) which is generating the effects which Nick Rowe and John Cochrane see as NK departures from the OK model. It's not about RatEx or monopolistic competition, which are real differences between NK and OK.

But aside from that, I question whether Nick's proposed fiscal rule would work in quite the way he thinks even in the NK world of immortal agents. I need to look again at Gali's model in order to be sure. Nick's "solve for Gdot" approach doesn't look conclusive to me. It doesn't answer the question: will the resulting rule stabilise Y better than the more obvious Keynesian rule, i.e. will the variance of Y be lower? If not, then it's a stretch to suppose the government can commit to it.

Where this conclusion may or may not have been more relevant is in some former Warsaw Pact economies, post 1990. Some had unemployment rates at <3% and too much government inefficiency, mostly as a result of corruption. Here it is almost the opposite, despite having the most efficient banking sector throughout the crisis. Our private finance and oil corporations have become corrupt, simply due to a lack of economic sector diversification. If you are playing the macro game, and suggesting cutting public spending, you need to get an ROI of marginal public and private spending/savings. I suggest the lack of ethics and philosophy, the lack of critical thinking of petro execs, makes giving any more money to them a bad thing. I suggest that since AB is cutting their U of Cgy course offering and enrollment, despite the windfalls and lack of carbon tax, the entire economic policy of this nation should be directed towards hitting them over their heads with books of liberal arts, chemistry and heat physics texts.
I like the idea of permanent income when considering measuring the quality-of-life of this species. We won't be so dynamic at some point, and we will settle in for the longer haul. The post-IR quality-of-life resembles temporary income shocks so far. Our engineering aptitude can't be tooooo high in the future, so quality-of-life is approaching a limit rather than permanently increasing.

Jordi Gali, about as NK as it gets, has an interesting paper on how Government Purchases impact Consumption: http://www.ecb.europa.eu/pub/pdf/scpwps/ecbwp339.pdf

nemi: Compare the OK ISLM with the NK model. The OK model has a well-defined LR equilibrium and a well-defined story of how you get (or do not get) there. The NK model has the latter but not the former.

Andy and Kevin: agreed, it's not monopolistic competition. That's fine, and you can marry monopolistic comp with an OK, NK, or Monetarist story of AD equally well. Nor is it really REH.

The problem is when we mix two things: PIH, plus the CB setting r. That's a toxic mix for determinacy. And PIH is what's giving us the weird fiscal results. The Consumption-Euler IS is just PIH in math.

notsneaky: so I'm right, but not original? And some NKs have said you need to cut Gdot to increase AD and get off the ZLB?

"Assume C(t)+G(t)=Y*(t) for all t.

Take the derivative wrt to t, to get:

Cdot(t)+Gdot(t)=Y*dot(t)"

Why would the policy goal be to increase r(t)? I think the policy goal would be to increase Y which would call for an increase in G. Why increase r(t) - just because of the ZLB? Why target r(t) when what the policy maker really cares about is Y?

"I have never heard a single macroeconomist recommend reducing Gdot(t), and saying that the New Keynesian model supports this policy recommendation."

This seems to indicate that macroeconomics is overly politicized. But then again, so does the history of macro thought, which seems to be little more than schools of thought using models to argue over whether there should be more or less government intervention in the economy.

"And at least try to present the policy recommendations of our models without spin."

I was not aware that there were any macroeconomic models that have achieved forecasting accuracy reliably enough to warrant making any policy recommendations. If I were considering launching a rocket, and the predictive reliability of physics models matched that of economic models, I would cancel the project and also tell engineers to stop trying to build bridges and skyscrapers. And I would do this regardless of how much complicated math the physicists were using, as that proves nothing.

That economists still try and make policy recommendations anyway seems to indicate that macroeconomics is overly politicized. This happens on both sides of the aisle. RBC models were also highly unrealistic in assuming voluntary unemployment and relying upon negative technology shocks. I don't think that stopped them from making policy recommendations either, and for some reason, they didn't make recommendations like public awareness campaigns to reduce the public's overly high preference for leisure.

Brendon: from a very quick skim of the "non-technical intro" to that Gali paper (because I would hate to read through the math), here's what I *think* is going on:

His "rule of thumb" agents are what I call "hand-to-mouth" (HTM) agents. They consume their current disposable income in all periods.

Let T(t) be the level of taxes. If G(t)=T(t) for all t, introducing some HTM agents into a standard NK model makes no difference to the results I have discussed here. But if we cut T holding G constant, then (obviously) HTM agents will increase C, while standard NK agents will not (due to Ricardian Equivalence). A tax cut will increase r*.

Ricardian agents create an argument for cutting Gdot to raise r*. HTM agents create an argument for cutting T to raise r*. If we have both agents in the model, we have an argument for doing both. We still don't have an argument for raising G.

In other words, Gali's model would provide the intellectual support for a politician who said: "To escape the ZLB, we need to cut the level of taxes immediately, and reduce the growth rate of government spending too."

If he says anything else, he is spinning the results of his model.

Kathleen: what we want to do is keep Y(t)=Y*(t) for all t. If the central bank can set r(t)=r*(t) for all t, it can achieve that result (by definition of r*, and ignoring the indeterminacy problem). But if the ZLB means the central bank cannot lower r(t) enough, so r(t) > r*(t), we need to use fiscal policy to raise r*(t) to prevent Y falling below Y*.

This is an equivalent way of thinking about how to raise Y. It just keeps the math simpler.
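To make that concrete, here is a small numerical sketch with made-up numbers (a = 1, Y*dot = 0, and a shock that pushes the rate of time preference n down to -1%; none of these values come from the post):

    # Toy illustration of using Gdot to raise r* at the ZLB (numbers are invented).
    a, Ystardot, n = 1.0, 0.0, -0.01

    # With Gdot = 0, the Ricardian case gives r* = n + (Y*dot - Gdot)/a = -1%,
    # which the central bank cannot deliver if the ZLB keeps r >= 0.
    rstar_before = n + (Ystardot - 0.0) / a   # -0.01

    # Choose Gdot so that r* = 0:  0 = n + (Y*dot - Gdot)/a  =>  Gdot = Y*dot + a*n
    Gdot_needed = Ystardot + a * n            # -0.01
    print(rstar_before, Gdot_needed)

So to lift r* up to the ZLB floor, the growth rate of G has to be cut (here to -0.01), which is the NK prescription in the post.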

PG: "I was not aware that there were any macroeconomic models that have achieved forecasting accuracy reliably enough to warrant making any policy recommendations."

The Bank of Canada has been able to keep inflation very close to the 2% target for the last 20 years. That proves that the inflation forecasting macroeconomic model the Bank used to guide its decisions was reliable and reasonably accurate. Or else it just got lucky.

Nick: "If [Gali] says anything else, he is spinning the results of his model."

Another possibility is that he actually did work through all that stuff using the NKPC and DIS equations, together with the policy rule, checked that the eigenvalues were such as to ensure the existence of a stationary solution and found an answer which simply does not accord with your intuition.

Irineu de Carvalho Filho left a comment on John Cochrane's blog which I'm thinking he could reproduce here with very little change:

"That was an amusing post.

"However, you are wrong. First, new Keynesians acknowledge that many consumers are credit constrained so any model of the reality should have Old Keynesian effects. Second, the Euler equation does NOT determine consumption by itself, but in combination with the inter temporal budget constraint. Therefore, income (and unemployment, transfer policies, asset holdings etc) have an effect on consumption. There are many reasons why Old Keynesian effects should exist in a New Keynesian world."

In my mind, if simplifying the math makes the intuition disappear, then I say the math is not worth simplifying - and there is something wrong. If policy directed at r(t) is ineffective because of the ZLB then give up on manipulating r(t). If the policy maker wants to raise Y(t) up to Y*(t) then target Y(t) directly and raise G(t).

Nick, OK models have a story about the adjustment, but it is not in the formal model.

Has the Bank of Canada used the same model to forecast inflation over the last 20 years? I don't believe so.

Kevin: I have checked my math. My intuition was half-off. The result with half the agents being hand-to-mouth is even more different from Old Keynesian results than I thought.

Ricardian agents have Cdot(t)=a(r(t)-n(t)), where a > 0

Hand-to-mouth agents consume all their disposable income every period, so have C(t)=Y(t)-T(t), so Cdot=Ydot-Tdot.

Assume half the agents are Ricardian and the other half hand-to-mouth. Then the equation that determines the natural rate r*(t) is:

0.5a(r*(t)-n(t)) + 0.5(Y*dot(t)-Tdot(t)) + Gdot(t) = Y*dot(t)

So r*(t) = n(t) - (2/a)Gdot(t) + (1/a)Tdot(t) + (1/a)Y*dot(t) (I think?)

Which means: if you want to raise r*(t) to escape the ZLB at time t, you want to cut Gdot(t) and/or INCREASE Tdot(t) !!!

Intuition: you need to have growing consumption for the Ricardian agents, which means you need falling consumption for the hand-to-mouth agents.

That result is very very weird. It is very different from ISLM, both for G and for T.
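For anyone who wants to verify that algebra, a minimal symbolic sketch (again assuming sympy is available):

    # Symbolic check of the half-Ricardian / half-hand-to-mouth solve above.
    import sympy as sp

    a, rstar, n, Ystardot, Tdot, Gdot = sp.symbols('a rstar n Ystardot Tdot Gdot')
    half = sp.Rational(1, 2)
    # 0.5*a*(r* - n) + 0.5*(Y*dot - Tdot) + Gdot = Y*dot
    eq = sp.Eq(half * a * (rstar - n) + half * (Ystardot - Tdot) + Gdot, Ystardot)
    print(sp.expand(sp.solve(eq, rstar)[0]))
    # equals n - (2/a)*Gdot + (1/a)*Tdot + (1/a)*Ystardot

So raising r*(t) does call for cutting Gdot(t) and/or raising Tdot(t), as above.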

Ricardian agents in innovative or manufacturing industries will consume. Here is a chance to quantify human capital. When rich people represent a background that encompasses reason, R+D, liberal arts, history, science, a broad array of engineering...they will consume more. Consumption for rich people is imagination. Karl Rove was right when claiming Democrats are linked with the attainment of Ph.Ds.
When bankers/finance don't believe in stakeholder economic theory, when rich people are able to get wealthy despite little education and landlocked worldly experiences....then you have the two main market failures in North America post-WWII.
My solution here is to build Ricardian agents that do spend at the ZLB. Electronics companies, aerospace, materials science, space, wind turbine makers, medical equipment makers, etc....and yes, going against neo-Keynesian theory and spending on a 10x CSA at Garneau's control, 10x more funding for Edmonton's NNI, 10x more funding for Saskatoon's synchrotron...

I just tried to get a cellphone to be on call for snowshovelling and needed to have been dumb enough to qualify for past debt. The macro says clamp down on the banks and their borrowing, but I guess NK misses it will be you guys who may be looking after my welfare. I'm done looking for work this yr.

more easily: In Canada our rich people, finance and petro, aren't smart enough to invent their own consumables. Things they would want, like their own stem cell banks or robot nurses/escorts or hologram TV or whatever, they are saving cash until someone else invents them. It isn't working class people and welfare bums eating all the milk and bread in the stores that prevents rich people from buying stem cell banks. With a Gini this high, our rich people have cut themselves so much pie they don't know what to do with it. It has been cash accumulating ever since we put the PM who once worked for the tar sands in charge. I'll be buying pot and proline tickets with my welfare. Enjoy the cash in your bank accounts, rich people. Much of the development of real goods requires more public spending, regardless of what the Ukraine-oriented NeoKeynesian model says.

Nick, a particular line of yours has been rattling around in my brain for two years now, always demanding attention but never getting it, at least not sufficiently.

"The economy wants a Ponzi scheme."

Krugman, channeling Summers, got the bones rattling quite loudly today:

http://krugman.blogs.nytimes.com/2013/11/16/secular-stagnation-coalmines-bubbles-and-larry-summers/

"The Bank of Canada has been able to keep inflation very close to the 2% target for the last 20 years. That proves that the inflation forecasting macroeconomic model the Bank used to guide its decisions was reliable and reasonably accurate. Or else it just got lucky."

Didn't inflation hit 4.42% in Canada in 11/2002? I suppose missing a target by over 100% can still count as very close when 2% is such a small number.

More importantly, I'm not sure one central bank staying reasonably close to an inflation target for 20 years is as good a judge of forecast accuracy as the economy actually reliably hitting within a certain window of a model's published output, unemployment, and inflation forecasts.

Steve: I have modified my views slightly. Here is my new post on that topic. It's closely related to Paul Krugman's.

Nick:

I am now a new Keynesian.

So, the model shows that we need to cut taxes now, and then reduce the growth rate of government spending (to gradually bring the budget to balance.)

I love that fiscal policy result!

Surely that is the reason to choose between models of the economy. :)

Bill: But remember, the model says you should be increasing Gdot when you are off the ZLB, otherwise G will go to either 0% or 100% of GDP.

And see my Sept 16 03.33pm comment. The model (with half the agents spending all their current disposable income) says we need to *increase* the *growth rate* of taxes, and cut the growth rate of government spending!

Why doesn't the math say that under the ZLB condition, when r(t) cannot be influenced by normal policy, r(t) is in effect a constant? Then r(t) disappears from the derivatives wrt time. The policy implications are that Gdot should increase under the ZLB condition if an increase in Y to Y* is desired. I end up with the same policy view as ever: fiscal policy is not effective, in the context of flexible exchange rates, unless the ZLB condition is in effect.

Kathleen: "The policy implications are that Gdot should increase under the ZLB condition if an increase in Y to Y* is desired."

The Old Keynesian ISLM model says that G(t) should increase under the ZLB condition if an increase in Y to Y* is desired.

The New Keynesian model says that Gdot should decrease under the ZLB condition if an increase in Y to Y* is desired.

In both cases Y < Y* because r > r*, and we cannot cut r because of the ZLB so we need to increase r* instead. But they tell us different ways to increase r*.

Are there competing New Keynesian models? I ask having completed 4th year Macro using David Romer's Advanced Macroeconomics text. In Romer's discussion of New Keynesian models he states: "... the most glaring omissions from the model are investment and government purchases" (pg 315). There is no discussion of fiscal policy as far as I can tell. When I read Romer, the models make intuitive as well as mathematical sense, but there is only discussion of the Central Bank (monetary policy) and no discussion of the ZLB. I completed this 4th year course only 2 (3?) years ago, so I suppose I have a modern version of an economics undergrad education.

Kathleen: I think someone else could give better answers than me. I remember seeing investment put in, but can't remember where. I think they usually omitted fiscal policy and the ZLB until the recent recession.

Do you, or anybody, seriously believe that any consumers (or any number of consumers large enough to make a macro impact) behave in a way causing a positive economic impact from a negative dG/dt? What in the hell are you smoking?

ML: Do you, or anybody, seriously believe that any rhetorical question like that would make a positive contribution to the discussion? Were you too drunk to read the post?

I think Mayson Lancaster's comment is merely a restatement of the point Andy Harless was sort of alluding to earlier when he said: "Obviously it's not realistic in our world of agents who are irrational, short-sighted, mortal, liquidity-constrained, rightly distrustful of government commitments, and so on. But in a world with infinitely-lived, non-liquidity-constrained, perfectly rational agents..."

Models necessarily make simplifying assumptions (or, as Box says, "essentially, all models are wrong, but some are useful"). Whether the assumptions are good or not depends on whether they are important with regard to what you are looking at. If a particular implication would not hold if these assumptions were relaxed, then those are bad assumptions, which means this is not a useful model for answering that particular question. That's how you differentiate the model's bugs from its features, right?

Does the implication of cutting Gdot to increase AD hold if you relax the assumption of Ricardian equivalence or lack of credit constraints?

pG: "Does the implication of cutting Gdot to increase AD hold if you relax the assumption of Ricardian equivalence or lack of credit constraints?"

See my most recent post. I have relaxed the assumption of Ricardian Equivalence. And my Old Keynesian agents' consumption function might be motivated by something like credit constraints (very roughly). It seems it does hold. Plus you get the implication of *raising* Tdot!!

Mayson Lancaster *might* have been restating Andy's point, but if so he did it in a very unhelpful way, and out of line with the culture on this blog. (Your restatement, whether or not it was what he was saying, was much more worthwhile.)

But it is still a weird set of results. I wonder if the indeterminacy problem might not be at the root of it.

"By reducing the growth of government spending, and reducing the growth of consumption by the Old Keynesian agents (by growing taxes), the permanent income of the New Keynesian agents is now higher than their current income, and this offsets their reduced rate of time preference, and prevents them wanting to save part of their current income."

Changing the model so that only some of the agents have Ricardian equivalence and others only have an MPC is not exactly what I would call relaxing the assumption, but merely combining it with another assumption that is outdated because it is less realistic. Imagine a frictionless physics model where they change it so that parts of the universe are frictionless and other parts have 100% friction. Is that really relaxing the assumption?

But I honestly thought you hadn't realized the result was weird, and imagine Lancaster might have thought similarly (I agree his comment was stated in an unhelpful way).

pG: "Imagine a frictionless physics model where they change it so that parts of the universe are frictionless and other parts have 100% friction. Is that really relaxing the assumption?"

Good point. I will think about that one.

"But I honestly thought you hadn't realized the result was weird..."

What's weirder is that Paul Krugman's model (and all the other NK models of fiscal policy) has exactly the same weird result. And they just can't see it! Restating the model in continuous time doesn't really change the model at all; it just forces them to see it. It's all in the framing!

Well, if it does derive from Ricardian equivalence, that's the same assumption used in New Classical models which I would suspect have features that override this result. And note, this does argue in favor of my opinion that these models are probably serving an ideological purpose (recall my earlier point about RBC economists not trying to change preferences for leisure even though that is a weird result of their model). As such, I think the profession ought to implement a social science equivalent of medicine’s double-blind studies, employing a little division of labor.

I've stated this idea elsewhere but am curious what you would think. Have one person create the model. Have a second person actually run the model and gather the results, where the second person never directly interacts with the first person, is not told what the variables in the model actually represent in the real-world, and is also not told what results are expected.

Note, if the model or data is such that it would be obvious for that second person to identify the variables, a third person could perhaps do a transformation (would that work?). The second person can also serve as a check on whether the first person still publishes results that they didn't like, as well as noticing "weird results" like this. It would be better if the two people were always from different schools of thought, but that would make the matching problem a bit harder and could also be gamed. As such, I think an automated blind random matching system would probably work better.

I am not an academic economist and have never created any models, so I’m not aware whether something like this is already in place or whether it would be unworkable. But it’s my impression that the norm right now is for that second person to be the same as the first person, or a TA or grad student of the first person.

pG: it certainly is embarrassing when we only like our models when they tell us what we want to hear. But I don't think it's just politics. I'm a small G guy myself, but advised the PM against cutting G and raising T when the recession hit (and other economists heard me say that). And I'm pleased to see that the government preponed some government investment projects during the recession (which is like increasing G(t) and cutting G(t+1), and fits with the NK model OK, if you can raise G(t) quickly).

But I still don't like these results.

But I don't think your suggestion would work. Normally we can see in advance where our models are leading when we build them. This is one of those rare cases where something weird popped out, and nobody saw it coming.

Any ideas how to improve my idea? Knowing in advance where a model is leading would seem even more problematic from a cognitive bias standpoint, where one can build a model with an underlying goal of arguing in favor of a particular policy. Can model-building be broken up into modular pieces like a software project?

When I look at the history of macro thought, I see schools of thought arguing primarily over whether there should be more or less government intervention in markets. That would draw resources away from making models that model the economy and forecast better.

pG: "Can model-building be broken up into modular pieces like a software project?"

That is how it was sometimes done, for the big computer models in the olden days. But then the bits didn't always fit together into a logically coherent whole. There was no guarantee that there exists a possible world in which all those bits could be true at the same time.

This is how it's normally done. You have an intuitive idea, then build a model to try to formalise that idea. If you find your model contradicts your idea, you re-examine both your idea and the model. For example, before I built my hybrid model, I figured cutting Gdot and cutting T would work. Did the math, and found I was wrong about the cutting T bit. See my November 16 7.55 comment above, where I got it wrong. Building the model taught me something. In principle it might be possible to check against the data, but when the policy variables are being adjusted deliberately to try to smooth out the economy, you have the exact opposite of an ideal experimental design, where you toss a coin to decide on policy, and look to see whether the coin toss causes the economy to do horrible things.

Sorry for the delay in response (I have a two-year-old).

Hopefully you see what I am getting at, but let me back up. When I took what my school calls graduate macro (this is not a top-ranked school, so it doesn't teach modeling and is more akin to history of macro thought), I was deeply disappointed. A conservative Classical school arguing with a liberal Keynesian school arguing with a conservative Monetarist and then New Classical school arguing with a liberal New Keynesian school (plus a conservative Austrian School and liberal Institutionalist School etc. on the sidelines). This was nothing like the steady progression of knowledge I had seen in physics or chemistry, and you also don't see this degree of political polarization in other social sciences like sociology or political science.

What was most striking was the New Keynesians and the New Classicals resolving their methodological disagreement but then still continuing the same ideological argument over the usefulness and effectiveness of counter-cyclical government policy (the same ideological debate as between the Keynesians and the Classicals, really). In most scientific fields, one would have expected new disagreements to occur, creating different divisions of people into new schools.

It wasn't until I came across Barbara Bergmann's commentary that the above made any sense:

http://www.jstor.org/stable/25046110

She observed that there was a higher degree of government hiring and political appointment of economists compared to the other social scientists, which, of course, creates incentives upon research. She was referring to the U.S., but I believe it is probably also true in many other nations. While medicine does not seem to face the problem of political polarization, it still is concerned with the possibility of bias from medical researchers affecting results due to the placebo effect. Thus, they use double-blind studies.

What is the equivalent of this in economic research? I am not aware that there are any institutions or norms or standards in place to counter the politically polarizing incentives upon the field or the possibility of creating a model to back a particular policy argument. While it is admirable that you yourself do not do this, I think any economist can see that it is unrealistic to assume most individuals will ignore and defy incentives placed upon them (and there are numerous examples of those who have not).

Also, I am not sure how important theoretical consistency is when models include such a high degree of unrealistic assumptions. I would think that more closely approximating this world would be far more useful than creating a possible theoretical world that does not resemble ours very much.

pG: Yes, I see what you are getting at. I don't think your idea for curing the problem will work, but I don't have any better idea that would work either. Closely approximating the world would be nice, but without experiments to distinguish them, more than one theory can, more or less, fit the facts.

On Barbara Bergmann's commentary. Maybe. But then we see exactly the same divides within academic economists, who don't really have much of a financial incentive to grind any particular axe, except to grind some axe that will get them some publications. If you have a comparative advantage in working on a particular theory, simply because that's the theory you've been taught, you tend to stick with it. Unless something else trendy comes along, and you can see a paper you can publish, regardless of whether or not you believe in that trendy new approach. Freshly-minted PhDs have the most to lose from anything that makes their theory-specific human capital obsolete. Old guys like me don't have much to lose any more, though we still tend to argue for "our side".

I don't know about Canada, but academic economists can be and do get appointed to the Fed. Work by academic economists regularly gets cited in the media by politicians. So you would expect the divide to also exist within academia.

Economics is always going to be a field of numerous models. I think that's fine. The problem, I think, is that the usefulness of a model to an economist's career relies upon variables such as whether or not the model is useful to a politician. As I mentioned earlier, this would draw resources away from having models more closely approximate the world.

http://www.voxeu.org/article/failed-forecasts-and-financial-crisis-how-resurrect-economic-modelling

It is always possible to alter incentives. And with that, my son is awake...

