This morning I've been in email conversations with several economists who do not understand, or disagree with, something in the Bank of Canada's latest Monetary Policy Report (pdf). Specifically, Technical Box 2. These include some very good macroeconomists.
I think I do understand it. And I agree with the Bank of Canada. So I am going to give my interpretation of what the Bank is saying. I think the Bank is right, and it's an important point that we need to understand.
The Bank of Canada is saying that it expects output to return to normal, and inflation to return to target, before the policy rate returns to normal. And that this would not happen if the Bank followed a simple Taylor Rule, but will happen if the Bank gets policy right. A Taylor Rule is not always the best policy. It may not be the same as targeting the inflation forecast.
What may trouble some economists is that if output returns to potential, and inflation returns to target, shouldn't the Bank make the policy rate return to normal too? The Taylor Rule says it should.
Here's the diagram from the Bank's Technical Box 2. (Thanks Stephen!)
Let me give the intuition first.
The Bank of Canada tries to keep inflation at target. Since there are lags in the Bank's response to shocks, and lags in the economy's response to the Bank's response, the Bank won't be able to keep inflation exactly at target. But it does try to adjust policy to keep its expectation of future inflation, post those lags, at target. It is targeting its own forecast of future inflation.
And the way the Bank sees itself as doing this is by adjusting the policy rate of interest relative to some underlying natural (the Bank prefers the term "neutral") rate of interest. If the Bank sets the policy rate so that the actual real rate is above/below the natural rate, output will be below/above potential, and inflation will fall/rise relative to expected inflation and target inflation.
Suppose there is a shock to Aggregate Demand that lowers the natural rate of interest. And suppose it is a permanent shock, so that the natural rate is now permanently lower than where it was in the past. If the Bank observes this shock, soon after it happens, and knows that the shock is permanent, the Bank should permanently lower the policy rate of interest in response. There will be a temporary recession, and inflation will fall temporarily below target, because the Bank did not respond instantly, or because the economy did not respond instantly to the Bank's response. But after a year or two, if the Bank gets it right, those lags have passed, and output returns to potential, and inflation returns to target. But the policy rate of interest stays permanently lower than where it was in the past. There's a new normal for the policy rate. 3% is the new 5%.
And the big problem with a simple rule for monetary policy like the Taylor Rule is that it can't handle cases like that. The Taylor Rule, at its simplest, sets the current policy rate as a function of the current output gap and current inflation gap only. So, if it followed a simple Taylor Rule, the Bank in this example would only set a permanently lower policy rate if output were permanently below potential (which can't happen in standard models) or if inflation were permanently below target (which can happen in standard models). The simple Taylor Rule can't handle a new normal.
The above was an extreme example, where the drop in the natural rate was permanent. It was just for illustration, because it's simpler to tell the story in that case.
Suppose, more realistically, that the shock to the natural rate is temporary, but lasts a long time. Specifically, it lasts longer than the lags in the economy's response to the Bank plus the Bank's response to the shock. And suppose the Bank knows this. In that case, the Bank should cut the policy rate in response to the shock, and the economy will eventually respond, so output returns to potential, and inflation returns to target, after a temporary recession. But the policy rate should remain below normal for some time after output returns to potential and inflation returns to target. The policy rate should remain below normal until the natural rate returns to normal.
But if the Bank were following a simple Taylor Rule instead, this would not happen. The policy rate will only be below normal (the old normal) if output is below potential and/or inflation is below target. And unless the policy rate is below normal (the old normal) output will be below potential and/or inflation will be below target. So, with a simple Taylor Rule, output will be below potential and/or inflation will be below target for as long as the natural rate stays below normal (the old normal).
And that is precisely what the Bank's two pictures are showing.
And this is important because I think the Bank's assumptions are reasonable. The global economy has been hit with a serious shock to demand that has lowered the natural rate, and this shock to the natural rate will probably last for some time.
Here's a crude model:
Let P(t) be inflation (or the deviation of inflation from target), Y(t) the output gap, R(t) the policy rate of interest, and N(t) the natural rate of interest. (Both R and N are in real terms, to keep the equations simple. I have also ignored a lot of constant terms and set most parameters to 1 to keep the equations simple.)
IS Curve: Y(t) = N(t) - R(t-1)
This is fairly standard, except I have introduced a 1-period lag in the economy's response to the Bank's policy rate, so the Bank can't perfectly stabilise the economy.
Phillips Curve: P(t) = Y(t) + 0.5 E(t-1)P(t) + 0.5 E(t-2)P(t)
This is fairly standard, if you assume something like overlapping 2-period nominal wage contracts. E(t-1)P(t) and E(t-2)P(t) mean the expectations at time t-1 and t-2 of the inflation rate at time t.
Shocks to natural rate: N(t) = S(t) + S(t-1)
S(t) is a serially uncorrelated shock, so the natural rate follows an MA(1) Moving Average process.
[Update: Curses! I think this should have been an MA(2) process. Make it N(t) = S(t) + S(t-1) + S(t-2) instead.]
Run this model through two alternative policy rules:
2a. R(t) = Y(t) + P(t)
This is a Taylor-like Rule. Remember R(t) is the *real* policy rate, so this satisfies the Taylor/Howitt principle that the nominal policy rate responds more than one-for-one with inflation.
2b. R(t) = E(t)N(t+1) + bE(t)P(t+1)
This is the optimal monetary policy rule for keeping next period's inflation on target in this model. The Bank sets the (real) policy rate equal to its expectation of next period's natural rate. (The second term isn't really needed; it's just to rule out indeterminacy of the inflation rate and satisfy the Taylor/Howitt principle.)
Now suppose that S(t) < 0. All other S shocks are zero. Given the MA(2) structure of the natural rate (see the update above), this means the natural rate falls at time t and stays below normal for 3 periods in total, then returns to normal.
With policy rule 2a, the Taylor Rule, you get something roughly like the Bank's figure 2a. The economy has a 2-period recession, with inflation below target for 2 periods, and the (real) policy rate below normal for 2 periods.
With policy rule 2b, you get something roughly like the Bank's figure 2b. The economy has a 1-period recession, and inflation below target for 1 period, and the (real) policy rate below normal for 2 periods.
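For anyone who wants to check the rule 2b story numerically, here is a minimal simulation sketch in Python. It assumes the updated MA(2) process, a single shock S(0) = -1, and that rule 2b works as intended, so that expected inflation stays anchored at target and the Phillips Curve collapses to P(t) = Y(t). (Solving the Taylor Rule case 2a properly, with rational expectations, is the harder part, and the sketch leaves it alone.)

    # Sketch only: the crude model above under policy rule 2b, R(t) = E(t)N(t+1).
    T = 6

    def S(t):                        # serially uncorrelated shock: one-off at t = 0
        return -1.0 if t == 0 else 0.0

    R_prev = 0.0                     # R(-1): the old normal
    print(" t    N     R     Y     P")
    for t in range(T):
        N = S(t) + S(t-1) + S(t-2)   # MA(2) natural rate
        Y = N - R_prev               # IS Curve with a one-period lag
        P = Y                        # Phillips Curve, expected inflation at target
        R = S(t) + S(t-1)            # rule 2b: E(t)N(t+1), since E(t)S(t+1) = 0
        print(f"{t:2d} {N:5.1f} {R:5.1f} {Y:5.1f} {P:5.1f}")
        R_prev = R

The printout shows a 1-period recession and 1 period of below-target inflation, but a policy rate below normal for 2 periods and a natural rate below normal for 3: output and inflation are back to normal while the policy rate is still low, which is the Bank's figure 2b.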
I think I've got that right. Not 100% sure. Never could do math. Somebody else can solve the model and check for me. It's not my comparative advantage.
Nick, this is just yet another reason why we all should stop judging monetary policy "tightness" based on a Taylor rule. Why not instead focus on monetary disequilibrium? That we of course can only observe indirectly through different asset prices and the financial markets, but the focus on Taylor rules in my view is extremely damaging and is likely to lead central banks from one policy mistake to another. In fact I really don't believe that there ever was such a thing as Taylor rules in the "real world". Central banks thought they were following a Taylor rule, but in fact the markets were always ahead of the central banks.
So to me the BoC seems a bit desperate in holding on to the old framework rather than acknowledging that the Taylor rule is dead...it might not have abandoned central bankers, but central bankers should abandon it. In fact money supply growth adjusted for swings in velocity might do a better job - even though it is not necessarily my preferred monetary instrument, but I have a hard time seeing that the Taylor rule is any better...
Posted by: Lars Christensen | July 21, 2011 at 11:32 AM
Lars: I lean towards agreeing with you. But I put aside my own perspective in writing this post, and looked at the world through the Bank of Canada's New Keynesian/Neo-Wicksellian eyes, in order to better understand and explain what they are saying.
Posted by: Nick Rowe | July 21, 2011 at 11:50 AM
I more or less guessed so Nick;-)
But it is interesting how the dominant New Keynesian regime is, for the first time in more than a decade, under real pressure. The first response is to try to "reform" or modify the regime, but hopefully this kind of challenge will also lead to a more fundamental re-thinking of some of the perspectives of how we all think about economics. That said, I am not saying that there is no value in the New Keynesian regime, and I am very sure that BoC’s research department has a very good understanding of the challenges facing monetary policy making and economic theory.
Posted by: Lars Christensen | July 21, 2011 at 12:40 PM
Good post. I've also argued that lower real and nominal rates are the new normal in America--even after (if?) the recession ends.
Posted by: Scott Sumner | July 21, 2011 at 12:57 PM
OK, but in a continuous time model, you'd never get a graph like that. The expected inflation function is obviously not analytical where it hits zero. I.e. there's a discontinuity in the derivatives (looks like the 2nd and higher) that lets the function suddenly stick at the target level. Since it's unlikely that there is any discontinuity in any underlying differential equation, and the expected path of the policy instrument appears to be perfectly smooth at that moment in time, there is no way inflation can suddenly be expected to stick at target. It must either converge gradually with the policy instrument (like in the first graph) or overshoot. But it can't just suddenly stick.
Posted by: K | July 21, 2011 at 01:04 PM
Very good.
Posted by: Sina Motamedi | July 21, 2011 at 02:47 PM
K:
[Engineering Geek]
At which point we say it is within tolerance of being zero, shrug, and move on.
[/Engineering Geek]
Posted by: Determinant | July 21, 2011 at 03:05 PM
Lars: I agree again. But I am not sure how much pressure the NK paradigm is really under. The idea that monetary policy *is* interest rate policy, and that the monetary policy causal transmission mechanism can only work through interest rates, has a very powerful hold on our thinking. If you've always seen a duck, it's hard to switch to seeing a rabbit. Not every economist reads Scott ;-).
Thanks Scott and Sina!
K. You may well be right. But presumably there exists some time path for the natural rate (which might be discontinuous) that could justify the Bank's graphs. And since the Bank hasn't drawn the graph for the natural rate (the "headwinds"), it can make it anything it likes. But nobody is really going to be worried about the second derivatives of those curves.
Posted by: Nick Rowe | July 21, 2011 at 03:08 PM
Haha...Well Scott learned from the best;-) (Or maybe you would say that Laidler and not Friedman is the best...)
But yes, it is incredible how it seems impossible for most economists to understand the transmission mechanism as anything other than interest rates. And I even think Friedman and Brunner/Meltzer are to some extent to blame, in the sense that they basically accepted IS/LM.
And economists are one thing - financial journalists another. They have never heard of any transmission mechanism other than the New Keynesian one...
Posted by: Lars Christensen | July 21, 2011 at 03:23 PM
Nick, you said:
Not every economist reads Scott ;-).
Well there's our problem! :)
Posted by: Scott Sumner | July 21, 2011 at 03:31 PM
Nick & Scott...both of you should write more about Leland Yeager...and Clark Warburton
Posted by: Lars Christensen | July 21, 2011 at 03:35 PM
If the red line in Figure 2-B was instead 'Target O/R relative to the natural rate' and not 'relative to the long-run level', it would approach the zero line much faster?
Posted by: Mark | July 21, 2011 at 04:35 PM
Mark: Yes.
Posted by: Nick Rowe | July 21, 2011 at 04:37 PM
I use a forward looking monetary policy rule based on Clarida, Gali and Gertler (2000?) and I believe it gives the same answer as the BoC (not surprising since ToTEM uses a form of the same model, though with a higher smoothing parameter). Under that model there is a smoothing of interest rate movements and so rates won't adjust to neutral immediately when there is no longer an inflation or output gap as in a conventional Taylor rule.
Posted by: Brendon | July 21, 2011 at 09:58 PM
But doesn't that generate a path for inflation that overshoots the target?
Posted by: Stephen Gordon | July 21, 2011 at 10:12 PM
Perhaps, though with well anchored inflation expectations and a large output gap as starting point the risk of overshooting might be less of a concern than stimulating a very weak economy? (I don't really know, so feel free to shred that argument, I won't be offended).
Posted by: Brendon | July 21, 2011 at 10:29 PM
I'd also add that in my small model, core inflation hugs pretty close to target through 2012 (haven't extended it to 2013 yet) - though some of this is due to a high Canadian dollar doing some of the BoC's work for it and offsetting the inflationary impact of a shrinking output gap.
Posted by: Brendon | July 21, 2011 at 10:35 PM
The overshooting point was what made Figure 2-B look wrong to me. (Full disclosure: I couldn't figure out what was going on, so I asked if Nick could. And he did.)
Nick's conjecture seems to square it, though. If the zero line was the neutral rate, then we'd have Figure 2-A, or Figure 2-B with an inflation overshoot. But if the neutral rate is temporarily lower than the long-run neutral rate - which certainly seems plausible - then everything makes sense.
But it still seems to me that there must be a better way of communicating the point.
Posted by: Stephen Gordon | July 21, 2011 at 10:45 PM
I agree on the communications side. If the Bank is working with a neutral rate lower than the traditionally assumed 3-4% then what is the harm in explicitly saying so?
Posted by: Name_withheld | July 21, 2011 at 11:02 PM
Two comments:
First, expanding on Brendon's comment, does not the elevated Canadian dollar both dampen inflation and increase the output gap? A while back Nick presented a model where an X% appreciation in the currency was equivalent to a Y% increase in the overnight rate. Could such a construct be included in the model presented above? Or in the BoC analysis?
Second, is not the BoC handcuffed in its policy decisions by the Fed remaining at the zero-lower-bound? If the BoC starts normalizing interest rates before the Fed moves, will it not attract significant amounts of foreign capital into the Canadian bond and money markets ($11B came in in May alone http://www.statcan.gc.ca/daily-quotidien/110718/dq110718a-eng.htm)? Increased inflows could cause significant appreciation of the Canadian dollar.
In a way, does not this technical note provide another reason for the BoC not to normalize rates while it waits for the Fed to finally move?
Posted by: Kosta | July 21, 2011 at 11:14 PM
Nick: "But nobody is really going to be worried about the second derivatives of those curves."
Yes, you all mock me! :-) But the coefficient of that second derivative is inertia. And inertia is what causes overshoot, and Stephen backed me up on the overshoot, so my intuition is not *that* goofy! But assuming that they have somehow stopped inflation at target, *then* I suppose I would agree that if the target instrument and the natural rate are not at that moment at equilibrium, then, given some dynamic of the natural rate, there exists a path of the target instrument that goes to equilibrium while controlling inflation exactly at target. And that path is *not* discontinuous. I.e. it involves the target instrument going smoothly (probably monotonically) to equilibrium, and does not jump straight to equilibrium.
Posted by: K | July 21, 2011 at 11:19 PM
How about some clerk at the Bank got lazy and just redrew a graph with a paint program to make a point and left a bit of a disjoint in?
Posted by: Determinant | July 22, 2011 at 12:33 AM
Why do we all find this discussion interesting? I guess Axel Leijonhufvud answered that in 1973... http://www.econ.ucla.edu/alleras/teaching/life_among_the_econs_leijonhufvud_1973.pdf
;-)
Posted by: Lars Christensen | July 22, 2011 at 07:17 AM
Brendon: Suppose you take a model where the Bank is targeting the inflation forecast, and then modify it by adding some sort of smoothing rule. It will then take longer for the Bank to cut the policy rate when the natural rate falls unexpectedly, so inflation will initially fall below target by a larger amount than it otherwise would. And then (if the natural rate eventually returns to the old normal just as the Bank expects), there will be a slight overshoot of inflation because the Bank is unwilling to raise the policy rate quickly enough. (What Stephen said).
Stephen and Name-withheld: I personally found the Bank's communication very clear. But you are not alone in finding it unclear. A lot of the CD Howe MPC people found it unclear too. I can't quite puzzle this out. Why did it communicate clearly to me, but not to others?
Here's my guess:
Draw an IS curve. Draw a vertical line at potential output, Y*. Draw a horizontal line at r* where the IS curve cuts the vertical line. r* is then the natural ("neutral") rate of interest. If you think about it this way, then any shift in the IS curve can be seen as a change in the natural rate of interest. And the job of an inflation targeting central bank is to keep the policy rate equal to the natural rate (unless there are shocks to the Phillips Curve, which create a trade-off between keeping output at potential and inflation on target).
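To put a toy equation on that (my notation, not the Bank's): if the IS curve is Y = A - b*r, then r* solves Y* = A - b*r*, so r* = (A - Y*)/b. A "headwind" that shifts the IS curve left, lowering A, lowers the natural rate by the same amount divided by b, even though output eventually returns to Y*.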
Thinking about it this way, it was natural for me to translate "headwinds" into "a fall in the natural rate". I think others translated "headwinds" into "output below potential", or something.
Dunno.
Where this all fails, of course, is in ignoring the exchange rate. In a small open economy, there's a division of labour between the interest rate and exchange rate in acting as the shock-absorber. My own theory on that division of labour is that the interest rate handles the transitory component of IS shocks, and the exchange rate handles the permanent component of IS shocks (both relative to the rest of the world). But bringing the exchange rate into that box would really have complicated the Bank's communication problem.
Aha! Which leads me straight into Kosta's comment!
Kosta: basically, yes, yes, and yes, to your 3 paragraphs. I could bring the Fed and the exchange rate into the picture, along the lines of my transitory vs permanent components sketched above. But it's too hot here today, and I don't feel up to the task! If demand is weak in the US, relative to Canada (which it is), and if this is expected to remain the case for some time, but not permanently, then the Bank should allow the exchange rate appreciation to do most (not all) of the job initially, and then let the interest rate take over more and more of the job as we approach the time at which US demand recovers relative to Canada. Which is what has been happening, so far.
K: your intuition for inflation overshooting is different from Stephen's reasoning. But, it's not exactly *inflation* inertia you are assuming. Price level inertia means the price level doesn't want to jump. Take a derivative: Inflation inertia means the inflation rate doesn't want to jump. Take a second derivative, and we get what you are assuming. I can't think of a name for it. "Acceleration inertia"?
Posted by: Nick Rowe | July 22, 2011 at 08:05 AM
Nick: Yes, by 'headwinds', I thought the Bank was talking about output staying below potential. And this didn't make sense to me, because the gap is shrinking fast, and the Bank said nothing about revising its estimates for potential or its forecasts for GDP growth.
That graph made it even more confusing, because we often treat 'long-run rate' and 'neutral rate' as interchangeable expressions.
Posted by: Stephen Gordon | July 22, 2011 at 08:21 AM
Stephen. Aha! That explains the Bank's communication failure!
Posted by: Nick Rowe | July 22, 2011 at 08:35 AM
Basically they could have replaced the whole text box with a 20 point bold font line that reads 'THE NATURAL RATE OF INTEREST ≠ THE LONG-RUN LEVEL. LOOK IT UP.'
Posted by: Mark | July 22, 2011 at 11:34 AM
Nick, the Taylor rule is known as a proportional control law in engineering. It's a well established principle that under proportional control the error (the output and inflation gaps) will not converge to zero unless the gains--the scaling constants in the Taylor rule--happen to take exactly the right values.
Posted by: Jon | July 22, 2011 at 11:41 AM
Mark: Yes. Or they could have said "the natural rate of interest isn't a God-given fundamental constant; it moves around".
That would have been clearer to economists, but less clear to journalists, who are probably quite happy with the concept of "headwinds". "When you are driving and hit an unexpected headwind, your speed will drop, so you have to put the gas pedal down to get back to 100km/h. But even when you get back to 100km/h, you can only ease up on the gas pedal a bit. You have to keep the gas pedal pressed down lower than normal until the headwind subsides".
Jon: good comparison. And that specific value would be the natural rate, which moves around.
But if you added a lagged interest rate to the Taylor Rule, R(t) = bR(t-1) + P(t) + Y(t), it would help it converge. Depends on the degree of serial correlation in shocks to the natural rate. With permanent shocks in the natural rate, you would need b=1. With temporary shocks, b less than 1. I have always thought that that argument for a lagged interest rate in the Taylor Rule made more sense than the interest rate smoothing argument. You don't directly observe the natural rate, so something like this would help.
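A quick way to see it (a sketch, using the rule as written): in a steady state with P = 0 and Y = 0, the rule reduces to R = bR. If b < 1, the only resting point is R = 0, the old normal, so the rule cannot settle at a permanently lower rate. If b = 1, any R is a resting point, so the rule can come to rest wherever the new natural rate happens to be.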
Posted by: Nick Rowe | July 22, 2011 at 12:09 PM
I now think my above response to K was incorrect. K *is* assuming inflation inertia.
Posted by: Nick Rowe | July 22, 2011 at 12:26 PM
"Suppose there is a shock to Aggregate Demand that lowers the natural rate of interest."
How about supposing there is a shock to Aggregate Supply that lowers the natural rate of interest?
Posted by: Too Much Fed | July 22, 2011 at 12:56 PM
You could interpret a currency appreciation that way.
Posted by: Stephen Gordon | July 22, 2011 at 01:07 PM
TMF: since output in my graphs and model is defined as the gap between actual output and potential output, the analysis would be exactly the same whether it was a demand shock, or a supply shock that increased potential output and lowered the natural rate.
Posted by: Nick Rowe | July 22, 2011 at 01:39 PM
Nick, Jon is right. This entire post consists of talking about implementing a Proportional-Integral-Derivative control in real life via interest rates to control the economy. Complex as that is in practice, the models presented here are exceedingly well known from Control Theory. All the stability criteria are well-studied, and stability is the key question in "will it work?"
Have any economists wandered over to Carleton's Engineering faculty and taken some control theory courses? If not I strongly suggest they do, it would make many of these conversations a lot easier. Or pick up a Control Theory textbook.
Why reinvent the wheel when the wheel already exists, it's just over on a different shelf?
Posted by: Determinant | July 22, 2011 at 02:24 PM
We do teach dynamic programming and optimal control theory. But the Taylor Rule isn't really a solution to that kind of a problem; it's more of an ad-hoc rule of thumb.
Posted by: Stephen Gordon | July 22, 2011 at 02:39 PM
It's still being discussed in control theory terms and being modelled that way. Even if it's just an observation and not formally proven, the language and concepts of control theory would be extremely useful to get everyone on the same page and to identify the key concepts.
It would greatly help to get to the root of "what's wrong here, compared to what we know to be necessary for stability? How does this formulation depart from the standard model and what does that mean?"
Several of Nick's recent posts would have benefited from some control theory explanations. Such as the time he created a look-ahead model that wasn't causal.
Posted by: Determinant | July 22, 2011 at 03:53 PM
Not sure what purpose would be served by employing technical terms few understand. We already have journals for that.
Posted by: Stephen Gordon | July 22, 2011 at 04:46 PM
In some sense that isn't really responsive. I don't think a discussion of optimal controllers is really appropriate here. It's obvious that building an accurate process model--whether black box or white box--simply isn't tractable, and you elided from 'control theory' to 'optimal control theory' so fast that I'm not sure the nuance is clear to everyone.
The gains used in the Taylor equation are ad hoc, but most of control theory is about analysis, not about designing a controller in the sense of an LQR. Those ideas came in the 60s and 70s.
It's reasonable, though, to expect economists to move from the 1800s to the 1930s in their vocabulary and conceptual grounding. Unlike Determinant, I don't feel like the stability results matter here, but the language and intuition do. This was a post about economists getting wrong-footed--and it's common to hear the Taylor rule referred to as an 'optimal policy rule'. That seems to me to hint at a problem in the economics community. One which could be related to the attitude, "it's just some technical terms".
Posted by: Jon | July 23, 2011 at 01:21 AM
Stephen:
As a physicist who switched to economics, I am sympathetic to Determinant's position: a) sometimes we need to master the vocabulary and
b) if we are within tolerance, declare the thing good enough for gunmint work and move on.
I am always surprised by the insistence on precise results when I remember how at Laval 35 years ago we were taught the wonders of back-of-the-envelope computations (which always alarmed one of my primo at the National Assembly in QC City in 81-82 who, as a former nuclear engineer, saw a +- 10% margin with rather less equanimity than me...)
As a former economics journalist, I also understand that most of them don't even reason in NK or whatever terms. They have a vague mixture of the economics-as-taught-in-Introduction-to-Business vocabulary, taught in the Journalism Dept by some reject from Econ, mixed with whatever confused idiocy they overheard at the Chamber of Commerce rubber chicken dinner.
I'll always remember that day in the summer of 1980 when my section chief at a large daily asked me to make a graph of the discount rate vs prime rate to confirm the amazing insight he just had in a blinding vision: there was a link between the two. As Nick would have said C Oh C...
The Central banks will always have a Cool Hand Luke failure to communicate.
Posted by: Jacques René Giguère | July 23, 2011 at 02:45 AM
Why do we need to explain this in terms of a long-term depressed neutral rate of interest? Can't we just explain it via expectations? Many/most of the effects of the BoC's interest rate changes should be happening before the changes themselves, because they are anticipated. Thus, we'd expect the BoC's actual operations to have to persist long after the problem appears solved, otherwise they would sabotage those expectations.
Posted by: Alex Godofsky | July 23, 2011 at 05:18 AM
I have found myself sometimes thinking in control-theory ways, over the last dozen years. Readers of this blog will be familiar with thermostat analogies, etc. But my knowledge of the sort of control theory understood by engineers is casual -- what I have picked up from various random sources, including comments on this blog. Learning more control theory from engineers would probably be a good thing for me, but so would learning a lot of other things from a lot of other people. And sometimes it's just easier to re-invent the wheel, because that way you really understand it, and also build a wheel that works best for the special purposes you need it.
There is one particular application of control theory I've been working on for the last dozen years on and off. I've talked about it before on this blog, but haven't really gotten the control-theorists to understand what I'm doing, or trying to do. Let me briefly describe it:
Inflation is determined by a black box. All we know is that if we push the control lever (the policy rate) down, inflation will increase, with a lag. But there are 101 other possible variables we can look at that might also affect inflation, also with a lag. And the problem is to build a thermostat that can keep future inflation as close as possible to 2%, by responding in the right way and in the right amount, not just to past inflation, but to those 101 other possible indicators. But we don't know what those right ways and right amounts are. And my idea is that we build an initial crude thermostat, which we know will make mistakes, but which can look at its own past history of mistaken responses and improve its own future performance by adjusting its own responsiveness. For example, if the thermostat noticed that, in the past, an increase in oil prices was followed by inflation rising above 2% in future, the thermostat would figure that it hadn't responded strongly enough to oil prices in the past, and so would respond more strongly in future. In other words, each of the parameters of the thermostat has its own thermostat that can adjust that parameter in response to past correlations showing systematic mistakes. If you can forecast future inflation using those correlations by looking at current or past inflation, current or past policy rate, and current or past 101 other indicators, you know the thermostat is making consistent mistakes and you can learn from those consistent mistakes and re-build the main thermostat so it works better. It's a thermostat with a little econometrician inside it. A meta thermostat.
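Here is a toy sketch of that meta thermostat in Python, with everything invented: one indicator (call it oil), a black box where inflation responds to lagged oil prices and the lagged policy rate, and a thermostat that starts out ignoring oil and nudges its own responsiveness whenever its past misses correlate with oil:

    import random
    random.seed(1)

    TARGET, NEUTRAL = 2.0, 3.0    # inflation target; assumed-known neutral rate
    TRUE_OIL_EFFECT = 0.5         # how the black box really responds (unknown)
    g = 0.0                       # thermostat's responsiveness to oil (learned)
    eta = 0.02                    # learning rate of the little econometrician

    oil_prev, rate_prev = 0.0, NEUTRAL
    for t in range(5000):
        # black box: inflation rises with lagged oil, falls with the lagged rate
        infl = (TARGET + TRUE_OIL_EFFECT * oil_prev
                - (rate_prev - NEUTRAL) + random.gauss(0, 0.1))
        # meta step: a miss correlated with oil means g was wrong, so nudge it
        g += eta * (infl - TARGET) * oil_prev
        # main thermostat: respond to the current indicator
        oil = random.gauss(0, 1)
        rate = NEUTRAL + g * oil
        oil_prev, rate_prev = oil, rate

    print(f"learned response to oil: {g:.2f} (true effect {TRUE_OIL_EFFECT})")

The learned coefficient converges to the point where the correlation between oil and future misses disappears, which is exactly the stopping rule described above.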
Do any of you control theorists understand what the hell I'm talking about? If not, I have to keep on inventing my own wheel, my own way.
I need to learn more about that integral/derivative/proportional distinction (though I think I've got the gist of it).
One difference between engineering and economics is that our black boxes have expectations. This is what led to the Kydland Prescott paper on time-consistency of optimal plans. The thing it's optimal to promise people you will do in future will not generally be optimal to do when the future arrives.
The Bank of Canada has done some interesting work on the Taylor Rule as a good-enough control system for a black box. They took the Taylor Rule, plus a few other policy rules that were optimal in a particular model, and ran a horse race to see which rule was most robust when you ran it through a dozen different models.
Posted by: Nick Rowe | July 23, 2011 at 06:25 AM
Alex: you lost me a bit there.
Posted by: Nick Rowe | July 23, 2011 at 06:32 AM
Nick:
Your idea is an Adaptive Filter (http://en.wikipedia.org/wiki/Adaptive_filter): a filter that changes its transfer function based on past performance.
As that article states, most adaptive filters are digital because you need memory in order to make them work and computer code allows sufficient adaptability to be implemented.
As Jon and Jacques said it would be nice if economists could master the vocabulary. Then the rest of us could throw in our two bits on how efficient the model really is.
A good formal Control Theory grounding gives you the intuition to know what will work and what doesn't and where you need to go from here.
Posted by: Determinant | July 23, 2011 at 01:37 PM
Okay, the disclaimer here is I don't actually know any macro, I just read Scott's blog. So thinking about monetary policy in terms of interest rates has never made sense to me; I think about it in terms of OMOs, because it seems a lot clearer how those actually affect things.
So, the story is that demand for CAD rises and is not (immediately) met by increased supply by the BoC, leading to a shortfall in AD. The BoC decides to do C$X of QE over a period of a year. Expectations respond immediately, and the actual economy recovers fairly quickly because C$X happened to be roughly the right amount. If the BoC then says "oh look things are fine, I guess we don't have to do QE anymore" then clearly the economy would go right back into the recession.
Does this same logic work for interest rates?
Posted by: Alex Godofsky | July 23, 2011 at 02:20 PM
Determinant: That "Adaptive Filter" idea does sound rather like what I am thinking of. Not sure that I quite get what a "filter" is, however. Is a house thermostat a (crude) filter? Would a more complicated thermostat, that also responded to other variables like windspeed and outside temperature be a "filter"?
Alex: I follow you now. Yes, roughly the same logic works for interest rates.
Posted by: Nick Rowe | July 23, 2011 at 10:02 PM
A thermostat is a control, but controls and filters are deeply related. They are taught side-by-side. Controls have feedback, filters generally do not. Filters remove a part of a signal, controls compare two signals and generate a feedback based on the comparison.
A thermostat is basic closed-loop control.
Both signals and filters are customarily analyzed in the frequency domain (j-omega) or the Laplace s-domain.
Both filters and controls are based on transfer functions and getting the transfer function right based on what components you have and what you want to do is what control theory is all about.
To be really, really, really basic, a control is a filter with extra feedback and wiring.
Posted by: Determinant | July 23, 2011 at 10:23 PM
Here you go, http://en.wikipedia.org/wiki/Adaptive_control
All about adaptive control.
Off the top of my head, the Taylor Rule is a Model Reference Adaptive Control.
See the intro to that at http://www.pages.drexel.edu/~kws23/tutorials/MRAC/MRAC.html
I love that diagram. It's delightful. Nick, you just made my day. :)
Sorry, junkie just got his fix. ;)
Posted by: Determinant | July 23, 2011 at 10:44 PM
Determinant:
Aha! That looks like it!
"Of the top of my head, the Taylor Rule is a Model Reference Adaptive Control."
Nope. The Taylor Rule is a non-adaptive control law. It's exactly like a standard mechanical house thermostat, or an engine governor, except that it looks at 2 variables (output and inflation) rather than just the house temperature. There are two parameters in a Taylor Rule, and one is set at 1.5 and the other is set at 0.5. They don't adapt. They aren't adjusted in the light of what we learn about the system while it is being controlled, or if the system changes over time.
What I was trying to do was adaptive control. I didn't know that was what it was called. And nobody understood me, sniff! (Actually, some economists did understand what I was getting at.)
In my approach, you start with something like the Taylor Rule, then watch for systematic errors, then adjust that 1.5 and 0.5 slowly up or down depending on the systematic errors you observe. The aim is that eventually it converges on the right control law (or close to right, if the system doesn't change too quickly over time).
But the math and terminology of that Wikipedia post are beyond me, I think.
Posted by: Nick Rowe | July 23, 2011 at 11:34 PM
Nick: let's say you build your model. If it forecasts inflation better than the TIPS spread, why don't you start a hedge fund? Shouldn't the CB just target the TIPS spread, or do you figure they should be able to beat the market like a successful hedge fund?
Posted by: K | July 23, 2011 at 11:54 PM
K: doesn't that lead to the circularity problem?
Posted by: Alex Godofsky | July 24, 2011 at 12:36 AM
K:
Forget the Bank of Canada for a minute. Let's think about EMH. I think EMH is neither true nor false. Traders make forecasts, and prices depend on traders' forecasts. Then next period other traders look back over the history of prices, and try to spot systematic errors in past forecasts, and if they find some they adjust their forecasting models, and the pattern of prices changes in response to those changing forecast models. And repeat, forever, while the underlying structure of the world changes, both for exogenous reasons, and because traders' forecast models are changing. EMH is more of a process than a statement about how the world is.
The Bank of Canada is similar. It targets its own forecast of 2-year ahead inflation. And the pattern of inflation we see depends on the Bank's decision rule, which depends on its own forecasting model. My idea is that it starts out with some decision rule, then next period looks back over the history of inflation, to see if 2 year ahead inflation could have been predicted, to try to spot systematic errors in its past decision rule, then adjusts its decision rule accordingly, then repeats.
Both can be seen as cases of adaptive learning. The second is also adaptive control.
Yes, if the TIPS spread is an efficient predictor of future inflation, and if the market learns faster than the Bank, you might argue that the Bank should just leave the learning to the market, and target the market forecast. Circularity problem aside, I'm agnostic on that question. I was just taking as given that the Bank targets its own forecast, not the market forecast.
Posted by: Nick Rowe | July 24, 2011 at 07:12 AM
The fundamental questions are:
1. Has the neutral rate fallen?
2. If so, what factors would explain its fall?
If everyone believes that neutral is still around 4%, then simple policy rules would dictate that the BoC should have continued with the overnight rate increases that began last year. However, if neutral is now at 2% due to foreign headwinds, then the BoC is justified in keeping the overnight rate lower for a longer period of time.
The challenge for economists is to quantify the impact of these headwinds on the neutral rate; the challenge for the Bank's communication department is to explain all this to the public without fully defining what the neutral rate is, where it was in the past, where it is today, and where it is likely to be in the future.
Posted by: Greg Tkacz | July 24, 2011 at 11:28 AM
Nick, you're breaking my heart! You have a PhD, I have faith in your ability to master the math!
Anyway, it all begins with a little simplification. All those s equations are Laplace Transformations. Differentiation is given a differential operator s and integration is defined as 1/s. Then you throw in the initial conditions and you get an equation.
PID controllers imply s^2 as the highest power of s in the equation.
Maybe next summer you could hire an Engineering grad student to do the math for Nick's Adaptive Inflation Controller (Patent Pending). Come on, there has to be at least a few papers in that.
Posted by: Determinant | July 24, 2011 at 01:37 PM
Determinant: thanks for your faith in me! But I have to confess you lost me at "Laplace Transformations". I don't know what they are.
Posted by: Nick Rowe | July 24, 2011 at 10:36 PM
Greg:
1. If you believe the IS curve has shifted left, and the LRAS curve has not shifted (or shifted right), then I would say that by definition the natural rate has fallen.
2. That's like asking "what caused the recession?". At least, within the New Keynesian/Neo-Wicksellian paradigm followed by the Bank.
I agree with the rest of your comment.
Posted by: Nick Rowe | July 24, 2011 at 10:41 PM
*Determinant's head hits keyboard*
Nick, Laplace Transforms let you turn linear differential equations into ordinary algebra and solve them using a transform table!
How could you even look at a differential equation and not think of using them? Dear me, what happened to your math education?
http://en.wikipedia.org/wiki/Laplace_transform
http://en.wikipedia.org/wiki/Laplace_transform_applied_to_differential_equations
The Laplace Transform turns integration and differentiation into division and multiplication.
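A one-line example: since L{y'} = sY(s) - y(0), the equation y' + ay = 0 becomes sY(s) - y(0) + aY(s) = 0, so Y(s) = y(0)/(s + a), and the transform table hands back y(t) = y(0)e^(-at). Pure algebra once you're in s-space.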
*Determinant goes over to couch to lie down and recover.*
Posted by: Determinant | July 24, 2011 at 10:59 PM
"Jacques pats Determinant's head with a cold towel as he remember how in 1975 first-term Calculus for economists 1 he learned Laplace Tranforms but then Laval was a math hotbed. Jacques ponders how he was a soldier once and young, or at least a physicist fresh from Mathematical Physics 2 and thought that economists had it easy as they didn't have to go through Kreyszig's Advanced Engineering Mathematics ..."
Jacques also thinks that he should stop thinking about Laplace transforms and should finish booking his trip to the U.S. Air Force Museum, as even Ken Kesey likes to look at a few hundred pieces of good engineering.
Posted by: Jacques René Giguère | July 25, 2011 at 02:26 AM
Okay, okay but you do know the z-transform right?
Well... usually that's handled with the "I", the integrator. So given the premise that some coefficients in the Taylor rule will give a steady-state error of zero, you'll find them, but so would simply integrating the output and inflation gaps, so that persistent errors produce stronger policy responses.
The gains are really selected on rather different grounds:
- Stronger gains tend to mean a faster settling time, i.e., how quickly the system returns to its targets following a disturbance
- Stronger gains eventually lead to overshoot
- Time lags (such as in computing output, even if you believe policy is instantaneous) mean some gains (too weak) will produce instability and some gains (too strong) will produce instability.
Andy Harless wrote:
Control theory intuition neatly explains why the gains in the Taylor equation cannot be too large.
It also suggests that performance would be improved with an integral term. That's again the "I", and it's what Andy's getting at without knowing the language when he wrote:
Interesting, this could be more effective in the economic context because of expectations.
Posted by: Jon | July 25, 2011 at 02:38 AM
Jacques: thanks for those memories. They give me solace. I took math A-level at school in England. And got a grade of D. But then it was the early 1970's, and a lot of other distractions were happening for a teenage boy. Then a quick one-term math course for incoming MA Economics students at UWO, where I first learned what a matrix was, and how to calculate a determinant (though I have forgotten since). I've been faking it since then.
Jon: "Okay, okay but you do know the z-transform right?"
Nope. Never heard of it. Or maybe I have, but have forgotten. And I can't understand those Wiki pages Determinant helpfully linked for me.
"Well... usually that handled with the "I", the integrator. So given the premise that some coefficients in the taylor rule will give a steady-state error of zero, you'll find it, but so would simply integrating the output and inflation gap, so that persistent errors produced stronger policy responses."
That's different. Suppose the Taylor Rule says R(t) = 5% + 1.5(deviation of inflation from target) + 0.5(deviation of output from potential). And suppose the bank is targeting its forecast of 2-year ahead inflation, trying to keep it at a constant 2%.
If you find that inflation was on average below the 2% target, when that Taylor Rule was being followed, then you know the constant term, 5%, is too high. So you slowly revise that 5% down.
Suppose instead you noticed a positive historical correlation between the output gap and 2-year ahead inflation. That is telling you that the 0.5 coefficient is too small, and should be adjusted up to 0.6, or something. (And if it's a negative correlation, that is telling you that the 0.5 coefficient is too large, and should be adjusted down to 0.4).
And if you noticed a positive correlation between current inflation and 2 year ahead inflation, that is telling you that the 1.5 coefficient is too small, and should be adjusted up.
And if you noticed a positive/negative historical correlation between X and 2 year ahead inflation, that means you should add X to the Taylor Rule, with a small positive/negative coefficient. (Assuming the Bank observes X).
If anyone is *really* into this, here's my old stuff:
economics.ca/2003/papers/0266.pdf
Posted by: Nick Rowe | July 25, 2011 at 08:51 AM
My point was slightly different, and it comes down to: there are simpler methods in common use.
I agree that what you want is different. What I want to know is why you think it's better (which it may well be).
Posted by: Jon | July 25, 2011 at 10:14 AM
Jon: OK. Let me first try to understand better your "simpler methods in common use".
In order to handle the case where the constant term is too big (the 5% should be 4%, so we get inflation on average below target), I can think of two different versions of that "simpler method":
1. Change the Taylor Rule to: R(t) = 5% + b[R(t-1)-5%] + 1.5(inflation gap) + 0.5(output gap)
2. Change the Taylor Rule to R(t) = 1.5(inflation gap) + 0.5(output gap) + b(integral of inflation gap over time) (b less than or maybe equal to one)
1 is what the central bank seems to do in practice. It has been rationalised as "interest rate smoothing", but I see it more as a way for the Bank to adapt towards a natural rate that is moving over time in some sort of persistent way.
2 seems to me to end up in price level targeting, which might be a good policy, but if what you are trying to do is inflation targeting, this doesn't seem the right solution.
Plus, neither of those two methods seem to deal with the case (the one I concentrate on) where the coefficients 1.5 or 0.5 are wrong, so the Bank is over-reacting or under-reacting to fluctuations in inflation or output.
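To make the contrast concrete, here is a small sketch (all coefficients invented; expected inflation anchored at target so P(t) = Y(t), which folds the inflation and output responses into one gain; the natural rate permanently 1 point lower). The purely proportional rule leaves a permanent gap, while adding the integral term from version 2 drives it to zero:

    # Sketch: IS curve Y(t) = N - R(t-1), with the natural rate N stuck at -1.
    T, N = 40, -1.0

    def simulate(kp, ki):
        R_prev, integral, Y = 0.0, 0.0, 0.0
        for _ in range(T):
            Y = N - R_prev               # output gap
            integral += Y                # accumulated past gaps (the "I" term)
            R_prev = kp * Y + ki * integral
        return Y

    print(f"gap after {T} periods, proportional only: {simulate(0.5, 0.0):.3f}")
    print(f"gap after {T} periods, with integral term: {simulate(0.5, 0.25):.3f}")

The integral rule ends up with R = N and a zero gap, and since the integral of the inflation gap is the price level gap, the same arithmetic shows why version 2 shades into price level targeting.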
Posted by: Nick Rowe | July 25, 2011 at 10:56 AM
The scale factor of the integral term can be above one, just as the constants on the proportional terms need not sum to 2 or 1 or any other value you may pick.
Second whether this behaves as price level targeting will depend on the magnitude of the proportional and integral terms. Indeed, I'll claim that price level targeting is equivalent to omitting the proportional terms.
What this means is, in part, that level vs rate targeting is a false dichotomy.
The integral term will always drive the steady state error to zero. Now I suspect that's not what you mean by over- and under-reacting.
Perhaps what you have in mind are the two ideas I mentioned before, namely how quickly the policy returns us to equilibrium after a disturbance, and whether it does so with overshoots.
So which of those ideas do you have in mind?
Posted by: Jon | July 25, 2011 at 12:04 PM
Jon: Let me try to formalise my ideas a bit more.
Let P(t) be inflation, R(t) the policy rate chosen by the Bank, and I(t) the Bank's information set at time t. Assume perfect memory by the Bank, so I(t) includes I(t-1) etc. I(t) also includes R(t). (I.e. the Bank knows what policy rate it is setting, and has set in the past.)
The Bank has a reaction function F(.) that sets R(t) as a function of I(t). R(t) = F(I(t)).
The Bank is targeting 24 month ahead inflation. It wants inflation to return to the 2% target 24 months ahead, and stay there. (It fears there will be bad consequences if it tries to get inflation back to target more quickly than 24 months ahead). But there will be shocks that it can't anticipate, so it knows that future inflation will fluctuate. Instead, the job is to choose some function F(.) such that:
E(P(t+24) conditional on I(t) and F(.)) = 2%
(And E(P(t+25) conditional on I(t) and F(.)) = 2% etc.)
Suppose that the Bank has been using some reaction function F(.) for the last 20 years. Looking back on the last 20 years of data, we estimate a linear regression of P(t)-2% on I(t-24). We exclude R(t-24) from the RHS of that regression, because leaving it in would cause perfect multicollinearity between R(t-24) and the other elements of I(t-24). If the Bank has chosen F(.) correctly, the estimate from that regression should be statistical garbage. All the parameters -- the constant and all the slope terms -- should be insignificantly different from zero.
If that linear regression is not garbage -- if some of the parameters are non-zero -- we know the Bank has made systematic mistakes over the past 20 years.
If the constant is non-zero, we know the constant term in F(.) is wrong. If the slope on some variable X(t) within I(t) is non-zero, we know the coefficient on X(t) in F(.) is wrong. The Bank either over-reacted (the coefficient on X(t) was too large) or underreacted (the coefficient on X(t) was too small) to changes in X.
We could also estimate a regression of P(t)-2% on R(t-24). That regression should also be garbage, if the Bank has chosen F(.) correctly. If the estimated slope coefficient in that equation is non-zero, that tells us whether the Bank overreacted or underreacted to fluctuations in everything else in I(t).
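As a toy illustration of that diagnostic (the data and the 0.3 coefficient are entirely invented), here is the mechanical version in Python: generate misses that are partly predictable from one element X of I(t-24), then check whether the regression is garbage:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 240                                  # 20 years of monthly data
    X = rng.normal(size=n)                   # one indicator in I(t-24)
    miss = 0.3 * X + rng.normal(0, 0.5, n)   # P(t) - 2%: partly predictable

    A = np.column_stack([np.ones(n), X])     # regress the miss on a constant and X
    beta, *_ = np.linalg.lstsq(A, miss, rcond=None)
    resid = miss - A @ beta
    se = np.sqrt(resid @ resid / (n - 2) * np.diag(np.linalg.inv(A.T @ A)))
    for name, b, s in zip(["constant", "slope on X"], beta, se):
        print(f"{name}: {b:+.3f} (std err {s:.3f})")

A slope several standard errors from zero is the signal that the coefficient on X in F(.) was wrong; if the Bank had chosen F(.) correctly, both estimates would be statistical garbage.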
Posted by: Nick Rowe | July 25, 2011 at 02:11 PM
Yes... I understand your view here.
But, suppose: it's not generally possible for a causal system in the presence of lagged information to exhibit the property you mention, no matter how much you tune those constants.
I assert that the reason this must be so is that following a disturbance, you must react exactly strongly enough to reject the disturbance in zero time and then go slack. A time lag represents a filter--events at higher frequency than the lag are not visible. So you cannot have a step-response, as would be necessary for there to be no systematic mistakes--since the Fourier transform of such a step-response has frequency components higher than the lag.
So does your scheme have a unique solution at least? (Or is there a family of solutions which achieves some lowest possible correlation--I think this is more likely.)
Posted by: Jon | July 26, 2011 at 01:35 AM