
Comments


Nick, this is just yet another reason why we should all stop judging monetary policy "tightness" based on a Taylor rule. Why not instead focus on monetary disequilibrium? That, of course, we can only observe indirectly through different asset prices and the financial markets, but the focus on Taylor rules is in my view extremely damaging and is likely to lead central banks from one policy mistake to another. In fact I really don't believe that there ever was such a thing as a Taylor rule in the "real world". Central banks thought they were following a Taylor rule, but in fact the markets were always ahead of the central banks.

So to me the BoC seems a bit desperate in holding on to the old framework rather than acknowledging that the Taylor rule is dead...it might not have abandoned central bankers, but central bankers should abandon it. In fact money supply growth adjusted for swings in velocity might do a better job - it is not necessarily my preferred monetary instrument, but I have a hard time seeing that the Taylor rule is any better...

Lars: I lean towards agreeing with you. But I put aside my own perspective in writing this post, and looked at the world through the Bank of Canada's New Keynesian/Neo-Wicksellian eyes, in order to better understand and explain what they are saying.

I more or less guessed so Nick;-)

But it is interesting how the dominant New Keynesian regime is, for the first time in more than a decade, under real pressure. The first response is to try to "reform" or modify the regime, but hopefully these kinds of challenges will also lead to a more fundamental re-thinking of how we all think about economics. That said, I am not saying that there is no value in the New Keynesian regime, and I am very sure that the BoC's research department has a very good understanding of the challenges facing monetary policy making and economic theory.

Good post. I've also argued that lower real and nominal rates are the new normal in America--even after (if?) the recession ends.

OK, but in a continuous time model, you'd never get a graph like that.  The expected inflation function is obviously not analytical where it hits zero. I.e. there's a discontinuity in the derivatives (looks like the 2nd and higher) that lets the function suddenly stick at the target level.  Since it's unlikely that there is any discontinuity in any underlying differential equation, and the expected path of the policy instrument appears to be perfectly smooth at that moment in time, there is no way inflation can suddenly be expected to stick at target.  It must either converge gradually with the policy instrument (like in the first graph) or overshoot.  But it can't just suddenly stick.  

Very good.

K:

[Engineering Geek]

At which point we say it is within tolerance of being zero, shrug, and move on.

[/Engineering Geek]

Lars: I agree again. But I am not sure how much pressure the NK paradigm is really under. The idea that monetary policy *is* interest rate policy, and that the monetary policy causal transmission mechanism can only work through interest rates, has a very powerful hold on our thinking. If you've always seen a duck, it's hard to switch to seeing a rabbit. Not every economist reads Scott ;-).

Thanks Scott and Sina!

K. You may well be right. But presumably there exists some time path for the natural rate (which might be discontinuous) that could justify the Bank's graphs. And since the Bank hasn't drawn the graph for the natural rate (the "headwinds"), it can make it anything it likes. But nobody is really going to be worried about the second derivatives of those curves.

Haha...Well, Scott learned from the best ;-) (Or maybe you would say that Laidler, and not Friedman, is the best...)

But yes, it is incredible how it seems impossible for most economists to understand the transmission mechanism as anything other than interest rates. And I even think Friedman and Brunner/Meltzer are to some extent to blame, in the sense that they basically accepted IS/LM.

And economists are one thing - financial journalists another. They have never heard of any transmission mechanism other than the New Keynesian one...

Nick, you said:

Not every economist reads Scott ;-).

Well there's our problem! :)

Nick & Scott...both of you should write more about Leland Yeager...and Clark Warburton

If the red line in Figure 2-B was instead 'Target O/R relative to the natural rate' and not 'relative to the long-run level', it would approach the zero line much faster?

Mark: Yes.

I use a forward-looking monetary policy rule based on Clarida, Gali and Gertler (2000?) and I believe it gives the same answer as the BoC (not surprising since ToTEM uses a form of the same model, though with a higher smoothing parameter). Under that model there is a smoothing of interest rate movements, and so rates won't adjust to neutral immediately when there is no longer an inflation or output gap, as they would under a conventional Taylor rule.
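
(For concreteness -- and this is just the generic form, not the exact ToTEM specification -- a forward-looking rule with smoothing looks something like R(t) = rho*R(t-1) + (1-rho)*[r* + a*(expected inflation gap, k quarters ahead) + b*(output gap)], where rho is the smoothing parameter and a and b are illustrative coefficients. The closer rho is to one, the more slowly the rate moves back to neutral even after the gaps have closed.)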

But doesn't that generate a path for inflation that overshoots the target?

Perhaps, though with well-anchored inflation expectations and a large output gap as a starting point, the risk of overshooting might be less of a concern than stimulating a very weak economy? (I don't really know, so feel free to shred that argument, I won't be offended).

I'd also add that in my small model, core inflation hugs pretty close to target through 2012 (haven't extended it to 2013 yet) - though some of this is due to a high Canadian dollar doing some of the BoC's work for it and offsetting the inflationary impact of a shrinking output gap.

The overshooting point was what made Figure 2-B look wrong to me. (Full disclosure: I couldn't figure out what was going on, so I asked if Nick could. And he did.)

Nick's conjecture seems to square it, though. If the zero line was the neutral rate, then we'd have Figure 2-A, or Figure 2-B with an inflation overshoot. But if the neutral rate is temporarily lower than the long-run neutral rate - which certainly seems plausible - then everything makes sense.

But it still seems to me that there must be a better way of communicating the point.

I agree on the communications side. If the Bank is working with a neutral rate lower than the traditionally assumed 3-4% then what is the harm in explicitly saying so?

Two comments:

First, expanding on Brendon's comment, doesn't the elevated Canadian dollar both dampen inflation and increase the output gap? A while back Nick presented a model where an X% appreciation in the currency was equivalent to a Y% increase in the overnight rate. Could such a construct be included in the model presented above? Or in the BoC analysis?

Second, isn't the BoC handcuffed in its policy decisions by the Fed remaining at the zero lower bound? If the BoC starts normalizing interest rates before the Fed moves, will it not attract significant amounts of foreign capital into the Canadian bond and money markets ($11B came in in May alone: http://www.statcan.gc.ca/daily-quotidien/110718/dq110718a-eng.htm)? Increased inflows could cause significant appreciation of the Canadian dollar.

In a way, doesn't this technical note provide another reason for the BoC not to normalize rates while it waits for the Fed to finally move?

Nick: "But nobody is really going to be worried about the second derivatives of those curves."

Yes, you all mock me! :-) But the coefficient of that second derivative is inertia. And inertia is what causes overshoot, and Stephen backed me up on the overshoot, so my intuition is not *that* goofy! But assuming that they have somehow stopped inflation at target, *then* I suppose I would agree that if the target instrument and the natural rate are not at that moment at equilibrium, then, given some dynamic of the natural rate, there exists a path of the target instrument that goes to equilibrium while controlling inflation exactly at target. And that path is *not* discontinuous. I.e. it involves the target instrument going smoothly (probably monotonically) to equilibrium, and does not jump straight to equilibrium.

How about some clerk at the Bank got lazy and just redrew a graph with a paint program to make a point and left a bit of a disjoint in?

Why do we all find this discussion interesting? I guess Axel Leijonhufvud explained it back in 1973:
http://www.econ.ucla.edu/alleras/teaching/life_among_the_econs_leijonhufvud_1973.pdf

;-)

Brendon: Suppose you take a model where the Bank is targeting the inflation forecast, and then modify it by adding some sort of smoothing rule. It will then take longer for the Bank to cut the policy rate when the natural rate falls unexpectedly, so inflation will initially fall below target by a larger amount than it otherwise would. And then (if the natural rate eventually returns to the old normal just as the Bank expects), there will be a slight overshoot of inflation because the Bank is unwilling to raise the policy rate quickly enough. (What Stephen said).

Stephen and Name-withheld: I personally found the Bank's communication very clear. But you are not alone in finding it unclear. A lot of the CD Howe MPC people found it unclear too. I can't quite puzzle this out. Why did it communicate clearly to me, but not to others?

Here's my guess:

Draw an IS curve. Draw a vertical line at potential output, Y*. Draw a horizontal line at r* where the IS curve cuts the vertical line. r* is then the natural ("neutral") rate of interest. If you think about it this way, then any shift in the IS curve can be seen as a change in the natural rate of interest. And the job of an inflation targeting central bank is to keep the policy rate equal to the natural rate (unless there are shocks to the Phillips Curve, which create a trade-off between keeping output at potential and inflation on target).
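
(In the simplest linear case: if the IS curve is Y = A - b*r, then r* = (A - Y*)/b, so any leftward shift of the IS curve, i.e. any fall in A, is by definition a fall in the natural rate of interest.)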

Thinking about it this way, it was natural for me to translate "headwinds" into "a fall in the natural rate". I think others translated "headwinds" into "output below potential", or something.

Dunno.

Where this all fails, of course, is in ignoring the exchange rate. In a SOE, there's a division of labour between the interest rate and exchange rate in acting as the shock-absorber. My own theory on that division of labour is that the interest rate handles the transitory component of IS shocks, and the exchange rate handles the permanent component of IS shocks (both relative to the ROW). But bringing the exchange rate into that box would really have complicated the Bank's communication problem.

Aha! Which leads me straight into Kosta's comment!

Kosta: basically, yes, yes, and yes, to your 3 paragraphs. I could bring the Fed and the exchange rate into the picture, along the lines of my transitory vs permanent components sketched above. But it's too hot here today, and I don't feel up to the task! If demand is weak in the US, relative to Canada (which it is), and if this is expected to remain the case for some time, but not permanently, then the Bank should allow the exchange rate appreciation to do most (not all) of the job initially, and then let the interest rate take over more and more of the job as we approach the time at which US demand recovers relative to Canada. Which is what has been happening, so far.

K: your intuition for inflation overshooting is different from Stephen's reasoning. But, it's not exactly *inflation* inertia you are assuming. Price level inertia means the price level doesn't want to jump. Take a derivative: Inflation inertia means the inflation rate doesn't want to jump. Take a second derivative, and we get what you are assuming. I can't think of a name for it. "Acceleration inertia"?

Nick: Yes, by 'headwinds', I thought the Bank was talking about output staying below potential. And this didn't make sense to me, because the gap is shrinking fast, and the Bank said nothing about revising its estimates for potential or its forecasts for GDP growth.

That graph made it even more confusing, because we often treat 'long-run rate' and 'neutral rate' as interchangeable expressions.

Stephen. Aha! That explains the Bank's communication failure!

Basically they could have replaced the whole text box with a 20-point bold font line that reads 'THE NATURAL RATE OF INTEREST ≠ THE LONG-RUN LEVEL. LOOK IT UP.'

Nick, the Taylor rule is known as a proportional control law in engineering. It's a well-established principle that under proportional control the error (the output and inflation gaps) will not converge to zero unless the gains--the scaling constants in the Taylor rule--happen to be exactly a specific value.

Mark: Yes. Or they could have said "the natural rate of interest isn't a God-given fundamental constant; it moves around".

That would have been clearer to economists, but less clear to journalists, who are probably quite happy with the concept of "headwinds". "When you are driving and hit an unexpected headwind, your speed will drop, so you have to put the gas pedal down to get back to 100km/h. But even when you get back to 100km/h, you can only ease up on the gas pedal a bit. You have to keep the gas pedal pressed down lower than normal until the headwind subsides".

Jon: good comparison. And that specific value would be the natural rate, which moves around.

But if you added a lagged interest rate term to the Taylor Rule, R(t) = bR(t-1) + P(t) + Y(t), it would help it converge. It depends on the degree of serial correlation in shocks to the natural rate. With permanent shocks to the natural rate, you would need b=1. With temporary shocks, b less than 1. I have always thought that that argument for a lagged interest rate in the Taylor Rule made more sense than the interest rate smoothing argument. You don't directly observe the natural rate, so something like this would help.
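
Here is a purely illustrative back-of-the-envelope simulation of that point. The toy economy and all the numbers are made up for this sketch (nothing here is the Bank's model); it just shows what the lagged term does when the natural rate takes a persistent fall:

    # Purely illustrative: compare Taylor rules with different weights b on the lagged
    # rate when the (unobserved) natural rate falls and stays down.
    import numpy as np

    T = 60
    r_nat = np.full(T, 3.0)
    r_nat[10:] = 1.0   # persistent "headwind": the natural rate falls from 3% to 1% and stays there

    def simulate(b):
        """Rule: R = b*R(-1) + (1-b)*3% + 1.5*(inflation gap) + 0.5*(output gap)."""
        R, pi_gap = 3.0, 0.0
        for t in range(T):
            y_gap = -0.5 * (R - r_nat[t])        # demand is weak when policy sits above the natural rate
            pi_gap = 0.9 * pi_gap + 0.3 * y_gap  # sticky inflation responds to the output gap
            R = b * R + (1 - b) * 3.0 + 1.5 * pi_gap + 0.5 * y_gap
        return pi_gap

    for b in (0.0, 0.5, 1.0):
        print("b = %.1f  ->  long-run inflation gap: %+.2f" % (b, simulate(b)))

With b=0 the constant term (here 3%) is simply wrong forever and inflation sits below target; with b=1 the lagged term effectively lets the rule find the new natural rate and the inflation gap gets squeezed back to zero.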

I now think my above response to K was incorrect. K *is* assuming inflation inertia.

"Suppose there is a shock to Aggregate Demand that lowers the natural rate of interest."

How about supposing there is a shock to Aggregate Supply that lowers the natural rate of interest?

You could interpret a currency appreciation that way.

TMF: since output in my graphs and model is defined as the gap between output and potential output, the analysis would be exactly the same whether it was a demand shock, or a supply shock that increased potential output and lowered the natural rate.

Nick, Jon is right. This entire post consists of talking about implementing a Proportional-Integral-Derivative control in real life, via interest rates, to control the economy. Complex as that is, the models presented here are exceedingly well known from Control Theory. All the stability criteria are well studied, and stability is the key question in "will it work?"

Have any economists wandered over to Carleton's Engineering faculty and taken some control theory courses? If not I strongly suggest they do, it would make many of these conversations a lot easier. Or pick up a Control Theory textbook.

Why reinvent the wheel when the wheel already exists, it's just over on a different shelf?

We do teach dynamic programming and optimal control theory. But the Taylor Rule isn't really a solution to that kind of a problem; it's more of an ad-hoc rule of thumb.

It's still being discussed in control theory terms and being modelled that way. Even if it's just an observation and not formally proven, the language and concepts of control theory would be extremely useful to get everyone on the same page and to identify the key concepts.

It would greatly help to get to the root of "what's wrong here, compared to what we know to be necessary for stability? How does this formulation depart from the standard model and what does that mean?"

Several of Nick's recent posts would have benefited from some control theory explanations. Such as the time he created a look-ahead model that wasn't causal.

Not sure what purpose would be served by employing technical terms few understand. We already have journals for that.

We do teach dynamic programming and optimal control theory. But the Taylor Rule isn't really a solution to that kind of a problem; it's more of an ad-hoc rule of thumb.

In some sense that isn't really responsive. I don't think a discussion of optimal controllers is really appropriate here. It's obvious that building an accurate process model--whether black box or white box--simply isn't tractable, and you elided from 'control theory' to 'optimal control theory' so fast that I'm not sure the nuance is clear to everyone.

The gains used in the Taylor equation are ad hoc, but most of control theory is about analysis, not about designing a controller in the sense of an LQR. Those ideas came in the 60s and 70s.

It's reasonable, though, to expect economists to move from the 1800s to the 1930s in their vocabulary and conceptual grounding. Unlike Determinant, I don't feel like the stability results matter here, but the language and intuition do. This was a post about economists getting wrong-footed--and it's common to hear the Taylor rule referred to as an 'optimal policy rule'. That seems to me to hint at a problem in the economics community. One which could be related to the attitude, "it's just some technical terms".

Stephen:
As a physicist who switched to economics, I am sympathetic to Determinant's position: a) sometimes we need to master the vocabulary and
b) if we are within tolerance, declare the thing good enough for gunmint work and move on.
I am always surprised by the insistence on precise results when I remember how, at Laval 35 years ago, we were taught the wonders of back-of-the-envelope computations (which always alarmed one of my primo at the National Assembly in QC City in 81-82 who, as a former nuclear engineer, saw a +-10% margin with rather less equanimity than me...)

As a former economics journalist, I also understand that most of them don't even reason in NK or whatever terms. They have a vague mixture of the economics-as-taught-in-Introduction-to-Business vocabulary, taught in the Journalism Dept by some reject from Econ, mixed with whatever confused idiocy they overheard at the Chamber of Commerce rubber-chicken dinner.
I'll always remember that day in the summer of 1980 when my section chief at a large daily asked me to make a graph of the discount rate vs prime rate to confirm the amazing insight he just had in a blinding vision: there was a link between the two. As Nick would have said C Oh C...
The Central banks will always have a Cool Hand Luke failure to communicate.

Why do we need to explain this in terms of a long-term depressed neutral rate of interest? Can't we just explain it via expectations? Many/most of the effects of the BoC's interest rate changes should be happening before the changes themselves, because they are anticipated. Thus, we'd expect the BoC's actual operations to have to persist long after the problem appears solved, otherwise they would sabotage those expectations.

I have found myself sometimes thinking in control-theory ways, over the last dozen years. Readers of this blog will be familiar with thermostat analogies, etc. But my knowledge of the sort of control theory understood by engineers is casual -- what I have picked up from various random sources, including comments on this blog. Learning more control theory from engineers would probably be a good thing for me, but so would learning a lot of other things from a lot of other people. And sometimes it's just easier to re-invent the wheel, because that way you really understand it, and also build a wheel that works best for the special purposes you need it for.

There is one particular application of control theory I've been working on for the last dozen years on and off. I've talked about it before on this blog, but haven't really gotten the control-theorists to understand what I'm doing, or trying to do. Let me briefly describe it:

Inflation is determined by a black box. All we know is that if we push the control lever (the policy rate) down, inflation will increase, with a lag. But there are 101 other possible variables we can look at that might also affect inflation, also with a lag. And the problem is to build a thermostat that can keep future inflation as close as possible to 2%, by responding in the right way and in the right amount, not just to past inflation, but to those 101 other possible indicators. But we don't know what those right ways and right amounts are. And my idea is that we build an initial crude thermostat, which we know will make mistakes, but which can look at its own past history of mistaken responses and improve its own future performance by adjusting its own responsiveness. For example, if the thermostat noticed that, in the past, an increase in oil prices was followed by inflation rising above 2% in future, the thermostat would figure that it hadn't responded strongly enough to oil prices in the past, and so would respond more strongly in future. In other words, each of the parameters of the thermostat has its own thermostat that can adjust that parameter in response to past correlations showing systematic mistakes. If you can forecast future inflation using those correlations by looking at current or past inflation, current or past policy rate, and current or past 101 other indicators, you know the thermostat is making consistent mistakes and you can learn from those consistent mistakes and re-build the main thermostat so it works better. It's a thermostat with a little econometrician inside it. A meta thermostat.
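
Here is roughly what I have in mind, in toy code. Everything in it -- the fake economy, the single "oil price" indicator, the learning rate, the one-period lag -- is invented purely for illustration; the real thing would look at the 101 indicators and a 2-year lag:

    # A crude sketch of the "meta thermostat": a control rule whose own coefficients
    # get nudged whenever past data show it has been making systematic mistakes.
    import numpy as np

    rng = np.random.default_rng(0)
    T, target = 2000, 2.0
    coeff_pi, coeff_oil, intercept = 1.5, 0.0, 3.0   # the initial, possibly wrong, rule
    learning_rate = 0.5

    pi = target
    oil_hist, err_hist = [], []
    for t in range(T):
        oil = rng.normal()                            # an observable indicator (say, oil prices)
        R = intercept + coeff_pi * (pi - target) + coeff_oil * oil
        # Toy economy: inflation responds to loose policy and to oil
        pi = target + 0.6 * (pi - target) - 0.5 * (R - 3.0) + 0.4 * oil + rng.normal(0, 0.1)
        oil_hist.append(oil)
        err_hist.append(pi - target)

        # The little econometrician inside: every 50 periods, check whether past oil
        # shocks still predict inflation misses; if so, the rule under-responded,
        # so strengthen its response. Same idea for the constant term.
        if t >= 100 and t % 50 == 0:
            x = np.array(oil_hist[-100:])
            e = np.array(err_hist[-100:])
            coeff_oil += learning_rate * np.polyfit(x, e, 1)[0]
            intercept += learning_rate * e.mean()

    print("learned response to oil: %.2f (it started at 0)" % coeff_oil)

If the rule were already right, those past correlations would be statistical garbage and the coefficients would just sit still; the adjustments only happen when there are systematic mistakes to learn from.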

Do any of you control theorists understand what the hell I'm talking about? If not, I have to keep on inventing my own wheel, my own way.

I need to learn more about that integrative/derivative/level whatever distinction (though I think I've got the gist of it).

One difference between engineering and economics is that our black boxes have expectations. This is what led to the Kydland-Prescott paper on time-consistency of optimal plans. The thing it's optimal to promise people you will do in future will not generally be optimal to do when the future arrives.

The Bank of Canada has done some interesting work on the Taylor Rule as a good-enough control system for a black box. They took the Taylor Rule, plus a few other policy rules that were optimal in a particular model, and ran a horse race to see which rule was most robust when you ran it through a dozen different models.

Alex: you lost me a bit there.

Nick:

Your idea is an Adaptive Filter. http://en.wikipedia.org/wiki/Adaptive_filter, a filter that changes its transfer function based on past performance.

As that article states, most adaptive filters are digital because you need memory in order to make them work and computer code allows sufficient adaptability to be implemented.

As Jon and Jacques said it would be nice if economists could master the vocabulary. Then the rest of us could throw in our two bits on how efficient the model really is.

A good formal Control Theory grounding gives you the intuition to know what will work, what won't, and where you need to go from here.

Okay, the disclaimer here is I don't actually any macro, I just read Scott's blog. So thinking about monetary policy in terms of interest rates has never made sense to me; I think about it in terms of OMOs, because it seems a lot clearer how those actually affect things.

So, the story is that demand for CAD rises and is not (immediately) met by increased supply by the BoC, leading to a shortfall in AD. The BoC decides to do C$X of QE over a period of a year. Expectations respond immediately, and the actual economy recovers fairly quickly because C$X happened to be roughly the right amount. If the BoC then says "oh look things are fine, I guess we don't have to do QE anymore" then clearly the economy would go right back into the recession.

Does this same logic work for interest rates?

That should be "don't actually know any macro".

Determinant: That "Adaptive Filter" idea does sound rather like what I am thinking of. Not sure that I quite get what a "filter" is, however. Is a house thermostat a (crude) filter? Would a more complicated thermostat, that also responded to other variables like windspeed and outside temperature be a "filter"?

Alex: I follow you now. Yes, roughly the same logic works for interest rates.

A thermostat is a control, but controls and filters are deeply related. They are taught side-by-side. Controls have feedback, filters generally do not. Filters remove a part of a signal, controls compare two signals and generate a feedback based on the comparison.

A thermostat is basic closed-loop control.

Both signals and filters are customarily analyzed in the frequency domain (j-omega) or the Laplace (s) domain.

Both filters and controls are based on transfer functions and getting the transfer function right based on what components you have and what you want to do is what control theory is all about.

To be really, really, really basic, a control is a filter with extra feedback and wiring.

Here you go, http://en.wikipedia.org/wiki/Adaptive_control

All about adaptive control.

Off the top of my head, the Taylor Rule is a Model Reference Adaptive Control.

See the intro to that at http://www.pages.drexel.edu/~kws23/tutorials/MRAC/MRAC.html

I love that diagram. It's delightful. Nick, you just made my day. :)

Sorry, junkie just got his fix. ;)

Determinant:

Aha! That looks like it!

"Of the top of my head, the Taylor Rule is a Model Reference Adaptive Control."

Nope. The Taylor Rule is a non-adaptive control law. It's exactly like a standard mechanical house thermostat, or an engine governor, except that it looks at 2 variables (output and inflation) rather than just the house temperature. There are two parameters in a Taylor Rule, and one is set at 1.5 and the other is set at 0.5. They don't adapt. They aren't adjusted in the light of what we learn about the system while it is being controlled, or if the system changes over time.

What I was trying to do was adaptive control. I didn't know that was what it was called. And nobody understood me, sniff! (Actually, some economists did understand what I was getting at.)

In my approach, you start with something like the Taylor Rule, then watch for systematic errors, then adjust that 1.5 and 0.5 slowly up or down depending on the systematic errors you observe. The aim is that eventually it converges on the right control law (or close to right, if the system doesn't change too quickly over time).

But the math and terminology of that Wikipedia post are beyond me, I think.

Nick: let's say you build your model. If it forecasts inflation better than the TIPS spread, why don't you start a hedge fund? Shouldn't the CB just target the TIPS spread, or do you figure they should be able to beat the market like a successful hedge fund?

K: doesn't that lead to the circularity problem?

K:

Forget the Bank of Canada for a minute. Let's think about EMH. I think EMH is neither true nor false. Traders make forecasts, and prices depend on traders' forecasts. Then next period other traders look back over the history of prices, and try to spot systematic errors in past forecasts, and if they find some they adjust their forecasting models, and the pattern of prices changes in response to those changing forecast models. And repeat, forever, while the underlying structure of the world changes, both for exogenous reasons, and because traders' forecast models are changing. EMH is more of a process than a statement about how the world is.

The Bank of Canada is similar. It targets its own forecast of 2-year-ahead inflation. And the pattern of inflation we see depends on the Bank's decision rule, which depends on its own forecasting model. My idea is that it starts out with some decision rule, then next period looks back over the history of inflation, to see if 2-year-ahead inflation could have been predicted, to try to spot systematic errors in its past decision rule, then adjusts its decision rule accordingly, then repeats.

Both can be seen as cases of adaptive learning. The second is also adaptive control.

Yes, if the TIPS spread is an efficient predictor of future inflation, and if the market learns faster than the Bank, you might argue that the Bank should just leave the learning to the market, and target the market forecast. Circularity problem aside, I'm agnostic on that question. I was just taking as given that the Bank targets its own forecast, not the market forecast.

The fundamental questions are:

1. Has the neutral rate fallen?
2. If so, what factors would explain its fall?

If everyone believes that neutral is still around 4%, then simple policy rules would dictate that the BoC should have continued with the overnight rate increases that began last year. However, if neutral is now at 2% due to foreign headwinds, then the BoC is justified in keeping the overnight rate lower for a longer period of time.

The challenge for economists is to quantify the impact of these headwinds on the neutral rate; the challenge for the Bank's communication department is to explain all this to the public without fully defining what the neutral rate is, where it was in the past, where it is today, and where it is likely to be in the future.

Nick, you're breaking my heart! You have a PhD, I have faith in your ability to master the math!

Anyway, it all begins with a little simplification. All those s equations are Laplace Transformations. Differentiation becomes multiplication by the operator s and integration becomes 1/s. Then you throw in the initial conditions and you get an algebraic equation.

PID controllers imply s^2 as the highest power of s in the equation.

Maybe next summer you could hire an Engineering grad student to do the math for Nick's Adaptive Inflation Controller (Patent Pending). Come on, there has to be at least a few papers in that.

Determinant: thanks for your faith in me! But I have to confess you lost me at "Laplace Transformations". I don't know what they are.

Greg:

1. If you believe the IS curve has shifted left, and the LRAS curve has not shifted (or shifted right), then I would say that by definition the natural rate has fallen.

2. That's like asking "what caused the recession?". At least, within the New Keynesian/Neo-Wicksellian paradigm followed by the Bank.

I agree with the rest of your comment.

*Determinant's head hits keyboard*

Nick, Laplace Transforms let you turn linear differential equations into algebraic ones and solve them using ordinary algebra and a transform table!

How could you even look at a differential equation and not think of using them? Dear me, what happened to your math education?

http://en.wikipedia.org/wiki/Laplace_transform

http://en.wikipedia.org/wiki/Laplace_transform_applied_to_differential_equations

The Laplace Transform turns integration and differentiation into division and multiplication.
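
(For the record, the transform in question is F(s) = integral from 0 to infinity of e^(-st) f(t) dt, and the rule that turns calculus into algebra is L{f'(t)} = s*F(s) - f(0): differentiate in the time domain, multiply by s in the s-domain.)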

*Determinant goes over to couch to lie down and recover.*

"Jacques pats Determinant's head with a cold towel as he remember how in 1975 first-term Calculus for economists 1 he learned Laplace Tranforms but then Laval was a math hotbed. Jacques ponders how he was a soldier once and young, or at least a physicist fresh from Mathematical Physics 2 and thought that economists had it easy as they didn't have to go through Kreyszig's Advanced Engineering Mathematics ..."
Jacques also thinks that he should stop thinking about Laplace transforms and should finish booking his trip to the U.S. Air Force Museum as even Ken Kesey likes to look at a few hundreds pieces of good engineering.

Determinant: thanks for your faith in me! But I have to confess you lost me at "Laplace Transformations". I don't know what they are.

Okay, okay but you do know the z-transform right?

In my approach, you start with something like the Taylor Rule, then watch for systematic errors, then adjust that 1.5 and 0.5 slowly up or down depending on the systematic errors you observe. The aim is that eventually it converges on the right control law (or close to right, if the system doesn't change too quickly over time).

Well... usually that's handled with the "I", the integrator. So given the premise that some coefficients in the Taylor rule will give a steady-state error of zero, you'll find it, but so would simply integrating the output and inflation gaps, so that persistent errors produce stronger policy responses.
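
(Spelled out in Taylor-rule notation, the "I" term would be something like R(t) = r* + 1.5*(inflation gap) + 0.5*(output gap) + Ki*(sum of all past inflation gaps). If your estimate of r* is wrong, that sum keeps growing, and the rate keeps moving, until the persistent error is squeezed out -- which is why the steady-state error goes to zero.)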

The gains are really selected on rather different grounds:

- Stronger gains tend to mean a faster settling time. i.e., how quickly following a disturbance does the system return to the targets

- Stronger gains eventually lead to overshoot

- Time lags (such as in computing output, even if you believe policy acts instantaneously) mean some gains (too weak) and some gains (too strong) will produce instability.

Andy Harless wrote:


Increase the coefficient on output. If you wish, in order to avoid a loss in credibility, you can also increase the coefficient on the price term by the same amount. What we have then is a more aggressive Taylor rule. It doesn’t solve the convexity problem completely, but it does assure that, when output is far from target, the central bank will take aggressive action to bring it back (unless the price level is far from target in the other direction).

Control theory intuition neatly explains why the gains in the Taylor equation cannot be too large.

It also suggests that performance would be improved with an integral term. That's again the "I", and it's what Andy's getting at, without knowing the language, when he wrote:


“Borrow” basis points from the future when there are no more basis points available today.

Interesting, this could be more effective in the economic context because of expectations.

Jacques: thanks for those memories. They give me solace. I took math A-level at school in England. And got a grade of D. But then it was the early 1970's, and a lot of other distractions were happening for a teenage boy. Then a quick one-term math course for incoming MA Economics students at UWO, where I first learned what a matrix was, and how to calculate a determinant (though I have forgotten since). I've been faking it since then.

Jon: "Okay, okay but you do know the z-transform right?"

Nope. Never heard of it. Or maybe I have, but have forgotten. And I can't understand those Wiki pages Determinant helpfully linked for me.

"Well... usually that handled with the "I", the integrator. So given the premise that some coefficients in the taylor rule will give a steady-state error of zero, you'll find it, but so would simply integrating the output and inflation gap, so that persistent errors produced stronger policy responses."

That's different. Suppose the Taylor Rule says R(t) = 5% + 1.5(deviation of inflation from target) + 0.5(deviation of output from potential). And suppose the bank is targeting its forecast of 2-year ahead inflation, trying to keep it at a constant 2%.

If you find that inflation was on average below the 2% target, when that Taylor Rule was being followed, then you know the constant term, 5%, is too high. So you slowly revise that 5% down.

Suppose instead you noticed a positive historical correlation between the output gap and 2-year ahead inflation. That is telling you that the 0.5 coefficient is too small, and should be adjusted up to 0.6, or something. (And if it's a negative correlation, that is telling you that the 0.5 coefficient is too large, and should be adjusted down to 0.4).

And if you noticed a positive correlation between current inflation and 2 year ahead inflation, that is telling you that the 1.5 coefficient is too small, and should be adjusted up.

And if you noticed a positive/negative historical correlation between X and 2 year ahead inflation, that means you should add X to the Taylor Rule, with a small positive/negative coefficient. (Assuming the Bank observes X).

If anyone is *really* into this, here's my old stuff:

economics.ca/2003/papers/0266.pdf

My point was slightly different, and it comes down to this: there are simpler methods in common use.

I agree that what you want is different. What I want to know is why you think it's better (which it may well be).

Jon: OK. Let me first try to understand better your "simpler methods in common use".

In order to handle the case where the constant term is too big (the 5% should be 4%, so we get inflation on average below target), I can think of two different versions of that "simpler method":

1. Change the Taylor Rule to: R(t) = 5% + b[R(t-1)-5%] + 1.5(inflation gap) + 0.5(output gap)

2. Change the Taylor Rule to R(t) = 1.5(inflation gap) + 0.5(output gap) + b(integral of inflation gap over time) (b less than or maybe equal to one)

1 is what the central bank seems to do in practice. It has been rationalised as "interest rate smoothing", but I see it more as a way for the Bank to adapt towards a natural rate that is moving over time in some sort of persistent way.

2 seems to me to end up in price level targeting, which might be a good policy, but if what you are trying to do is inflation targeting, this doesn't seem the right solution.

Plus, neither of those two methods seems to deal with the case (the one I concentrate on) where the coefficients 1.5 or 0.5 are wrong, so the Bank is over-reacting or under-reacting to fluctuations in inflation or output.

The scale factor on the integral term can be above one, just as the constants on the proportional terms need not sum to 2 or 1 or any other value you may pick.

Second whether this behaves as price level targeting will depend on the magnitude of the proportional and integral terms. Indeed, I'll claim that price level targeting is equivalent to omitting the proportional terms.

What this means is, in part, that level vs rate targeting is a false dichotomy.

The integral term will always drive the steady-state error to zero. Now I suspect that's not what you mean by over- and under-reacting.

Perhaps what you have in mind are the two ideas I mentioned before namely how quickly the policy returns us to equilibrium after a disturbance and whether it does so with overshoots.

So which of those ideas do you have in mind?

Jon: Let me try to formalise my ideas a bit more.

Let P(t) be inflation, R(t) the policy rate chosen by the Bank, and I(t) the Bank's information set at time t. Assume perfect memory by the Bank, so I(t) includes I(t-1) etc. I(t) also includes R(t). (I.e. the Bank knows what policy rate it is setting, and has set in the past.)

The Bank has a reaction function F(.) that sets R(t) as a function of I(t): R(t) = F(I(t)).

The Bank is targeting 24 month ahead inflation. It wants inflation to return to the 2% target 24 months ahead, and stay there. (It fears there will be bad consequences if it tries to get inflation back to target more quickly than 24 months ahead). But there will be shocks that it can't anticipate, so it knows that future inflation will fluctuate. Instead, the job is to choose some function F(.) such that:

E(P(t+24) conditional on I(t) and F(.)) = 2%

(And E(P(t+25) conditional on I(t) and F(.)) = 2% etc.)

Suppose that the Bank has been using some reaction function F(.) for the last 20 years. Looking back on the last 20 years of data, we estimate a linear regression of P(t)-2% on I(t-24). We exclude R(t-24) from the RHS of that regression, because leaving R(t-24) in would cause perfect multicollinearity between R(t-24) and the other elements of I(t-24). If the Bank has chosen F(.) correctly, the estimates from that regression should be statistical garbage. All the parameters -- the constant and all the slope terms -- should be insignificantly different from zero.

If that linear regression is not garbage -- if some of the parameters are non-zero -- we know the Bank has made systematic mistakes over the past 20 years.

If the constant is non-zero, we know the constant term in F(.) is wrong. If the slope on some variable X within I(t-24) is non-zero, we know the coefficient on X in F(.) is wrong. The Bank either over-reacted (the coefficient on X was too large) or under-reacted (the coefficient on X was too small) to changes in X.

We could also estimate a regression of P(t)-2% on R(t-24). That regression should also be garbage, if the Bank has chosen F(.) correctly. If the estimated slope coefficient in that equation is non-zero, that tells us whether the Bank overreacted or underreacted to fluctuations in everything else in I(t).
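
In toy code, that diagnostic would look something like the following. The data-generating process is invented, purely to have a world in which the Bank's rule systematically under-responds to a persistent oil price:

    # Regress the future inflation gap on elements of the Bank's lagged information
    # set (excluding the lagged policy rate itself) and see whether anything is non-zero.
    import numpy as np

    rng = np.random.default_rng(1)
    T, lag = 600, 24
    oil = np.zeros(T)
    pi_gap = np.zeros(T)       # P(t) - 2%
    for t in range(1, T):
        oil[t] = 0.9 * oil[t-1] + rng.normal(0, 0.5)
        pi_gap[t] = 0.8 * pi_gap[t-1] + 0.2 * oil[t-1] + rng.normal(0, 0.05)

    # Regress P(t)-2% on I(t-24): here just the lagged inflation gap and lagged oil.
    y = pi_gap[lag:]
    X = np.column_stack([np.ones(T - lag), pi_gap[:-lag], oil[:-lag]])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    print("constant, lagged inflation gap, lagged oil:", np.round(beta, 3))
    # If F(.) had been chosen correctly, all of these would be statistical garbage (about 0).
    # A clearly non-zero slope on lagged oil is the signature of a systematic mistake
    # that the reaction function could be adjusted to remove.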


Yes... I understand your view here.

But, suppose: it's not generally possible for a causal system, in the presence of lagged information, to exhibit the property you mention, no matter how much you tune those constants.

I assert that the reason this must be so is that, following a disturbance, you would have to react exactly strongly enough to reject the disturbance in zero time and then go slack. A time lag acts as a filter--events of higher frequency than the lag are not visible. So you cannot have the step response that would be necessary for there to be no systematic mistakes--since the Fourier transform of such a step response has frequency components higher than the lag.

So does your scheme have a unique solution at least? (or is there a family of solutions which achieves some lowest possible correlation--I think this more likely)
