
Comments


"I don't understand you, therefore you must be stupid." That is how you are thinking right now, and it is the reason everyone is ignoring you.

That, and the fact that what you are saying is irrelevant to their point. Nobody is claiming that one measure of inflation is more biased than another. Look at the Y-axis of that Atlanta graph; it is RMS error, not bias. The variance of a forecasting measure is just as important as any supposed bias; I could forecast inflation with sunspots, and might not have any bias, but so what?

And that isn't even the main point, which has to do with the X-axis. Its units are the filtration (information set) used to make inflation forecasts, and what the curves show is that inflation is not a Markov process; you make better forecasts by considering more history. That is true regardless of the measure used to make forecasts, but it is much more true for headline than for core inflation. Thus, if you do want to make forecasts based on a restricted history, you should use core.

A couple of points:

1. The Fed does have a dual mandate, so something that is correlated with future inflation might be something that is correlated with the other part of the Fed's mandate and therefore causes the Fed to deviate intentionally from its inflation target. In particular, I think the Fed took an opportunistic attitude toward inflation during most of the period being studied: in order to avoid producing unnecessary and undesirable effects on employment, it avoided aggressively pursuing its target and instead only resisted moves away from the target, while allowing random events to move inflation toward the target. Correlations with future inflation could represent the influence of such random events rather than failures of policy or changes in policy.

2. A central bank doesn't have much control over the inflation rate at very short horizons, and it's debatable whether it has complete control even at fairly long horizons. Something that's correlated with future inflation might be correlated with the uncontrollable part. In particular, the predictive power of core inflation is interesting in that it may reflect something that the Fed can't control at a short horizon but may be able to control at a longer horizon. (A 3-year inflation rate is going to include maybe a year of inflation over which the Fed has little control, averaged in with the later inflation which it arguably does control, but since its target is a rate rather than a level, the first year will bring up the average if the next 2 years are on target.) This could be useful to the Fed as an indicator of when it needs to tighten (or loosen).

This is an immediate implication of rational expectations

It's interesting to contemplate the mindset in which this statement is equivalent to, "This is obviously true."

Phil: I don't think they are stupid. I just think they haven't read my posts (almost certainly true). So I'm standing up, waving my arms around, and talking loudly to try to attract their attention!

It's not about bias. It's about R-squared (or any other measure of forecasting power). The R-squared of deviations of target variable from target on anything in the bank's information set should be zero, if the bank is doing it right. Sunspots should be exactly as good as inflation, core inflation, or anything else. They should all look useless as forecasters.

It's not about history. Unless the bank forgets stuff, its information set at time t should include its information set at time t-1, t-2, etc. No matter how much history you include in the information set, it should all be useless, both individually and collectively, in forecasting inflation, at the target horizon.
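Nick's orthogonality condition is easy to check in a toy simulation (a hypothetical economy with made-up numbers, not anyone's actual model: a bank that can offset a fraction `response` of each shock it observes one period before the shock hits inflation). When the bank exhausts its information (response = 1), deviations from the 2% target are uncorrelated with the lagged information set; when it under-reacts, the correlation reappears:

```python
import random

random.seed(42)

def simulate(response, T=20000):
    """A bank that offsets a fraction `response` of each observable
    shock one period before the shock would hit inflation."""
    infl, shocks = [], []
    for t in range(T):
        s = random.gauss(0, 1)            # shock the bank observes at t
        shocks.append(s)
        # whatever the bank failed to offset shows up in inflation at t
        carry = shocks[t - 1] * (1 - response) if t > 0 else 0.0
        noise = random.gauss(0, 1)        # unforecastable at t-1
        infl.append(2.0 + carry + noise)
    return infl, shocks

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

for r in (0.0, 0.5, 1.0):
    infl, shocks = simulate(r)
    dev = [p - 2.0 for p in infl[1:]]     # deviation from 2% target
    print(r, round(corr(dev, shocks[:-1]), 3))  # near 0.71, 0.45, 0.00
```

The point of the sketch is Nick's: a zero correlation is what success looks like, so it cannot be read as evidence that the indicator is useless.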

Andy:

1. Yes, the dual mandate might complicate things. Especially if the target horizon for unemployment were different from the target horizon for inflation. (Otherwise you could just make the target a weighted average of the squared(?) deviations from target.) But even with a dual mandate, whether we would expect a positive, negative, or zero correlation between current (core or total) and future inflation would depend on the model.

2. Yes, we shouldn't be surprised if there's a positive correlation between current and future inflation at very short horizons. But here is how I would test this for Canada: the Bank of Canada says it is targeting year-over-year inflation at a 2-year horizon. So there should be zero correlation between the latest CPI data and the change in the CPI between 12 and 24 months from now.

JW: It is obviously true that central banks *ought* to make rational forecasts when choosing monetary policy. And that's what I am saying. Whether they do *in fact* make rational forecasts is something I want to test, and I use this method to test. If they do in fact violate rational expectations, I want to advise them of that fact, so they can improve their monetary policy.

Krugman weighs in: http://krugman.blogs.nytimes.com/2011/06/02/core-madness-wonkish/

Sorry Nick, I'm not getting it. So say Altig et al don't understand the subtlety of what they're doing by measuring inflation. Say they set money supply and lending rates based on whatever they measure and this line of action is flawed. What actually results from this?

In other words, yeah they might not "get it" but so what? What specifically are they doing wrong right now in terms of setting monetary policy, and what are the reasons and portents they have it wrong?

jesse: I don't know (without running some regressions) whether the Fed has been getting it right, or whether the Fed should be looking at core, total, or a bit of both. I do know that the tests they are using for whether the Fed should look at core, total, or both, are wrong. For example, if core was a very useful thing for the Fed to look at, and if the Fed was looking at core, and responding correctly to core, their test would say that core was useless, because core would fail to forecast future inflation.

Damn. Maybe I'm still not saying it clearly enough!

I haven't actually worked through all this, but it seems to me as though this was the whole point of the rational expectations econometrics literature. If people are incorporating all information, errors will be orthogonal to the available information set. And here, all deviations of inflation away from target would be considered to be errors - and therefore uncorrelated with anything we observe now. Including, presumably, current core inflation. Econometricians could - in principle - use that orthogonality condition as a fulcrum to estimate a certain class of models.

Is that a restatement of your point, Nick?

Let's say that the Fed doesn't hit its target very closely over the relevant horizon. Either policy isn't very effective, or they aren't setting policy the way they say (or you think) they are. Either way, most of the variation in inflation is exogenous to policy. Then the method you're criticizing would be fine, right?

So what makes you think that policy is sufficiently targeted, and sufficiently effective, over the relevant horizon, to exhaust the information content of past (core or whatever) inflation?

If the Fed was making systematic mistakes, then you'd get the correlation. But the absence of the correlation could simply be that those errors aren't sufficiently strong/systematic to identify. And the issue appears to be an absence of correlation.

Stephen; "Is that a restatement of your point, Nick?" Yes. That's a relief; Stephen understands me.

JW: suppose there were some potentially useful indicator X, that we knew the Fed was either unaware of, or for some reason was ignoring. Then we could indeed test whether X was a useful indicator by the simple method of seeing if X forecasts future inflation. But we know that the Fed is aware of core, and we think the Fed responds to core, so this simple test won't work. If we did find that core nevertheless forecasted future inflation, that would mean that the Fed wasn't responding to core strongly enough. But if core failed to forecast future inflation, that wouldn't mean core was useless as an indicator. It might instead mean that the Fed is already exhausting all the information content in core.

Huh. Well then, I don't see why this is so hard a point to get across. It seems that it should be perfectly straightforward. To those of us of a certain age, that is.

Maybe I'm missing it, but the crux of Bryan's post is whether or not headline inflation, even if elevated for a year, can be a portent of runaway or elevated inflation in the medium term. That is, if someone like an OECD boffin points to headline inflation being high for a year and argues convincingly we should begin to get really, really worried that this is a permanent trend, Bryan is stating that, in fact no, the core inflation measure will still give you a better indicator of inflation in 2 years' time. This is more a statement, timely stated I'm sure, that policymakers should put less weight on headline inflation than core when setting monetary policy because, at least insofar as past data can be used as future predictors, the core is still the right thing to track when trying to accurately predict averaged headline (and not necessarily core) inflation.

What I take away from his post is that measuring core is better than trying to second-guess headline inflation, even after a year of elevated headline inflation numbers and immense political pressure to raise rates.

Jesse: and I'm saying that is the wrong message to take away.

For example, suppose you found that headline inflation did not forecast future inflation, and that core did forecast future inflation. The message is not that the Fed should ignore headline and look at core instead. The message is that the Fed is already responding correctly to headline, by exhausting the information content of headline, but, at the margin, the Fed should respond more to core.

And if both headline and core inflation forecasted future inflation, at the targeting horizon, the Fed should respond more strongly to both.

(Unfortunately, they ran the regressions from 1985 to present. So you can't really make any such judgements, because the target was presumably falling in the late 1980's to early 1990's. This method only really works if you look at data during a time when the target rate of inflation stayed constant.)

Nick, you are making some incredibly unrealistic assumptions about the ability of the central bank to control inflation -- basically you are saying that the central bank can instantly control inflation with a random noise error term that is not serially correlated.

Let x_n = r_n - b_n, where r_n is the "natural" rate at time n, and b_n is the short nominal rate at time n. x_n would be the amount of steering. Suppose it takes time for the intervention to propagate:

Let I(n,k) = effect of steering by x_n on the inflation rate in period n+k:

(1/2)x_n in period n
x_n in period n+1
2x_n in period n+2
x_n in period n+3
(1/2)x_n in period n+4
0 thereafter

And suppose that the total inflation rate in period n is the sum of the influence of the current steering plus all the past steerings:

p_n = (1/2)x_n + x_{n-1} + 2x_{n-2} + x_{n-3} + (1/2)x_{n-4} + white noise

Now suppose that the objective of the CB is to minimize the running average deviation of inflation from 2%.

And assuming the CB cannot predict the natural rate but merely responds to observed changes in this rate, then because there are lags in the effect of the steering, the errors will propagate for 5 periods, with inflation below target for that time, even if the CB is doing the best job it can.

In that case, you will be able to predict the future deviation from trend based on the past deviation from trend.

Now suppose that the natural rate cannot be observed, but other variables, {Z_i} can be observed -- say output or employment:

x_n = b_n - f({Z_i})

If, in all cases, the CB does not try to predict the future variable but merely responds to changes in current employment, etc., then because of the lags it will appear that Z_i is a good predictor of future inflation. And that would be a _criterion_ for deciding whether Z_i should be in the CB's information set.

The Atlanta Fed is doing the correct analysis.
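RSJ's lag structure can be simulated directly (a toy sketch with made-up shocks, not a calibrated model: the steering errors x_n are white noise the bank cannot anticipate, and inflation is the five-period kernel applied to them, as in the p_n equation above). Even with the bank doing the best it can each period, the lag kernel makes deviations from target serially correlated:

```python
import random

random.seed(1)

# effect of steering x_n on inflation in periods n..n+4
KERNEL = [0.5, 1.0, 2.0, 1.0, 0.5]

def simulate(T=20000):
    """Steering errors x_n are white noise (the bank only reacts to
    observed changes in the natural rate, so each innovation slips
    through), but inflation deviations inherit the lag kernel."""
    x = [random.gauss(0, 1) for _ in range(T)]
    p = []
    for n in range(T):
        steer = sum(KERNEL[k] * x[n - k] for k in range(5) if n - k >= 0)
        p.append(steer + random.gauss(0, 1))   # white-noise measurement term
    return p

def acf1(series):
    """First-order autocorrelation."""
    n = len(series)
    m = sum(series) / n
    num = sum((series[t] - m) * (series[t - 1] - m) for t in range(1, n))
    den = sum((s - m) ** 2 for s in series)
    return num / den

print(round(acf1(simulate()), 3))   # positive: about 0.67 in theory for this kernel
```

So serial correlation in the deviations, by itself, does not settle whether the bank is making systematic mistakes; it can come from the endogenous lags RSJ describes.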

OK I'm going to go away and think about it. I'm familiar with a more scientific focus so this comes across to me as a control systems problem, with medium-term averaged headline inflation as the entity being controlled (I think). The setpoint of the controller is the inflation target. The question is what measures and estimators to use to make the system well controlled. Since their targeting policies have a lagging effect, they need the ability to estimate future inflation to determine the "effort" needed to maintain zero error, but also need to be aware their feedback measurements are not perfect. The question, posed differently, is should they be changing their control system, including their measures, to further reduce future error. This is a more difficult problem to analyze when the control loop cannot be "broken" and analysed open-loop.

So yes I agree the control loop is closed and this has implications for measuring the data. And I'll think about it.

RSJ: "Nick, you are making some incredibly unrealistic assumptions about the ability of the central bank to control inflation -- basically you are saying that the central bank can instantly control inflation with a random noise error term that is not serially correlated."

No, I'm not saying that.

The Bank of Canada (which is explicit about its inflation target) says that it targets the year over year inflation rate at an 18 month to 2 year horizon. It is certainly not instant control. It is not even trying for instant control.

And, if it targets inflation at a (say) 24 month horizon, and has rational expectations, the error process should be an MA(23) process (using monthly data). None of this affects my statement that deviations of inflation from target should be orthogonal to the 24 month lagged information set, if the Bank is doing what it says it is doing.

jesse: I think that's exactly the right way to think about it.

Nick,

No, the window over which the bank *targets* inflation is not the same window over which the steering terms have an effect.

The former is a policy variable whereas the latter is an endogenous variable.

The serial correlation of inflation will be determined by the endogenous window, not the policy window. Rational expectations is not some wand you can wave to make everything a Markov process. If there are serial correlations in the endogenous variables, then there will be serial correlations in the error term, provided that the steering takes effect with a lag.

RSJ: the Bank of Canada is saying, in effect "Given our monetary policy instrument settings, and given the information we have available, we expect the inflation rate 24 months from now will equal 2%". I am taking them at their word, and invoking the absolutely standard test to see if their expectations are rational -- orthogonality of their forecast errors to their information set 24 months prior.

jesse: what I am saying here ought to be generalisable to any closed loop control system. You have a target variable (the room temperature) and a target (20C), and an instrument (the lever on the furnace) and a set of indicators (which would include the room temperature itself, the outside temperature, the windspeed, and lagged values of those same indicators), and a targeting horizon (you want to bring the room temperature to 20C in 5 minutes). There is an unknown optimal reaction function which sets the instrument as a function of the indicators. We know we have found the optimal reaction function when the room temperature cannot be forecast from the 5 minute lagged indicators and instrument. And if we observe a non-zero correlation between room temperature and lagged indicators or instrument, that tells us in which direction we need to adjust the reaction function to grope towards optimality.
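Here is a minimal sketch of that thermostat example (hypothetical numbers throughout: the room leaks heat toward the outside temperature at rate c, and the reaction function uses a gain that may or may not equal c). With the right gain, the resulting temperature deviations are uncorrelated with the outside reading the controller responded to; with too low a gain, a positive correlation appears and tells you which way to adjust the reaction function:

```python
import random

random.seed(3)

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

def run(gain, c=0.3, steps=20000):
    """Room loses heat at rate c to the outside; the reaction function
    uses `gain` where the optimal reaction would use c.  Returns the
    correlation between temperature deviations and the outside
    temperature the controller reacted to."""
    temp = 20.0
    outs, devs = [], []
    for _ in range(steps):
        out = random.gauss(5, 3)                      # observable indicator
        heat = (20 - temp) + gain * (temp - out)      # reaction function
        temp = temp + heat - c * (temp - out) + random.gauss(0, 0.2)
        outs.append(out)
        devs.append(temp - 20)                        # deviation from target
    return corr(devs, outs)

print(round(run(0.3), 3))   # optimal gain: correlation near zero
print(round(run(0.1), 3))   # under-reacting: strongly positive correlation
```

The sign of the non-zero correlation is the groping signal: a positive correlation with the outside reading says the controller is responding too weakly to it.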

"'Given our monetary policy instrument settings, and given the information we have available, we expect the inflation rate 24 months from now will equal 2%.' I am taking them at their word,"

OK, the CB can announce whatever policy it wants, but that doesn't affect the endogenous lags, right?

I interpret these policy windows as measures that define the errors, not guarantees that the errors are not serially correlated. The CB cannot make such guarantees.

For example, a policy to minimize the variance between the moving average of inflation over a 2 year period and 2% will give a different measure of error than a policy to minimize the variance between the monthly average of inflation and 2%.

Nevertheless, neither of these two policies can promise that the errors are not serially correlated. One is a policy statement and the other is an economic statement about how changes to short rates propagate through to the rest of the economy. No CB policy position can affect that.

RSJ: The Bank of Canada targets inflation 2 years in the future. It does not target the average inflation rate between now and 2 years in the future. If it tries to target inflation at too short a horizon then it will run into problems: either excessive variance in employment, or "instrument instability" (where it has explosive oscillations in the instrument). But it says it can target inflation 2 years in the future. And yes, the errors will be serially correlated, but only up to a 23-month MA process. That means the errors are still unforecastable at 24 months. MA and AR are different.
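The MA-versus-AR point can be verified with a generic moving-average process (nothing Bank-of-Canada-specific here: an MA(23) built from equal-weighted shocks, chosen for illustration). Its errors are serially correlated inside the 24-period window, yet completely unforecastable at lag 24 and beyond:

```python
import random

random.seed(7)

def ma_process(q, T=50000):
    """MA(q): each error is a moving average of the last q+1 shocks,
    so it is correlated with errors up to q periods back and
    uncorrelated with anything older."""
    e = [random.gauss(0, 1) for _ in range(T + q)]
    return [sum(e[t + j] for j in range(q + 1)) for t in range(T)]

def acf(series, lag):
    """Autocorrelation at a given lag."""
    n = len(series)
    m = sum(series) / n
    num = sum((series[t] - m) * (series[t - lag] - m) for t in range(lag, n))
    den = sum((s - m) ** 2 for s in series)
    return num / den

errs = ma_process(23)
print(round(acf(errs, 12), 2))   # inside the horizon: correlated (about 0.5)
print(round(acf(errs, 24), 2))   # at the 24-month horizon: ~0
```

So serially correlated errors within the horizon are fully compatible with the orthogonality condition holding at the 24-month targeting horizon.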

jesse (again): think of it as a *meta* thermostat, which adjusts for systematic errors in its own reaction function, by watching for past correlations between target and indicators.

Nick: if the bank were "doing it right" every move would incorporate all available information. So you wouldn't be able to predict the next move from the previous one. Obviously that isn't the case. I.e. they don't even pretend to be doing their best. Instead they grossly underreact (it's a feature of agency and committee incentives that the cost of overreacting is much higher than the cost of underreacting), which is why the current deviation from target is a good predictor of the future deviation. Rating agencies have the same funny behaviour: if the current rating were their best estimate, how come the last rating change predicts the direction of the next one?

So assume they'll do little bits here or there and definitely try to do something if things really get out of hand. For the most part, assume they aren't doing much. The evidence says it's true.

"And yes, the errors will be serially correlated but only up to a 23 month MA process. That means the errors are still unforecastable at 24 months. "

But they *are* forecastable. How do you explain that?

Or, let me turn this around:

Suppose you are deciding whether or not to take the CB at their word. What statistical test would you perform?

Nick,

you've ignored the most important point, made by Andy Harless. He said " it's debatable whether it has complete control even at fairly long horizons".

I'll go one further, in an open economy it is possible for a central bank to have no control over its inflation rate (even if that CB is not trying to maintain an exchange rate peg).

Here's the example:

Imagine that there are only two currencies, the domestic currency called dollars and the foreign currency called globos. The idea here is that the entire world outside the domestic country has formed a currency union.

Further, suppose that goods trade is entirely frictionless. No shipping costs or anything to prevent arbitrage of different prices. Thus PPP holds perfectly all the time.

Finally suppose there is absolutely no capital movement between the domestic country and the rest of the world because while the rest of the world is allowed to hold dollars the domestic residents are forbidden from holding globos. Assume that somehow the capital controls are 100% effective.

Now suppose that the rest of the world pegs the nominal globo/dollar exchange rate unilaterally. Suppose that this peg is 100% effective (possible if the capital controls are 100% effective).

Now suppose the rest of the world chooses an inflation rate of -10% per year. That is, they choose 10% deflation.

PPP and the nominally fixed exchange rate mean that the domestic country also gets 10% deflation no matter what the CB does.

If the domestic country is unwilling to either put in their own controls on capital (which then prevents trade in goods since the foreigners can't hold dollars) or somehow interferes in the goods market in another way (to break PPP) then they will have no control over their inflation rate at all despite having complete control of the domestic money supply.

I should probably elaborate a bit more on how the example is supposed to work.

It might be wondered how the domestic central bank can print unlimited amounts of dollars without causing inflation somewhere, perhaps not domestically but in the foreign economies.

The mechanism would work as follows:

If domestic prices failed to fall in line with foreign prices then dollars would flow out of the domestic economy. Maintaining the exchange rate peg then forces the foreign authorities to absorb the dollars.

Now, to do this they may well have to issue globos and buy the dollars so how do they maintain their deflation? Well, they then have to tighten monetary policy some other way, higher interest rates or reserve requirements (maybe eventually exceeding 100%) which takes the globos back out of circulation.

The net effect of this sequence of actions is that the dollars have effectively been confiscated. That's why I said they "may" have to issue globos to buy the dollars. They might just directly confiscate the dollars.

Either way, any time the domestic price level tries to move above the foreign price level the goods market arbitrage that enforces PPP will cause all the extra dollars to flow out of the domestic economy where they are simply confiscated and taken out of circulation by the foreign authorities.

The domestic central bank is then powerless to change the domestic rate of inflation through monetary policy.

Adam: By chance, I had just finished reading your blog post before checking in here. (Nice clean thought-experiment, by the way, which really clarifies things. You just need to add a vacuum cleaner in the foreign country to make it really clean. Ugh! That pun was not intended!)

In your thought-experiment though, the whole question of whether the central bank should look at core vs total becomes moot. It doesn't matter what it looks at, because it can't do anything. So the inflation target cannot be credible, whatever it does.

yes, but of course the real world is somewhere in the middle. The point was that the CB may have a lot, but not complete, control of inflation. This then admits the possibility of them doing everything right, as in doing the best they can, and still having forecastable errors.

Adam: thinking through your thought experiment some more, I come up with the "What happens when an unstoppable force meets an immovable object?" question. See my comments on your blog.

Adam: But in the real world, we're talking about a CB that's missing by a percent or two. If they have any control at all, they have the power to affect inflation by a percent over a three year horizon. If they move rates by, say, 10% or so, I'm pretty sure they'll even outdo themselves. Instead, they always underperform. And anyways, it's not a constant bias; it's a failure to react to foreseeable changes in both directions.

Or maybe they underperform because they are targeting the asymptote, and the positive serial autocorrelation is just the residual of shocks that haven't fully dissipated. If the process has some second order behavior (inertia), they'd have to overshoot to get zero autocorrelation at some particular interval.

Nick,

This reminds me of your monetary policy as a thermostat post. I think I get it, but just to be sure let me throw at you the following scenario. Say the central bank is only using core inflation as its indicator variable to which it responds in order to hit a 2% headline inflation target two years from now. If the central bank is doing its job well, then core inflation should not be correlated with future headline inflation. Core inflation, however, should be negatively correlated with the instrument of monetary policy. That is, because the central bank is doing its job so well, it must be systematically responding to changes in core inflation so that there will be no deviation from its headline inflation target.

Does that sound right? Thanks.

So in a perfect world, the Fed (and Altig) would be looking at an indicator that doesn't work, but in the world we have, the Fed is looking at a useful indicator. So the issue we ought to care about is whether the Fed accurately understands the imperfections, if any, with which it must deal.

Altig's charts suggest that the imperfections we have make core CPI a good guide for policy.

"jesse: what I am saying here ought to be generalisable to any closed loop control system. You have a target variable (the room temperature) and a target (20C), and an instrument (the lever on the furnace) and a set of indicators (which would include the room temperature itself, the outside temperature, the windspeed, and lagged values of those same indicators), and a targeting horizon (you want to bring the room temperature to 20C in 5 minutes). There is an unknown optimal reaction function which sets the instrument as a function of the indicators. We know we have found the optimal reaction function when the room temperature cannot be forecast from the 5 minute lagged indicators and instrument. And if we observe a non-zero correlation between room temperature and lagged indicators or instrument, that tells us in which direction we need to adjust the reaction function to grope towards optimality."

Oh dear Nick. No. Really, no.

Let me put my Engineering hat on and go to work.

Closed-loop systems feed the output back into the input. With negative feedback, the control makes the output convergent to the input, with some error. How well the control actually achieves this is a property of how the control is constructed, whether it is first or second order, system tolerances, etc.

The actual input to the control is the error after the feedback reference, not the raw inflation input itself.

The key is that we can observe both the output and the input to see how well the control is functioning. Error in this case is simply the opposite of efficiency.

We live in the real world, error is always present and efficiency is always less than 100%.

Furthermore, core inflation is not orthogonal to total inflation. Since core inflation is a subset of total inflation, it is subject to the same control transformation as the other total inflation components. Now, if we deconstruct the Fed's inflation control mechanism and compare a core-to-core system, we should get a similar function with different proportionality. If we think of the Fed as using a second-order Proportional-Integral-Differential controller, the Proportional term will change but the ID parts should be the same.

What the Fed is doing is measuring the error and efficiency of their target with respect to core inflation. Fine. I would quibble with the fact that the input is core and the output is total, which is only a partial match. They ought to be comparing apples to apples, so core to core, or total to total, unless they have ample justification for their choice, which should be rigorously documented.
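For concreteness, a discrete PID controller is only a few lines (textbook form, with made-up gains and a made-up first-order plant; this illustrates the terminology, not a claim about how the Fed computes anything):

```python
class PID:
    """Textbook discrete proportional-integral-derivative controller."""

    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_err = None

    def step(self, measurement, dt=1.0):
        err = self.setpoint - measurement
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv


def run(steps=2000):
    """Hypothetical first-order plant: a room starting at 5 degrees
    that leaks heat toward 5 degrees, driven toward a 20-degree
    setpoint by the controller output u."""
    pid = PID(kp=1.0, ki=0.1, kd=0.5, setpoint=20.0)
    temp = 5.0
    for _ in range(steps):
        u = pid.step(temp)
        temp += 0.1 * u - 0.05 * (temp - 5.0)
    return temp


print(round(run(), 2))   # settles near the 20-degree setpoint
```

The integral term is what removes the steady-state error here: with proportional control alone, the room would settle below the setpoint because a non-zero error is needed to generate enough heat.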

"We know we have found the optimal reaction function when the room temperature cannot be forecast from the 5 minute lagged indicators and instrument. And if we observe a non-zero correlation between room temperature and lagged indicators or instrument, that tells us in which direction we need to adjust the reaction function to grope towards optimality."

I disagree it necessarily "tells us" much. It MAY tell us something but there is also a chance that we are reacting to noise, which would make the system inherently less stable. The heating a room analogy is a good one and I'm not stating this because I have some engineering background. The analogy is that we are heating a room with a big furnace but the furnace applies heat unequally to different parts of the room. Our one "core" thermostat on the inner wall measures the temperature as (say) 20C but right beside the heating vent by the "fringes" it will be higher than 20C. My old gran sitting beside the vent thinks it's too hot and asks me to turn down the heat. But my thermostat still reads 20C so... what to do to appease my old gran? The question to ask is whether I should add a couple of extra thermostats in other parts of the house to get a better idea of what's going on, and that would adjust the effective setpoint, putting slightly less weight on my "core" reading and more weight on my "fringe" readings.

The bank is saying, well, yes people will get hot but we know based on the irrevocable laws of thermodynamics that if the fringes stay hot for a super long time, this will eventually start showing up in the core and the control effort from the furnace will commensurately decrease. When this happens they expect the fringes to cool down faster (because they're closer to the exterior walls) and the overall average to be maintained. In other words, fringe temperatures are inherently more volatile. So the question in my mind is, if they do start considering the fringes, will this make the system more or less controllable? If it turns out that the fringes self-correct because someone opens a window or puts their foot on the vent, or that the sensor is in a high airflow region and produces many false positives I would argue it's not. And the bank is effectively stating this: only having one thermostat is fine because historically the fringes will take care of themselves. Maybe they can do better by measuring other things too but, historically, they don't have to in order to fulfill one of their mandates.

And Determinant (LOL) is right that there should be error in the system, both due to noise (in fact noise is often required to keep a system stable), and due to actual price movements. According to control systems theory, if inflation is on the move there must be non-zero "tracking error" (the difference between setpoint and measured) and this error will generally increase as the second derivative of CPI (i.e. the change in the inflation rate) increases.

Thanks for your patience. Hopefully some "orthogonal" thought helps gel some ideas for us.

Another point is that while adding more "fringe" weight to adjusting monetary policy may increase controllability, it also means the control effort may need to be adjusted more frequently. This has some broader implications and unknown consequences. If the Fed is seen as adjusting things quickly and in increments that are either too large or too spaced out, they may cause more problems than they solve. This is analogous to using a digital controller. Imagine we are controlling a furnace but can only adjust the temperature in increments of 0.25 degrees and only every 3 months. If that's the case, there is more chance the system will end up less stable because of quantization errors. So there is some impetus to only track slowly changing events that have proven to be controllable given the time between updates and the minimum step size. On this front we're in some ways lucky that most of the economy pretty much takes care of itself because the alternative could mean more frequent policy changes. But that's another topic.

Does anybody know how to turn off italics?

David Beckworth: Yes, if core is the only indicator the Fed follows, and if it is responding to core correctly, there should be a negative correlation between indicator (core) and instrument, and a zero correlation between headline inflation and (lagged) core, and also a zero correlation between headline inflation and (lagged) instrument.

Determinant: "We live in the real world, error is always present and efficiency is always less than 100%."

Obviously. Inflation will not stay exactly at 2%, because the Bank does not have a crystal ball. But if it is responding optimally to the (imperfect) information that it does have, those fluctuations in inflation will be orthogonal to that (lagged) information set.

You lost me on the rest. I think you are forgetting the lag/targeting horizon. The BoC targets *2 year ahead* inflation.

jesse: "I disagree it necessarily "tells us" much. It MAY tell us something but there is also a chance that we are reacting to noise, which would make the system inherently less stable."

Agreed. The zero correlation between target variable and one particular element in the information set is a necessary, not a sufficient condition. But remember that the instrument is also part of the information set. If the instrument were reacting to irrelevant noise, there would be a non-zero correlation between the target and the 5 minute lagged instrument, which also violates the orthogonality condition. Set monetary policy to the roll of a die, for example, and there will be a non-zero correlation between the die and future inflation, and between the monetary policy instrument and future inflation. Which tells us that monetary policy is responding too strongly to the die.
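Nick's die example is easy to check numerically. A sketch (the coefficient and noise scale are invented): set the instrument by a die roll, let inflation respond to the lagged instrument, and the correlation between the lagged die and future inflation comes out far from zero.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 20_000

die = rng.integers(1, 7, size=n).astype(float)  # "policy" set by a die roll
s = rng.normal(0, 1, size=n)                    # unforecastable shock

# inflation responds (negatively, say) to last period's instrument
p = np.empty(n)
p[0] = s[0]
p[1:] = -0.5 * die[:-1] + s[1:]

# the lagged die (== lagged instrument) is clearly correlated with
# future inflation: policy is responding too strongly to irrelevant noise
corr = float(np.corrcoef(die[:-1], p[1:])[0, 1])
```

The non-zero correlation between the lagged instrument and the target is exactly the violated orthogonality condition described above.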

In the example with your gran, you are mixing 2 separate questions: what is the target variable; and what is the (set of) indicators we use to help us control that target variable.

(I'm not spending as much time as I would like responding to comments. All 4 of us WCI bloggers are at the CEA meetings today, tomorrow, and Sunday. So that's all for now.)

No Nick, I'm invoking basic control theory to say that your original hypothesis is incorrect. Looking at it through this lens provides a vastly different perspective.

I'm throwing rational expectations out the window and rephrasing the question. Nobody is perfect; we work with what we have. Rational expectations is an unrealistic assumption that Engineering, more used to dealing with the real world, doesn't use. I'm assuming the control mechanism does not change with time; all control theory assumes this.

One wishes to compare the usefulness of a subset of total inflation, one with less total noise, to control total inflation. Fine.

See this page on what a closed-loop control looks like: http://en.wikipedia.org/wiki/Closed-loop_transfer_function

I can feed core inflation in and look at total inflation coming out. Time invariance of the control function lets me do that.

This is the difference between the economics and the engineering mindset. Engineering looks first at the plumbing to see what's possible. We see what process we will use, what errors are likely and can be tolerated, and go from there to get the best result we can. Economics wants a perfect transformation from set 1 to set 2 and doesn't care about the plumbing. Then, when you get error or imperfection, complaints start to pile up.

From an engineering standpoint of control theory, rational expectations is an entirely unrealistic assumption. I've seen this difference before when I mentioned Ben Graham and his anticipation of index funds. My rebuttal then was that process matters. In engineering process matters. Index funds didn't exist in the 1940's because the concept of mutual funds was still new. Vanguard sold its product in the 1970's on the basis of low management expenses using the existing retail mutual fund model. That couldn't happen without the existence and performance record of actively managed funds. The process of getting there matters.

Or to put it another way:

You want a control to use a specific rule and adapt perfectly to any given information set such that there is no systemic error. You can't have a rules-based system without systemic errors.

Well, Krugman doesn't seem to have your type of problem with it.

Rational expectations is about expectations.

What the Fed can or cannot do is a question of capacity.

If the best the Fed can do is control inflation with serially correlated errors, then rational expectation will expect serial correlation of errors.

Central Bank announcements and policy targets are about politics and marketing.

By combining all three, Nick is arguing that we are in a wonder-world in which CB marketing must be matched by capacity, because Nick believes the marketing, and because of rational expectations, he must be right, therefore the capacity must be there.

It is so sneaky it is funny.

Which is why I asked whether a rational person should believe the CB or not, and how they could tell whether they should.

Suppose we don't all take the CB at their word?

Thanks RSJ.

The engineering mindset does not sit well with marketing. We look at capacity and couldn't care less about marketing. Engineering education is all about analyzing and optimizing capacity.

A little OT, since it wasn't Nick's point but the macroblog graph is confusing me: core is headline with noise removed. So, when using each as an estimator, shouldn't we expect their RMS error to converge as we increase the sample size? Intuitively, the noise 'comes out in the wash'. If that's the case, I'm really not getting what the original St. Louis Fed piece is on about. It'd be news if headline and core didn't converge something like 36 months out, but they do. So all is well, right?
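The "comes out in the wash" intuition can be sketched with a toy calculation (noise levels invented): estimate a constant trend by averaging noisy "headline" readings versus less-noisy "core" readings, over windows of different lengths, and watch the gap between their RMS errors shrink.

```python
import numpy as np

rng = np.random.default_rng(0)
trend = 2.0      # "true" underlying inflation trend (illustrative)
trials = 2_000

def rmse_of_mean(noise_sd, window):
    """RMS error of estimating the trend by averaging `window` noisy readings."""
    draws = trend + rng.normal(0, noise_sd, size=(trials, window))
    return float(np.sqrt(np.mean((draws.mean(axis=1) - trend) ** 2)))

# headline = trend + big noise; core = trend + small noise
results = {w: (rmse_of_mean(1.5, w), rmse_of_mean(0.3, w)) for w in (3, 12, 36)}
```

Both RMS errors fall as the window grows, and the headline-minus-core gap narrows, which is the convergence the macroblog graph shows at long horizons.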


"you are mixing 2 separate questions: what is the target variable; and what is the (set of) indicators we use to help us control that target variable."

The goal is to control headline inflation (average room temperature), no? That never changed. Certain pockets are experiencing high levels of inflation and are beating the drum to do something about it (i.e. my gran). What is up for question is what measurements to make to improve the system. As I stated previously, coming from a control engineering perspective, yes we possibly can control the system better with more variables but we don't have to. Monitoring the core is enough and the data back this up. Whether or not this is due to expectations is a separate discussion. We need not dissect the inner workings of the machine, which includes "expectations", to observe that inflation, both core and averaged headline, is well behaved under current policy.

@Patrick, the Fed is under immense pressure to raise rates based on headline inflation. The Fed's post was an attempt to explain why they don't have to react to recent upticks in headline inflation.

Have a great weekend.

I didn't get that the St. Louis Fed piece was defending core. I could be wrong. Maybe they're just using Fed speak or I don't speak no good english ... but if Altig and the St. Louis Fed guy are both looking at the same data and coming to different conclusions ... Yikes.

RSJ: Let me accept your model:

"And suppose that the total inflation rate in period n is the sum of the influence of the current steering plus all the past steerings:

p_n = 1/2(x_n) + x_{n-1} + 2x_{n-2} + x_{n-3} + x_{n-4} + white noise"

But make one change. Replace the white noise shock by a shock that is partly forecastable 3 periods ahead. For example, the shock could be an AR(1) process.

You (I mean "you" because my arithmetic is cr*ap) will now be able to solve for the optimal reaction function x(t) = R(x(t-1), x(t-2),x(t-3), x(t-4), shock(t)) that makes E[p(t+3) conditional on Information at time t and earlier] = 0% (or 2% or whatever). And you will also find that the deviations of inflation from the target will be an MA(2) process, and that p(t) will be uncorrelated with any variable dated (t-3) or earlier. That means the bank can target inflation at a 3 period horizon.

You will also find that if the bank tries to target inflation at a 1 period horizon it isn't really feasible, because this will lead to explosive oscillations in x(t). (I think you will get the same thing if it tries to target inflation at a 2 period horizon as well).

Then, maybe, you will see what I am talking about.
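The explosive-oscillation claim can be checked mechanically with RSJ's lag weights. Forcing the current-period term to cancel all the inherited lag terms each period means (1/2)x_n = -(x_{n-1} + 2x_{n-2} + x_{n-3} + x_{n-4}), and that recursion blows up (a sketch with the noise dropped and the target taken as 0):

```python
# Recursion implied by zeroing p_n each period in RSJ's model
# p_n = (1/2)x_n + x_{n-1} + 2x_{n-2} + x_{n-3} + x_{n-4} + noise,
# i.e. x_n = -2*(x_{n-1} + 2*x_{n-2} + x_{n-3} + x_{n-4})
x = [1.0, 0.0, 0.0, 0.0]  # a single unit impulse in the instrument
for n in range(4, 60):
    x.append(-2.0 * (x[n - 1] + 2.0 * x[n - 2] + x[n - 3] + x[n - 4]))
# the instrument path oscillates with ever-growing amplitude
```

The characteristic polynomial has a root outside the unit circle (the product of its roots is 2 in modulus), so any short-horizon targeting rule of this form forces ever-larger swings in the instrument — which is why a longer targeting horizon is needed.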

"Replace the white noise shock by a shock that is partly forecastable 3 periods ahead"

Yes! Then you would be right. But I am assuming that shocks are not forecastable by the CB, which is why in my model I argued that the CB reacts to the decision variables as they are revealed, rather than before they are revealed.

This is how actual control processes work. You do not get to leap ahead of time. To the best of my knowledge, neither the current recession nor the housing downturn was forecast by the CB, neither was the oil embargo, etc.

So it again boils down to capability versus marketing. On what basis should we believe that the CB is capable of forecasting shocks? We can look back at previous Fed meeting minutes and see what they were worried about -- I believe the concern du jour prior to the crisis was a collapse in the value of the dollar.

Now maybe I am wrong, and the CB *is* able to forecast shocks and that CB inflation errors are not serially correlated. I am open to that. But at least a case needs to be made -- it's a non-obvious assertion that requires evidence, right? And even if this is true for a period of time, what would guarantee that it would continue to hold in the future?

In the above, I should have said that the CB reacts to the state variables, obviously!

But about rational expectations, the Michigan survey is useful because they asked people about inflation expectations for the next year as well, and at least in that survey, inflation expectations lagged behind actual inflation on the way up (pre-Volcker) and subsequently the public was over-estimating inflation on the way down. You can argue that the mean over the entire window was zero (I haven't checked this, but it seems reasonable), nevertheless the errors were serially correlated.

It's one thing to say that, on average, expectations will be rational ex-post, when averaged over long time periods, and quite another to say that errors won't be serially correlated, or that you have a hard bound on what the time interval will be.

I think the latter requires that people be endowed with some organ that they don't actually have, whereas the former requirement is easier to believe. And in this sense, I view the CB as having no better predictive powers than those who answered the Michigan survey.

RSJ: we are now getting closer to being on the same page (thankfully!).

"Now maybe I am wrong, and the CB *is* able to forecast shocks and that CB inflation errors are not serially correlated. I am open to that. But at least a case needs to be made -- it's a non-obvious assertion that requires evidence, right? And even if this is true for a period of time, what would guarantee that it would continue to hold in the future?"

Here's how I look at it. There are lags in monetary policy. It takes time for the central bank to get information on shocks, react to shocks, and for inflation to react to monetary policy. If those shocks were unforecastable white noise, and if those shocks had an immediate impact on inflation, there would be no way the central bank could react to them, given those lags. All it could do would be to hold monetary policy constant, and hope for the best.

By asking this question, we are implicitly assuming that there is some information the bank can usefully react to. It can't keep inflation exactly constant, but it can do better than just holding monetary policy constant.

On your second comment, remember that when I talk about rational expectations here, I am only saying that the central bank *ought* to have rational expectations. I am saying nothing about other people's expectations, or whether their expectations even matter.

Put it another way. Unless you say that the central bank should hold the interest rate constant (and nobody says that) you must implicitly be assuming there is some information the central bank can usefully react to. And if it says, like the Bank of Canada says, that it is reacting to its information in such a way to keep its forecast of inflation 2 years in the future at 2%, we can test empirically whether it is reacting to that information in such a way to make its forecast rational.


Nick, I think the main mistake in your argumentation is right in the bold paragraph of your article:

"Everything ought to look useless by that test, if the bank is doing it right."

Why do you assume that the central bank is "doing it right"? Why do you assume that it and the rest of the economy is operating under "rational expectations"?

Yes, in an ideal world you would be correct to point out that if the actions of a CB constitute a Markov process then there can be no forecasting performed on the observed time series - because this is the very definition of a Markov process.

Do you claim that we live in an ideal world?

Do you claim that CBs are employing policy action that is a Markov process?

The article you are criticising is making no such assumption: it simply states, by observing the real world, that if a CB wants to improve its targeting accuracy in this real world then it should go from headline inflation to core.

That is a very simple observation of reality.

How is your information-free statement that in essence says that a Markovian process is a Markovian process relevant to that observation and to the resulting discussion?

White Rabbit: "Why do you assume that it and the rest of the economy is operating under "rational expectations"?"

I make no assumptions whatsoever about whether the rest of the economy has rational expectations.

I am *not* asserting that the bank has rational expectations.

I am asserting the normative claim that the central bank *ought* to have rational expectations.

What this post is about is how we *interpret* a zero or non-zero correlation between indicator and (future) target variable.

A zero correlation does not tell us that the indicator is useless. It tells us only that the bank is responding to that indicator rationally (which might include ignoring it).

A non-zero correlation tells us only that the bank does not have rational expectations. It does not tell us that the indicator is useful.

Perhaps a simple (non-numerical) example will help.

Imagine a one-dimensional table with a ball rolling on it. The CB wants to keep the ball at the origin. If the ball is to the left, it tilts the table so the ball slides back to zero. If the ball is to the right, it tilts the table in the opposite direction.

If we are to make a guess as to where the ball will be 2 years from now, we would guess it would be at the origin.

But, if we know that the ball will be to the left of the origin in 2 years, then we would rationally guess that 2 years + 1 day from now, it will also be to the left. Similarly, conditional on the ball being to the right of the origin in 2 years, we would rationally expect it to continue to be to the right of the origin in 2 years + 1 day.

The errors are serially correlated, even though the mean of the errors is zero.

And this would be true for any control that has "inertia" -- i.e. for any situation in which the energy or effort needed to push the ball back to the origin from a displacement away from the origin goes to infinity as the time allotted to push the ball back to the origin goes to zero.

I think it's reasonable to assume that inflation has this inertia property.
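RSJ's ball example can be sketched as an AR(1) process (toy parameters): if the controller can only undo a fraction k of the displacement each period — the inertia — then deviations from target inherit a positive lag-1 autocorrelation of roughly 1 - k.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000
k = 0.2          # fraction of the displacement the controller undoes per period
x = np.zeros(n)  # deviation of the "ball" (inflation) from the origin (target)
for t in range(1, n):
    control = -k * x[t - 1]                 # inertia: only partial correction
    x[t] = x[t - 1] + control + rng.normal(0, 1)

# conditional on being above target, the ball tends to stay above target:
# the deviation follows an AR(1) with coefficient 1 - k
lag1 = float(np.corrcoef(x[:-1], x[1:])[0, 1])
```

The mean of the deviations is zero, but the errors are strongly serially correlated, exactly as described.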

*everything* has this property, so you _always_ get serial correlation of errors in any controlled process.

God, this is *really* frustrating. Non-stupid people keep on not getting my simple point!

Let me try a simple example:

Assume the economy is P(t+1) = aZ(t) + bM(t) + S(t+1) where P is inflation, Z is some indicator, M is monetary policy, and S is a white noise shock, and a and b are fixed parameters. In this model, Z is useful iff a is not zero.

Let the central bank follow a reaction function M(t)=-R.Z(t)

If R=a/b, then P(t+1) = S(t+1) and there is zero correlation between P(t+1) and Z(t). The feds will say that Z is useless, even when it might not be.

If R is less than a/b, then there is a positive correlation between P(t+1) and Z(t). The feds say Z is useful, even when it might not be.
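A toy simulation of this model (parameter values invented) reproduces both cases:

```python
import numpy as np

def indicator_correlation(R, a=1.0, b=1.0, n=50_000, seed=7):
    """Simulate P(t+1) = a*Z(t) + b*M(t) + S(t+1) with M(t) = -R*Z(t)
    and return corr(P(t+1), Z(t))."""
    rng = np.random.default_rng(seed)
    z = rng.normal(0, 1, size=n)
    s = rng.normal(0, 1, size=n)
    p_next = a * z + b * (-R * z) + s   # = (a - b*R) * Z(t) + S(t+1)
    return float(np.corrcoef(p_next, z)[0, 1])

corr_optimal = indicator_correlation(R=1.0)  # R = a/b: Z looks useless
corr_under = indicator_correlation(R=0.5)    # R < a/b: Z looks useful
```

With the optimal reaction the measured correlation is indistinguishable from zero even though a is not zero, which is the whole point: the zero correlation reflects the bank's response, not the indicator's uselessness.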

RSJ: (my above comment was in response to WR).

I like your example.

Except: "But, if we know that the ball will be to the left of the origin in 2 years, then we would rationally guess that 2 years + 1 day from now, it will also be to the left. Similarly, conditional on the ball being to the right of the origin in 2 years, we would rationally expect it to continue to be to the right of the origin in 2 years + 1 day."

No we wouldn't. Not if the bank tilts the table to get the ball back to zero, on average, at a 2 years horizon. The bank of Canada does not say it will act to ensure that inflation approaches 2% asymptotically. It says 2 years.

The errors will not be AR(1). They will be MA(23). (That's 23 months).

"Unless you say that the central bank should hold the interest rate constant (and nobody says that) you must implicitly be assuming there is some information the central bank can usefully react to."

Even in a world where monetary policy really can control (not merely influence) inflation, a CB would recognize that there is value in interest rate stability. Large random jumps in interest rates with no smoothing would not be the optimal policy.

RSJ: Yep. You are assuming that "serial correlation" of the bank's forecast errors means an AR process, which only dies away as time goes to infinity. But it will be an MA process, not an AR process. If it's an MA(23) process, then the 24 month ahead expectation will be zero.

Max: agreed. That is presumably one reason why central banks try to get inflation back to target in 2 years, not next month.

RSJ: just to clarify. The shocks hitting the economy may well be an AR process. But if the bank is targeting inflation at a 24 month horizon, the bank's forecast errors will be MA(23).
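The MA-versus-AR distinction is easy to verify numerically: an MA(q) process has autocorrelations that cut off exactly beyond lag q, so MA(23) forecast errors are unforecastable 24 or more months out. A sketch using a short MA(3) for illustration (coefficients invented):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
w = rng.normal(0, 1, size=n)      # white-noise innovations
theta = [0.8, 0.5, 0.3]           # illustrative MA(3) coefficients

# e(t) = w(t) + 0.8 w(t-1) + 0.5 w(t-2) + 0.3 w(t-3)
e = w.copy()
for j, th in enumerate(theta, start=1):
    e[j:] += th * w[:-j]

def acf(series, lag):
    return float(np.corrcoef(series[:-lag], series[lag:])[0, 1])

acf_within = acf(e, 1)   # inside the MA window: clearly non-zero
acf_beyond = acf(e, 5)   # beyond lag 3: indistinguishable from zero
```

Unlike an AR process, whose autocorrelations only decay asymptotically, the MA errors are serially correlated within the window and dead beyond it.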

OK, I'm not making any assumptions about the shocks -- there is no reason to believe that the *shocks* are an AR or MA process.

Without a central bank, if there were only shocks -- the price level would be undefined. Therefore you cannot say anything about the shocks.

What I am saying is that even though you know that the expectation (e.g. mean) of inflation in 2 years will be 2%, nevertheless you also know that the probability of inflation being *exactly* 2% is zero.

Inflation will always be above or below 2%. And conditional on inflation being above target, you also know that the next day, it is likely to remain above target. And conditional on inflation being below target, the next day it is likely to remain below target -- with probability 1.

With probability 1, the errors, or difference between actual and target inflation -- are going to be serially correlated over any non-zero interval of time.

"Without a central bank, if there were only shocks -- the price level would be undefined"

RSJ, seriously, was there no price level in the US before 1913?

Why do you so frequently undermine your own argument with a blatantly false statement that doesn't appear to be essential to your point anyway?

Nick,

I don't think these guys are missing your point, I think that you're missing theirs. Seems to me you have an entirely different underlying model to theirs.

In particular, I think the difference is that the engineers are assuming that the mapping from control variable to objective is itself noisy.

As far as I can tell you're viewing the world through the lens of MV=PY with shocks to V. Thus, in your view if a velocity shock hits the bank tries to offset it. In the absence of a further shock to V they are always able to do this, if they're doing it right. Further, if they're doing it right they should optimally predict V (to the extent this is possible) and so only miss if the change in V is truly a shock. Is that accurate?

Thus, only unpredictable shocks to V cause them to miss and you get your result.

The engineers probably have in mind something like PY = MV + noise, in this case even if the CB does all it can in offsetting a velocity shock, it is not the case that in the absence of a new shock to V it hits the target with probability one.

If we take V as the inverse of the demand for real balances, as it is supposed to be, and not simply PY/M, then I think the engineers are correct here.

"P(t+1) = aZ(t) + bM(t) + S(t+1)". Actually you probably meant P(t) = aZ(t) + bM(t) + S(t+1)

Look very closely at what you've formulated here Nick. Your system determines its response at time t by looking at input from time t+1. Your system is not causal. In fact it is acausal.

You have violated causality.

In the real world, you can only definitely know things that happened now, or in the past. You can't know the future. You can guess at the future, but not know it definitively. That's causality. Now if you wish to use a non-zero correlation between core and total inflation, Bayes Rule is a good place to start. But in order to usefully construct a system that looks into the future, you have to start using probabilistic methods like Bayes Rule.

It may be simple, but when it comes to control systems the math is deceptively complex. There is a great deal of complexity in simple-looking equations and your first instinct is often wrong. You have to derive then analyze.

RSJ: let me re-state, to maybe resolve our differences here.

Assume the shocks to inflation when the central bank is optimally targeting 2% inflation at a 24 month horizon, so that E[P(t+24)/I(t)]=2%, are an MA(23) process. (I assert that they will be, or rather, I assert that they will be an MA process of up to 23 months). In math, P(t) = 2% + shock(t) where shock(t) is MA(23).

The expectation at time t of inflation at time t+23 will (almost certainly) not equal 2%. But the expectation at time t of inflation at time t+24 will nevertheless equal 2%.

(My statement above does not contradict what you were saying, if I re-interpret you correctly.)

Adam: I don't really see the difference.

The way I look at it is this: assuming the bank targets 2% inflation at a 24 month horizon, then it sets E[P(t+24)/I(t)]= 2%, where I(t) includes the instrument, and the vector of indicators, plus lagged values of indicators and instrument. Then I impose the standard bit of econometrics that P(t+24)=E[P(t+24)/I(t)]+S(t+24)=2%+S(t+24) where S(t+24) must be unforecastable (uncorrelated, orthogonal) with respect to I(t). And S(t) must be an MA(23) process (or less than 23).

If the bank expects P(t+24) to be above 2% it should tighten monetary policy, and if less than 2%, it should loosen monetary policy.

If monetary policy has no effect on P(t+24) then this won't work. If the bank has zero relevant information this won't work. If the distributed lag structure is ugly enough to cause ever-increasing oscillations in the monetary policy instrument this won't work. Under those circumstances, inflation targeting at t+24 is impossible. Otherwise, it's just econometrics.
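The test described here is, concretely, a regression of P(t+24) - 2% on anything in the time-t information set; under successful targeting the slope should be indistinguishable from zero. A hedged sketch with simulated data standing in for the real series:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 5_000

# simulated data in which targeting succeeds: deviations of inflation
# from 2% are orthogonal to the lagged indicator by construction
z = rng.normal(0, 1, size=n)                  # indicator dated t (e.g. core)
p_future = 2.0 + rng.normal(0, 0.5, size=n)   # inflation at t + 24

# regress P(t+24) - 2% on the lagged indicator; the slope should be ~ 0
X = np.column_stack([np.ones(n), z])
beta, *_ = np.linalg.lstsq(X, p_future - 2.0, rcond=None)
slope = float(beta[1])
```

On real data the regressors would be the bank's actual lagged indicators and instrument; a significantly non-zero slope would reject the rationality of the bank's forecast at that horizon.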

I normally tell the story in a New Keynesian way, assuming monetary policy instrument is R(t) rather than M(t). But it makes no difference to the econometrics.

Determinant:" "P(t+1) = aZ(t) + bM(t) + S(t+1)". Actually you probably meant P(t) = aZ(t) + bM(t) + S(t+1)"

No, I meant "P(t+1) = aZ(t) + bM(t) + S(t+1)". Monetary policy affects inflation with a lag. Some of the stuff the bank observes (Z) affects inflation with a lag, so the bank can counteract it. And other stuff S affects inflation too quickly for the bank to counteract it.

Then you canonically have a non-causal and therefore unstable system. You may try to implement it with Bayesian methods and core inflation ought to make a good prior for Bayesian estimation, but your equation is not a well-behaved system.

Which brings me to my next point. The Fed's study is extremely suggestive of using core inflation as a Bayesian prior, surely as an economist you would agree?

Determinant: You lost me. Are you telling me that causes can't have lagged effects? Are you telling me that lags always mean the system is unstable? Regardless of the central bank reaction function?

My economist's intuition tells me that core is probably a useful indicator for the central bank to look at. In the sense that it *would* be a good Bayesian prior of (say) 2-year ahead inflation *if* the central bank didn't react strongly when core changes. But I want empirical evidence to back up my intuition. And the Fed's study is no help, because the Fed is probably reacting to core, as well as to total, so I can't tell what *would* happen if the Fed *didn't* respond to core. The empirical work I did do for Canada, 10 years ago, using the method outlined here, did suggest that core might be useful, IIRC, but my data series was too short to get strong results. And that was for Canada, not the US.

Adam P,

I was referring to the steady state being defined. Of course, there are always prices. But prior to 1913, if you looked at the price level, it veered up and down and followed no observable AR pattern. Have you ever read "This Time Is Different"? They have long term price data, and while there is a general bias towards inflation over deflation, there doesn't seem to be any obvious characterization of the inflation rate as m + MA(N) for some N as low as 24 months. The "m" is what is undefined.

Which addresses my point to Nick..

Suppose, without a central bank, that the price level is m + MA(infinity), or m + MA(very large number). Basically you cannot determine m with any confidence. But the shocks are i.i.d.

With a central bank, inflation is mean-reverting, due to the interventions in response to (and as you would argue, in anticipation of) the shocks. As the CB shocks the economy in the opposite direction, it can reduce the effective period and cause the errors to be a MA(23) process.

However, the terms are not i.i.d., because they include interventions in response to the shocks. The interventions are not distributed independently of the shocks. Now your conclusions about S no longer follow.

Try this definition of causality:

Causal System:
The output at any time t depends only on past or current inputs, and any dependence on the output itself involves past outputs only.

Or to put it another way, for a non-causal system the excitation starts before t=0. Normally we say that if y(0) != 0 the system is not causal.

Second, you never express something as y(t+T). y is the dependent variable. Lags are expressed on the independent side of the equation.

Let's rephrase.

P(t) = aZ(t-1) + bM(t-1) + S(t)

"Monetary policy affects inflation with a lag."

Further, you always want P(t), current inflation, to be K%. So

K% = aZ(t-1) + bM(t-1) +S(t) where S(t) is a random variable representing the shock.

Rearranging, you want K%-S(t) = aZ(t-1) + bM(t-1). You want the indicator and monetary policy at t-1 to anticipate the shock at t. Remember definition 1? This contradicts it since S(t) is a random variable. You want your indicator and monetary policy together to anticipate a random variable.

Rearranging further, you said Z(t) is useful if and only if a is not zero. You also defined monetary policy as M(t)=-R*Z(t)

So, K%-S(t) = aZ(t-1) + b(-R*Z(t-1))

K%-S(t) = (a-Rb)Z(t-1)

Which means the indicator has to satisfy S(t) = K% - (a-Rb)Z(t-1). Alternatively, Z(t) = (K% - S(t+1))/(a-Rb)

This is the causality problem right here. Your indicator isn't causal; it isn't reacting to any past or present physical inputs, it is anticipating future shocks. Now, as I said, you can *estimate* to determine your indicator, but your posts always seem to want it to behave perfectly. You can't physically realize this indicator perfectly. You can use approximations and estimates, Bayes Rule being an excellent place to start, but us engineers have a horrible time when you economists just assume that the indicator will be perfect and then get angry when it isn't. We say "of course it's not perfect, what do you expect when you have a look-ahead system"

Determinant:

1. Your P(t) = aZ(t-1) + bM(t-1) + S(t) is exactly the same as my

P(t+1) = aZ(t) + bM(t) + S(t+1)

2. Since the central bank does not know S(t+1) at t, because S is iid, the best it can do is set M(t)=-(a/b)Z(t). If it does this, P(t+1) = S(t+1), which is the best it can do to keep inflation as close as possible to the target (assumed 0%).

I know, that's why I reformulated it to follow conventional notation. It makes analysis easier and it's what I'm used to. Independent variables on the right, dependent variables on the left, lags shown on the right not the left.

Right, so the best control a central bank can achieve is total inflation equal to the random shock. Not the K% target you enunciated. Yet somehow a central bank is supposed to analyze all information and react in such a way that this is achieved. How that is supposed to happen with variables available right now is the question that many in this thread, including myself, are having trouble with.

Again, calibrating the parameters in your indicator variable Z(t) = (K% - S(t+1))/(a-Rb) with Bayesian methods related to core inflation input seems as good a place as any to start, but I don't see how you can justify the strenuous objection you outlined in your OP based on the system you have here.

I don't get this. What Nick is saying ought to be totally uncontroversial in principle. The amount by which the CB will miss will be unpredictable, conditional on the information available to the CB 24 months ahead of the measurement date. If this is not true, then either 1) they aren't really trying or 2) they don't have the means to affect it.

I argued above that there is plenty of evidence for 1. I.e. policy changes are serially autocorrelated (like ratings), which is strong evidence that they are not taking all available information into account; and secondly that the CB shouldn't really be trying to totally neutralize shocks, because of inertial effects: if they correct shocks back to zero in finite time, then they will overshoot. If inflation is a second-order system, they should aim to damp shocks like a critically damped spring, i.e. asymptotically. If the system were first order instead of second order, they could kill shocks arbitrarily fast by moving rates arbitrarily much, in which case they wouldn't need 24 months. The reason they don't do this is that they know that higher-order terms can cause insane oscillations, as you point out, Nick. And if they know about higher-order terms, then they can't seriously mean 24 months, either. Asymptotically, at the highest rate possible, is the only coherent policy.

But to argue for 2), they don't have the means, doesn't make any sense to me. If they really wanted to kill every shock with a 24 month lag, then (ignoring the ZIRB) they are fully empowered to do so *without predictable error*, by moving rates arbitrarily much. And even adding uncertainty in the knowledge of the control function is not an excuse to miss in *predictable* ways.

Nick, I re-read the macroblog post (again) and it's more clear to me that they are not necessarily using core as a predictor. Maybe they are but all I'm getting is that they are stating that core provides better "controllability" than looking at headline. They do not explicitly state they are using core as a predictive indicator to formulate current monetary policy, only that it validates using core as the sole feedback measurement (even though they use a different but similar method) ex post.

Now if the Fed is using some Bayesian Witchsmeller mumbo jumbo to forecast future core based on... core... within a closed loop system, well, I'll send over the ghost of Nyquist to slap them upside the head. More likely, though, they are looking backwards, not forwards.

Nick, I'm sure you are tired of repeating yourself but please correct me if I am wrong here. Your argument rests on the following assumptions:

1. The CB behaves rationally (nothing wrong with that)

2. At time t, the CB is able to optimally exploit an information set so as to set its policy to achieve an inflation target at t+h (2% at the 2-year horizon in Canada)

So, if the CB is not able to respond optimally to the information set there may be deviations from target that one can forecast. Now, it is not hard to imagine that the CB's response to its information set is imperfect (despite rational behavior), especially given the uncertainty inherent in unobservable measures such as the output gap that the CB is known to follow. Furthermore, your argument only holds for the horizon at which the CB wishes to achieve the target. Does that mean that even if your case holds, core could still be useful for forecasting total inflation at other horizons? Say, 12 or 36 months rather than 24 months.

K,

So if E[X_t] = 0, then X_t is i.i.d.? There are no serial correlations?
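As an aside, a zero mean on its own certainly does not rule out serial correlation. A minimal sketch (hypothetical AR(1) parameters, for illustration only):

```python
# Minimal sketch: a zero-mean AR(1) process is serially correlated,
# so E[X_t] = 0 does not imply X_t is i.i.d. (parameters hypothetical).
import random

random.seed(0)
phi = 0.8                      # persistence
x, xs = 0.0, []
for _ in range(100_000):
    x = phi * x + random.gauss(0.0, 1.0)
    xs.append(x)

mean = sum(xs) / len(xs)
# lag-1 autocorrelation (mean is ~0, so no demeaning needed here)
rho1 = sum(a * b for a, b in zip(xs[:-1], xs[1:])) / sum(a * a for a in xs)
print(round(mean, 2), round(rho1, 2))  # mean near 0, rho1 near 0.8
```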

Mik: that's very close.

"Does that mean that even if your case holds, core could still be useful for forecasting total inflation at other horizons? Say, 12 or 36 months rather than 24 months."

If h=24, then core (or anything else) could (perhaps) still be useful in forecasting total inflation at *less* than 24 months ahead, but it could *not* be useful in forecasting inflation at *more* than 24 months ahead.

Here's the explanation of why it would not be useful at (say) 36 months: remember that the bank's information set at t=12 includes core at t=0 (assuming the bank does not forget old data). And since the bank at t=12 is targeting total inflation at t=36, inflation at t=36 must be unforecastable from the bank's information set at t=12, which includes core at t=0.
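That horizon argument can be checked in a stylized simulation (entirely hypothetical processes and parameters): if the CB fully neutralizes shocks after 24 months, headline deviations from target depend only on the last 24 months of shocks, so core helps forecast headline 12 months ahead but not 36 months ahead.

```python
# Stylized sketch (hypothetical): core is a persistent AR(1) driven by shocks
# u; a CB with a 24-month horizon lets only the last 24 months of shocks show
# up in headline's deviation from a 2% target. Core then forecasts headline
# at 12 months but not at 36 months.
import random

random.seed(1)
N, phi = 100_000, 0.95
u = [random.gauss(0.0, 1.0) for _ in range(N)]

core = [0.0] * N
for t in range(1, N):
    core[t] = phi * core[t - 1] + u[t]            # persistent AR(1)

headline = [2.0] * N
for t in range(24, N):
    # deviation built only from the last 24 months of shocks
    headline[t] = 2.0 + sum(phi ** j * u[t - j] for j in range(24))

def corr(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(xs, ys)) / n
    vx = sum((a - mx) ** 2 for a in xs) / n
    vy = sum((b - my) ** 2 for b in ys) / n
    return cov / (vx * vy) ** 0.5

start = 100                                        # burn-in
ahead12 = corr(core[start:N - 12], headline[start + 12:])
ahead36 = corr(core[start:N - 36], headline[start + 36:])
print(round(ahead12, 2), round(ahead36, 2))        # positive, then near zero
```

By construction, core dated 36 months back shares no shocks with today's deviation, so the long-horizon correlation is zero, exactly as the information-set argument implies.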

"Now, it is not hard to imagine that the CB's response to its information set is imperfect (despite rational behavior), especially given the uncertainty inherent in unobservable measures such as the output gap that the CB is known to follow."

Strictly speaking, when we test whether the bank has been responding rationally to its information, we should be using *real time* data (i.e. unrevised data). Measures like the output gap are often revised, and Simon van Norden has shown that revised measures of the output gap often differ a lot from real time measures of the output gap. If we used final revised data, it would be a bit unfair, because that final revised data was not available to the bank at the time.

(There is also the problem inherent in any test of rational expectations: we use the whole data series to test for correlations, when the bank had a smaller data set, and the structure of the economy may be changing over time. Strictly speaking, we should use some sort of rolling sample method to do this test of whether the bank is responding rationally.)

Which is unrealistic in any event. I much prefer a Bayesian interpretation, whereby the probability expresses a degree of confidence that an outcome will occur given a specified prior, rather than a frequency.

Bayesian interpretations are both much more intuitive and much more realistic.

C'mon, admit the Central Bank isn't perfect and be Bayesian. Become One with the Bayesians. :mad cackle:

Determinant: let me give a good Bayesian gloss. The central bank, as a good Bayesian, looks back over its past performance for systematic mistakes, so it can adjust its decision rule accordingly. It will look for non-zero correlations, and when it finds some it will adjust the weight it places on those indicators.

I appreciate the response. My output gap example was not a good one. As you point out, the CB's optimal response can only be seen in terms of what is available to it at the time. I guess more broadly speaking I meant it is probably unrealistic to view monetary policy decisions as optimal responses to an information set, despite the best efforts of a CB to react optimally to the information at hand.

All of the core inflation measures currently used at the BoC have been shown to have good forecasting properties for total inflation. On the other hand, simple contemporaneous correlations with the policy rate vary considerably depending on which core measure is used. While simplistic, in this framework your conclusions could range from 1) They should pay more attention to core to 2) They are paying too much attention to core. Depends on which core measure is used!

RSJ: I'm not saying anything about X(t). It doesn't matter what X(t) is, since the relevant process we are discussing is E_t[X(T)], which is a martingale. If you calculate an expectation E_t[X(T)] based on all available information at time t, then you can't improve that calculation by taking into account some random variable whose value was already in the information set at the time of the expectation. It's obvious, but it nevertheless has a name in probability theory: the law of iterated expectations.
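The martingale claim can be checked numerically. A minimal sketch (hypothetical AR(1) dynamics, for illustration): holding T fixed, the forecast M_t = E_t[X_T] has increments that are mean zero and uncorrelated with anything already in the time-t information set.

```python
# Minimal sketch (hypothetical AR(1)): with X_{t+1} = phi*X_t + u_{t+1},
# the conditional expectation M_t = E_t[X_T] = phi**(T - t) * X_t is a
# martingale in t: its increments are mean zero and uncorrelated with
# anything known at time t (here, the earlier value X_{t-5}).
import random

random.seed(2)
phi, T, t = 0.9, 30, 20
incs, past = [], []
for _ in range(50_000):
    x = [0.0]
    for _ in range(T):
        x.append(phi * x[-1] + random.gauss(0.0, 1.0))
    m_t = phi ** (T - t) * x[t]            # E_t[X_T]
    m_t1 = phi ** (T - t - 1) * x[t + 1]   # E_{t+1}[X_T]
    incs.append(m_t1 - m_t)
    past.append(x[t - 5])                  # already known at time t

n = len(incs)
mi, mp = sum(incs) / n, sum(past) / n
cov = sum((a - mi) * (b - mp) for a, b in zip(incs, past)) / n
vi = sum((a - mi) ** 2 for a in incs) / n
vp = sum((b - mp) ** 2 for b in past) / n
rho = cov / (vi * vp) ** 0.5
print(round(mi, 3), round(rho, 3))         # both near zero
```

The increment works out to M_{t+1} - M_t = phi**(T-t-1) * u_{t+1}, a pure innovation, which is why nothing dated t or earlier can forecast it.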

K,

No, the E_t(X_t) is *not* a martingale, and I challenge you to prove that it is. You will find that you are assuming your conclusion...

but, RSJ, for T > t, E_t[X(T)] *is* a martingale and that is what K asserted.

"for T > t, E_t[X(T)] *is* a martingale and that is what K asserted."

I guess the question is what *empirical* evidence, not just assertion, is there for the "stochastic" process modelling interest rate adjustment by a CB to be a martingale?

Sure, we can assert anything, but based on what evidence?

Aha! I now finally understand K's comment @04.47am (with help from Adam)! Holding T constant, as t increases, the expectation of P(T) at time t will follow a martingale. Immediate implication of the law of iterated projections/expectations. Today's expectation of tomorrow's expectation of the day after tomorrow's rainfall = today's expectation of the day after tomorrow's rainfall. Not an empirical assertion at all. Doesn't even assume the CB is targeting inflation. Damn, that took me a while. (My brain was thinking that T = t + h, where h is a constant, rather than T being a constant.)
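The rainfall example is exactly the tower property being invoked; written out as a standard identity (holding T fixed, for t ≤ s ≤ T):

```latex
% Law of iterated expectations (tower property), for t <= s <= T:
\mathbb{E}_t\bigl[\,\mathbb{E}_s[X_T]\,\bigr] = \mathbb{E}_t[X_T]
% so, holding T fixed, M_t := \mathbb{E}_t[X_T] is a martingale in t.
```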

Nick:

Do you mean the Doob martingale construction (E_t[X(T)])? I am not sure how helpful the construction is by itself here, as it can be made from any arbitrary random variable with bounded expectation.

Nick: Yup. But the martingale property of conditional expectations was not my central point; it's just a nice way to think about the problem. The central point was the (bleedingly obvious) fact that you can't improve on the expected value of a process by taking into account information that was already available to you when you first computed the expectation. This is also the result of iterated projection. I.e. when we say that the CB could have done a better job of computing the expected future value of inflation, we mean *given all the information they had then*. But that information *obviously* includes the value of core inflation (and every other bit of the state of the world), so adding core inflation to the information set shouldn't improve their estimate. It seems like a silly point when you break it down, but for some reason it's not obvious on the face of it, which is why your post was clearly worth writing.

vjk: I just Googled "Doob martingale". First time I had heard the term. I *think* that's what I mean.

K: Yes. My whole point does seem bleedingly obvious, almost trivial, when you state it like that. Which is why I get frustrated when people don't get it!

Everyone: if you still don't get it, then re-read K's comment above (@11.44) slowly and carefully.
