
Comments


Some comments from a mathie who dabbles in economics:

> But he cannot make his misses precisely zero (unless the wind never changes). Proof is by contradiction: if he never misses, then he never adjusts his sights (because misses are the only information he can respond to), so if the wind changes he will miss. Which is a contradiction.

This is false (that is, not a contradiction). If the wind changes predictably, then the shooter can learn and account for both the current and future winds. The shooter will adjust their aim even as all shots hit the target. The shooter doesn't even need to understand the wind itself, only the predictable effect the wind has on the aim.

You need the wind to be partially unpredictable, not just changeable.
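To make the point concrete, here is a toy simulation (all numbers invented): a shooter facing a perfectly periodic wind learns one correction per phase of the cycle from his misses, and after one full cycle he never misses again even though his aim changes every shot.

```python
import numpy as np

period = 8
wind = lambda t: np.sin(2 * np.pi * t / period)  # perfectly periodic wind

# The shooter keeps one learned correction per phase of the cycle and
# updates it only from observed misses.
correction = np.zeros(period)
misses = []
for t in range(5 * period):
    aim = correction[t % period]
    miss = wind(t) - aim               # miss = wind effect minus adjustment
    correction[t % period] += miss     # learn from the miss
    misses.append(miss)

# After one full cycle the pattern is memorised: no more misses...
assert max(abs(m) for m in misses[period:]) < 1e-9
# ...yet the aim still changes every shot, so "never misses" need not
# imply "never adjusts" when the wind is predictable.
assert correction.max() - correction.min() > 1
```

So the proof-by-contradiction needs the wind's changes to carry some genuine surprise, not merely to change.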

> Now let us make a small change in the assumptions. Assume the misses Y(t) can only be observed with a 2-year lag.

With instant reactions and observations, you've implicitly defined a control problem for the central bank: it adjusts its target interest rate r(t) such that Y(t) stays close to its target. For any given control function, this sets up an (integro-) differential equation.

If you only observe misses with a lag, this instead replaces the problem with a delay differential equation. These are a hell of a lot more complicated, and they can turn otherwise stable systems into unstable systems if the controller tries to "fit the noise" rather than the signal.
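A toy discrete-time version of that instability (the lag and gain are illustrative only): the same proportional correction that kills a shock instantly when the feedback is fresh will oscillate and diverge when it acts on two-period-old observations.

```python
import numpy as np

def simulate(gain, delay, steps=60):
    """Miss process y, corrected with a proportional rule that only
    sees y with a `delay`-period lag."""
    y = np.zeros(steps)
    y[0] = 1.0  # one initial shock, then watch the controller respond
    for t in range(1, steps):
        observed = y[t - 1 - delay] if t - 1 - delay >= 0 else 0.0
        y[t] = y[t - 1] - gain * observed
    return y

prompt = simulate(gain=1.0, delay=0)   # reacts to fresh information
lagged = simulate(gain=1.0, delay=2)   # same aggressive gain, stale information

assert abs(prompt[-1]) < 1e-9          # shock corrected, stays corrected
assert np.abs(lagged).max() > 100      # same rule with stale data diverges
```

The controller ends up "correcting" departures it has in fact already corrected, which is exactly the overshoot-then-overcorrect cycle.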

When you add the spotter, you're introducing a forward model: the central bank takes its delayed observation of Y and propagates it forward based on how it expects its observed S to affect things. If the model is perfect, this does eliminate the delay; if the model is not perfect (such as if S is observed with some inaccuracy, or the model of how S affects Y is only a simplified model) then the central bank has to be more conservative. You cannot say in general that the target shooter can make misses arbitrarily small.
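A sketch of that forward model in a toy setup (the dynamics y(t) = y(t-1) + wind(t) - u(t) and the two-period lag are my assumptions for illustration): the bank pushes its stale observation of Y forward using the spotter's wind reports plus its own remembered instrument settings. With perfect reports the delay vanishes; with noisy reports an irreducible miss remains.

```python
import numpy as np

rng = np.random.default_rng(1)
steps, delay = 300, 2
wind = rng.normal(size=steps)          # the shocks S(t)

def run(report_noise):
    """y(t) = y(t-1) + wind(t) - u(t); the bank sees y only with a lag,
    but gets spotter reports of every shock not yet visible in the y data,
    and propagates the stale y forward through its (here, perfect) model."""
    y = np.zeros(steps)
    u = np.zeros(steps)
    for t in range(1, steps):
        stale_y = y[t - 1 - delay] if t - 1 - delay >= 0 else 0.0
        lo = max(1, t - delay)         # reports cover wind[lo .. t]
        reports = wind[lo:t + 1] + rng.normal(scale=report_noise,
                                              size=t + 1 - lo)
        known_u = u[lo:t].sum()        # the bank remembers its own settings
        nowcast = stale_y + reports[:-1].sum() - known_u  # estimate of y(t-1)
        u[t] = nowcast + reports[-1]   # offset nowcast plus today's forecast
        y[t] = y[t - 1] + wind[t] - u[t]
    return np.std(y[delay:])

perfect = run(report_noise=0.0)
noisy = run(report_noise=0.5)
assert perfect < 1e-9    # perfect spotter + perfect model: the lag vanishes
assert noisy > 0.1       # imperfect reports: misses cannot be made tiny
```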

Majro: Thanks for your comment.

"You need the wind to be partially unpredictable, not just changeable."

Hmm, you are right (I think). For example, if the wind steadily rose and fell in exactly the same pattern every day, the shooter could just raise and lower his aim according to the same pattern. So he would make a few misses on the first couple of days, but never miss again once he had figured out the pattern. OK, I should have said "change in an at least partially unpredictable way".

I'm not sure I properly understand your second point. Let me make 3 responses:

1. I assumed for simplicity the model is 100% accurate and everyone knows this. So the only uncertainty is in observing the wind. But I think I could relax that assumption, if I instead talked about uncertainty in the change in aim needed to offset changes in the wind, and the results would be equivalent, though harder to explain.

2. That aside, define e(t) as the error in estimating the wind, so actual wind(t) = estimated wind(t) + e(t). If e(t) is correlated with estimated wind (the anemometer speeds up or down at random) then I think you are right; the controller needs to be "conservative", putting less weight on the reported wind. But if it's a rational expectation of the wind, so e(t) is correlated with actual wind but not with estimated wind, then I think I disagree. (Which gets us into the whole question of whether market forecasts are informationally efficient rational expectations.)

3. That aside, I don't think we disagree. See my paragraph beginning "If the spotter can see the wind only imperfectly, the misses cannot be made arbitrarily small of course....."
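On point 2, a quick simulation of the distinction (the variances are invented): under classical measurement error the miss-minimising weight on the reported wind shrinks below one, but when the report is a rational expectation, so that e(t) is orthogonal to the estimate itself, the optimal weight is one.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
actual = rng.normal(size=n)                # actual wind(t), variance 1
noise = rng.normal(scale=0.7, size=n)      # e(t), variance 0.49

# Case A: classical measurement error -- the anemometer is noisy, so the
# error is correlated with the *estimate* and the optimal weight shrinks.
est_a = actual + noise
# Case B: rational expectation -- the estimate is E[wind | reading], so
# the remaining error is orthogonal to the estimate itself.
est_b = (1 / (1 + 0.7 ** 2)) * (actual + noise)

def best_weight(estimate, target):
    """OLS slope: the weight a miss-minimising shooter puts on the estimate."""
    return np.cov(estimate, target)[0, 1] / np.var(estimate)

assert best_weight(est_a, actual) < 0.75           # be conservative
assert abs(best_weight(est_b, actual) - 1) < 0.05  # take it at face value
```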

> 3. That aside, I don't think we disagree. See my paragraph beginning "If the spotter can see the wind only imperfectly, the misses cannot be made arbitrarily small of course....."

I think this is an in-the-limit/at-the-limit thing.

As a physical analogy, balancing a meter stick end-on in your hand is fairly easy. Doing so when you can only see the stick with a 30-second time delay is probably impossible. If you had a perfectly accurate physical model (hand position, etc) and perfectly accurate information about the wind, then you could probably accomplish it, but with even ε inaccuracy I think it would again be impossible.

If the underlying observation/feedback system is unstable, then insight into a forcing parameter doesn't give us a stable equilibrium. I think what saves the economic system is that our "inflation headwinds" do correlate with the output gap, so they're an imperfect insight into the yet-to-be-observed Y(t).
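A numerical sketch of the meter-stick point (the unstable coefficient, lag length, and 1% model error are all invented): with a perfect model, a controller watching a delayed observation can still balance the unstable system, but even a small model error, compounded over the lag, makes it fall over.

```python
import numpy as np

def balance(model_err, delay=20, a=1.3, steps=80):
    """Unstable 'stick': x(t+1) = a*x(t) + u(t) with a > 1.  The controller
    sees x only with a lag, so it propagates the stale reading forward
    through its model a_hat = a*(1 + model_err) before acting."""
    a_hat = a * (1 + model_err)
    x = np.zeros(steps)
    u = np.zeros(steps)
    x[0] = 0.01                        # a tiny initial tilt
    for t in range(steps - 1):
        s = t - delay
        if s >= 0:
            # nowcast: push x[s] forward, re-adding the controller's own
            # remembered inputs along the way
            pred = a_hat ** delay * x[s] + sum(
                a_hat ** (delay - 1 - i) * u[s + i] for i in range(delay))
            u[t] = -a_hat * pred       # aim to cancel next period's tilt
        x[t + 1] = a * x[t] + u[t]
    return np.abs(x[-delay:]).max()    # worst tilt over the final stretch

assert balance(model_err=0.0) < 1e-6   # perfect model: balanced despite lag
assert balance(model_err=0.01) > 1.0   # 1% model error: it falls over
```

The ε-inaccuracy point drops out directly: the model error gets amplified by roughly a^delay before the controller can see its mistake.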

Majro: "As a physical analogy, balancing a meter stick end-on in your hand is fairly easy. Doing so when you can only see the stick with a 30-second time delay is probably impossible."

Unfortunately, that physical analogy is much too close for comfort to a central bank trying to stabilise inflation using a nominal interest rate. (Which is why I would prefer a different way of doing monetary policy.) The only thing that makes that analogy slightly wrong is that there are leads as well as lags, because what the stick does now also depends on what the stick expects the person holding the stick will do in future.

Nick, Can you explain what you mean by "indeterminacy problem"? I've seen that term used in several different ways. I am familiar with an alleged "circularity problem" with using market forecasts to guide monetary policy; for instance, Bernanke and Woodford (1997, JMCB).

The circularity problem occurs when the central bank relies on market forecasts of the goal variable. If the policy is fully credible, the market will always forecast on-target inflation, or NGDP. But that provides no guidance as to where the central bank should set the policy instrument. It would not be rational for the market to forecast above or below target inflation, as they'd know that the central bank would react to that forecast and adjust policy, moving expectations back on target.

For this reason, you want the market forecast of the "wind", not the position where the bullet strikes the target. In monetary terms, you want a forecast of something like velocity (if the base is the policy instrument) not future NGDP. But the velocity you want is not NGDP/M, it's future NGDP over current M. And then you want the market forecast of that specific velocity measure, assuming policy is set to create on target NGDP.
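Illustrative arithmetic (every number here is hypothetical): given a market forecast of that specific velocity, future NGDP over current M, conditional on policy being on target, the instrument setting drops out by division.

```python
# Every number here is hypothetical, purely for illustration.
ngdp_target_next = 21.0e12   # on-target NGDP one period ahead, in dollars
velocity_forecast = 8.4      # market forecast of future NGDP / *current* M,
                             # conditional on policy being on target

# Solving v = future NGDP / current M for M gives the instrument setting:
base = ngdp_target_next / velocity_forecast
assert abs(base - 2.5e12) < 1.0   # set the base to $2.5 trillion

# Note this is NOT the conventional velocity NGDP/M: using current NGDP
# over current M would tell the bank where it has been, not where to aim.
```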

Scott: glad you showed up. This is right up your street, and closely related to your previous posts.

I *think* that "indeterminacy problem" and "circularity problem" are just two different names for the same thing. But I think "indeterminacy problem" is a better name for my purposes in this post, because I am deliberately ducking the whole question of whether the market participants will get lazy and just keep on predicting 2% (or whatever the target is). My "spotter" always has an independent forecast of where the shot will hit or miss, regardless of his beliefs about what the shooter is trying to do.

"For this reason, you want the market forecast of the "wind", not the position where the bullet strikes the target."

In this post, that's where I disagree. A spotter's forecast of where the bullet will hit is arbitrarily close to being as good as that same spotter's forecast of the wind.

Nick, I think you are right, but am not certain. What I was trying to say is that if Bernanke and Woodford were right about the circularity problem, then you'd want a forecast of velocity, or alternatively you'd want a forecast of the instrument setting consistent with on-target NGDP. I seem to recall that John Cochrane and Greg Mankiw didn't think the circularity problem was an actual barrier to a straightforward policy of relying on market forecasts of the goal variable, but I'm not certain, as it's been a long time since I read their comments.

One possible problem with carrying this over to monetary policy is that GDP is only reported every 3 months (in the US), so frequent adjustments based on misses are not easy to make.

As an aside, my current proposal for using market forecasts does not involve any circularity problem, rather I favor the Fed taking the long position on 3% NGDP growth contracts and the short position of 5% NGDP growth contracts. The Fed then promises to take those positions against any trader who thinks NGDP growth will be below 3% or above 5%. I call this the "guardrails approach", although another metaphor is the beeping noise a truck makes backing up, when it's about to hit something. A lot of trades at one of the two guardrails is a warning to the central bank. Imagine the trades occurring in October 2008, presumably mostly "shorts".

In this regime, the width of the guardrails is the amount of discretion. Right now the width is infinite, which is too much discretion. A width of 0% (i.e. the Fed buys and sells unlimited NGDP contracts right at 4%) gives the Fed very little discretion, unless it wants to absorb a large amount of risk. If the Fed chooses to avoid risk, it sets the policy instrument at a position where the long and short trades are roughly balanced. In that regime, the market is implicitly forecasting the instrument setting that leads to 4% NGDP growth. (And implicitly forecasting the "wind" as well.)
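A sketch of the guardrails payoffs (the contract terms here, a linear payoff in the growth miss, are my assumption for illustration, not part of the proposal as stated):

```python
def fed_pnl(actual_growth, shorts_at_3=0, longs_at_5=0, notional=1.0):
    """Hypothetical 'guardrails' book: the Fed is LONG at the 3% strike
    against any trader who thinks NGDP growth will come in below 3%, and
    SHORT at the 5% strike against any trader who thinks it will come in
    above 5%.  Assumed payoff: (actual - strike) * notional per contract."""
    pnl_3 = shorts_at_3 * (actual_growth - 3.0) * notional  # Fed long at 3%
    pnl_5 = longs_at_5 * (5.0 - actual_growth) * notional   # Fed short at 5%
    return pnl_3 + pnl_5

# Growth inside the guardrails: the Fed wins both bets, traders lose.
assert fed_pnl(4.0, shorts_at_3=10, longs_at_5=10) == 20.0
# An October-2008-style crash: a wall of shorts at the 3% rail costs the
# Fed money, which is exactly the "beeping truck" warning to ease policy.
assert fed_pnl(-3.0, shorts_at_3=100, longs_at_5=0) == -600.0
```

The volume of trades at either rail, not the price, is the signal here: heavy trading at one guardrail means the market thinks the current instrument setting will miss.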

Scott: "What I was trying to say is that if Bernanke and Woodford were right about the circularity problem, then you'd want a forecast of velocity, or alternatively you'd want a forecast of the instrument setting consistent with on-target NGDP."

I agree, *if* they were right. But I think they are wrong (up to an arbitrarily small amount of rightness).

"One possible problem with carrying this over to monetary policy is that GDP is only reported every 3 months (in the US), so frequent adjustments based on misses are not easy to make."

True, but if the prediction market is open daily, or hourly, and we condition policy on the prediction market, I don't think that's a big problem. (We might need to use a weighted average of the 4 and 5 quarter ahead prediction markets however.)
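One way to build that weighted average (the interpolation rule and quarter length are assumptions for illustration): shift weight from the 4-quarter-ahead contract to the 5-quarter-ahead contract as the day moves through the quarter, so the blended forecast always looks a fixed 12 months ahead.

```python
def blended_forecast(f4q, f5q, days_into_quarter, quarter_len=91):
    """Linear interpolation between the 4- and 5-quarter-ahead contract
    prices, keeping the effective horizon fixed at 12 months as the
    4-quarter contract's horizon shrinks day by day (assumed rule)."""
    w = days_into_quarter / quarter_len  # weight shifts toward the 5q contract
    return (1 - w) * f4q + w * f5q

# Day 0 of the quarter: the 4-quarter contract IS the 12-month horizon.
assert blended_forecast(4.2, 4.6, days_into_quarter=0) == 4.2
# Mid-quarter: roughly halfway between the two contract prices.
assert abs(blended_forecast(4.2, 4.6, days_into_quarter=45.5) - 4.4) < 1e-9
```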

"As an aside, my current proposal for using market forecasts does not involve any circularity problem, rather I favor the Fed taking the long position on 3% NGDP growth contracts and the short position of 5% NGDP growth contracts."

Interesting. I must have missed your post where you discussed that. I *think* that fits in with my post here, if we just imagine the Fed making the width arbitrarily close to zero. Not sure though.

Nick, In earlier papers I also talked about using a weighted average of two consecutive quarters, to get a point estimate for NGDP on a given day.

I sort of stopped thinking about the circularity problem once I decided on a proposal where it didn't even apply. My hunch is that you are right, because I can't imagine a scenario where the circularity problem would push the market price away from the rational expectations forecast; there would be $100 bills on the ground.

BTW, did you see this post:

https://www.econlib.org/the-wisdom-of-nick-rowe/

I called the blind target shooter the Canadian Interpretation.

He discussed it with respect to Milton Friedman's thermostat problem. That blog post set me off in the direction of looking at market uncertainty driving pricing. The target shooter cannot shoot until his aim is better than the inherent uncertainty. It is the parallel to Planck uncertainty in physics.

Scott: "Nick, In earlier papers I also talked about using a weighted average of two consecutive quarters, to get a point estimate for NGDP on a given day."

Ah, you were ahead of me on that.

" My hunch is that you are right, because I can't imagine a scenario where the circularity problem would push the market price away from the rational expectations forecast; there would be $100 bills on the ground."

It seems very similar to the efficient market paradox, or $100 bills paradox. If nobody ever bothers to pick them up, or do research, because they think markets are already 100% efficient, then there will be genuine $100 bills on the sidewalk. But if we add a small amount of "noise trading" (people accidentally dropping $100 bills and not noticing), and a spectrum of people having different costs of research (picking them up), we can maybe get an equilibrium which is close to what a simple EMH model would predict.

I saw your post on me. Thanks for that Scott! I feel a little bit of a fraud, because the "secret" of my success (such as it is) is that I've found one thing I can do relatively well, and have (mostly) stuck to doing it, but there are a whole load of things (in economics and outside) that are really good and useful things (like history, and empirical/econometric work) that I can't do very well at all. I used to feel guilty about that, till I remembered Ricardo, and that lots of others enjoy doing those things and are good at them.

Matt: I tried to read the Wiki on Planck's constant, but I'm afraid it went way over my head.

@Matthew Young:

> The target shooter cannot shoot until his aim is better than inherent uncertainty.

No, this is not the case. A learning process such as a Kalman Filter will still help correct things. It will just have to be quite conservative about its update, which has obvious implications for how much the error can be reduced.
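A minimal 1-D Kalman filter illustrating the point (all noise variances invented): even when each individual reading is far noisier than the signal's inherent drift, a conservative gain still cuts the error substantially relative to trusting the raw readings.

```python
import numpy as np

rng = np.random.default_rng(3)
steps, q, r = 2000, 0.05, 4.0    # process noise var q << measurement var r

wind = 0.0                        # the true, slowly drifting wind
est, p = 0.0, 1.0                 # filter's estimate and its variance
err_filtered = err_raw = 0.0
for _ in range(steps):
    wind += rng.normal(scale=q ** 0.5)        # slow random-walk drift
    z = wind + rng.normal(scale=r ** 0.5)     # very noisy reading
    # Kalman update: predict, then correct with a conservative gain k < 1
    p += q
    k = p / (p + r)
    est += k * (z - est)
    p *= 1 - k
    err_filtered += (est - wind) ** 2
    err_raw += (z - wind) ** 2

# The conservative update still shrinks the error a lot relative to
# trusting each raw reading: the aim improves even though every single
# reading is worse than the inherent uncertainty.
assert err_filtered < 0.5 * err_raw
```

The gain k settles well below 1, which is exactly the "conservative about its update" behaviour, and it quantifies how much the error can and cannot be reduced.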

@Nick Rowe:

> Unfortunately, that physical analogy is much too close for comfort to a central bank trying to stabilise inflation using a nominal interest rate.

That's in part why I picked the analogy, since the feedback is an important missing piece. A target on a range doesn't move when the shooter misses the bullseye.

> But if we add a small amount of "noise trading" (people accidentally dropping $100 bills and not noticing), and a spectrum of people having different costs of research (picking them up), we can maybe get an equilibrium which is close to what a simple EMH model would predict.

Once we add a time-delay, we have a full range of information we can ask the market. We know for certain what Y(t-2yrs) was and the EMH predicts that Y(t) == Y_target, but we can ask the market what Y(t-1month) was. That still provides very useful information to the central bank, but since this value has been fixed [the shot's already been taken] there is no further mixing with the target.

This is also an in-the-limit/at-the-limit thing. Y(t)->Y_target depends on the central bank's willingness and ability to do "whatever it takes" when it observes a departure, but the central bank has a finite amount of credibility and policy room. In the non-idealized case, we get a weird PDF with high kurtosis, which means we expect the CB to correct a small departure from its target but not necessarily a large departure. Or to use a driving metaphor, there are more people stopped on the 401 (at 0 km/h) in clear traffic than there are traveling 20 km/h, since a car problem severe enough to impair a driver's speed is more likely to result in the former.
