
Comments


Exactly! To me the LT part of NGDPLT has always been much more convincing than the GDP part. Otherwise expectations are obviously all over the place and forward guidance untrustworthy and unreliable (unless it hints at a preference for a particular future level). To me the question is how can this fail to be a well established majority consensus amongst economists and how has it not already been implemented a long time ago by central banks?

Benoit: thanks! Yep, I set aside the NGDP level target vs price level target for this post, because I wanted to concentrate on the Level Target bit.

As far as I can tell, lags are under-modelled in macro theory. Central bankers know about lags, and know that uncertainty and forecasting only matter because of lags. (Any fool driver can keep the speedometer at 100 if there are no lags.) But they tend to get ignored.

Inflation results are not reliable except, a bit, ex post, so they suffer the Milton Friedman thermostat problem: high uncertainty, or inaccuracy. That is why we see expectations always appearing, because of the lags. A price-level target lets the market sort out inflation from growth. Inflation is a bad instrument; it is kind of meaningless. Price variation is better.

> Would your answer change if Rip van Winkle fell asleep for only 10 years, or 1 year, or less? Why?

What if Rip sleeps for 700 years? Would we expect then the price-level world would have an instantaneous 1,000-fold increase in prices alongside a steady 1% deflation thereafter?

Unfortunately, I don't think we can draw good conclusions from this example without so much as a token model of non-equilibrium dynamics in the macroeconomy. You of course accurately show that the price-level target gives a future nominal anchor and hence provides for a single equilibrium path, but for that to be a relevant conclusion we also need to show that it's an attracting equilibrium.

Majro: "Unfortunately, I don't think we can draw good conclusions from this example without so much as a token model of non-equilibrium dynamics in the macroeconomy."

I *think* I'm right in saying: if you assume a Calvo Phillips Curve (very standard in simple NK models), then for any expectation about P(70) you can solve for a time-path that leads to it. I was playing around with other Phillips Curve models, and I think you get the same result.

"What if Rip sleeps for 700 years? Would we expect then the price-level world would have an instantaneous 1,000-fold increase in prices alongside a steady 1% deflation thereafter?"

Yep, if prices are perfectly flexible (with the AD equation from a simple NK model).
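A back-of-the-envelope sketch of where the "1,000-fold" figure comes from (my arithmetic, using the continuous-compounding approximation): if hitting the level target 700 years out requires a steady 1% per year deflation, the price level must first jump by a factor of roughly e^(0.01 × 700).

```python
import math

def initial_jump(deflation_rate, years):
    """Factor by which prices jump today so that deflating at
    `deflation_rate` per year lands on the level target `years` from now."""
    return math.exp(deflation_rate * years)

print(initial_jump(0.01, 700))  # roughly 1096, i.e. about a 1,000-fold increase
```
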

Nick,

"But central banks do not and cannot respond immediately to shocks that hit the economy."

Gotta disagree on the "cannot respond immediately" part of this.
See below on information transmission limitations vs. computational time.
The limitation is not on the central bank, the limitation is on people being made aware of central bank policy changes.

Should central banks respond to every shock that hits an economy - I would say no.

Will the actions of central banks be able to limit the damage done by every negative shock an economy could possibly encounter - I would say no again.

Can a central bank respond to any shock of their choosing in any way they see fit?
Physically, there is nothing to stop them. They could buy a computer, hire a programmer, and have the computer adjust interest rates any way they see fit.
The only constraints are that they serve limited terms, can be hired / fired, and that their stated policy takes time to disseminate through an economy.

"Would your answer change if Rip van Winkle fell asleep for only 10 years, or 1 year, or less? Why?"

First, my answer (inflation rate targeting vs. price level targeting): it really doesn't matter a whole lot once modern technology is considered. Modern computer processors operate in the 3-4 GHz range, and so a computer-controlled central bank could make interest rate changes much more quickly than prices can change across an entire economy / world. In fact, the transmission of information is a limiting factor.

Speed of light = 186282 miles per second
Distance from New York to Tokyo = 6737 miles
Minimum time for information to travel (New York to Tokyo) = 6737/186282 = 0.036 seconds

An interest rate change can be made in
1 / 4 Gigahertz = 1 / (4 × 10^9 Hz) = 2.5 × 10^-10 seconds

And so the central bank could make about 145 million interest rate changes before the first one is reported in Tokyo.
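A quick check of the arithmetic above, under the same assumptions (4 GHz clock, straight-line New York-to-Tokyo signal at light speed):

```python
SPEED_OF_LIGHT_MPS = 186282   # miles per second
NY_TO_TOKYO_MILES = 6737      # straight-line distance used above
CLOCK_HZ = 4e9                # 4 GHz processor

propagation_s = NY_TO_TOKYO_MILES / SPEED_OF_LIGHT_MPS   # ~0.036 s
cycle_s = 1 / CLOCK_HZ                                   # 2.5e-10 s
changes_before_tokyo_hears = propagation_s / cycle_s
print(f"{changes_before_tokyo_hears:.3g}")  # ~1.45e8, i.e. about 145 million
```
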

> then for any expectation about P(70) you can solve for a time-path that leads to it.

But that pushes the dynamics problem one step further back, to how expectations are formed. Since you know P(70) with certainty, you're implicitly assuming that P(69) is known with certainty, and so on by induction back to the present day to give us the instantaneous price jump.

Consider instead the nonzero-noise variation, where we have both a one-time change in the real interest rate and small changes each period. The magnification implied by the distant nominal anchor would cause a considerable variance in the expected near-term path of prices, even if on average we still expect deflation. In such an environment, I am not sure that the "double prices, 1% deflation" equilibrium is learnable.

This may also come down to Rip's credibility and what people expect Rip to do if he were to wake up to price level wildly different from his target. Is there still an equilibrium if people expect with 95% certainty that Rip will maintain the price-level target and with 5% certainty that Rip will adopt the prevailing price level as a new target?

Majro: "But that pushes the dynamics problem one step further back, to how expectations are formed. Since you know P(70) with certainty, you're implicitly assuming that P(69) is known with certainty, and so on by induction back to the present day to give us the instantaneous price jump."

Yep. That's how economists (and game theorists) usually do it. But yes, "learnability" matters. But learning when you know the end-point is a lot easier than when you don't.

"This may also come down to Rip's credibility and what people expect Rip to do if he were to wake up to price level wildly different from his target. Is there still an equilibrium if people expect with 95% certainty that Rip will maintain the price-level target and with 5% certainty that Rip will adopt the prevailing price level as a new target?"

I *think* there is. Unless the AD function is *very* non-linear (in a particular direction).

Frank: "And so the central bank could make about 145 million interest rate changes before the first one is reported in Tokyo."

They could, but that does not address Nick's original statement. In this hypothetical scenario, what is the central bank responding to? That is to say, what type of economic observation are they using as input that can deliver samples at a rate of 4GHz ?

Nick's original statement: "But central banks do not and cannot respond immediately to shocks that hit the economy."

The time value that matters is the loop delay. Start from the announcement of a step change in central bank interest rates and call that time zero. From there the information propagates out into the economy; people react and change their behaviour; then gradually statistics are collected, perhaps wages change, perhaps some people lose their jobs and other people get new jobs, and around it goes until the effect of that step change can be seen by the central bank. At that stage the bank can evaluate what happened with the first interest rate change, and process that data to decide what to do with the next step. That's the full loop.

Problem is that in a large economy there are many paths through the loop, some response might be seen in only a few months, other effects might take years.

The story is complicated if people doubt that the CB will have the gumption to actually carry out the policy Rip announced before dozing off. While I agree that a PL target or an NGDP target would be better than an inflation target it does not really address the political resistance (it's not just lagged adjustment to shocks!) that has caused the Fed to undershoot its supposed inflation target for a 'coon's age.

Tel,

"That is to say, what type of economic observation are they using as input that can deliver samples at a rate of 4GHz"

Remember the central bank is one entity while the markets are made of literally millions / billions of entities.
The sampling rate on a single market participant can be much smaller when we go from serial communication to parallel communication.
Serial communication at 4 GHz is equivalent to million-bit parallel communication at 4 kHz.
So the central bank gets its price level information from a million market participants at 4 kHz each instead of one market participant at 4 GHz.
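The serial-vs-parallel point is just a division (the participant count is Frank's example figure): the same aggregate sampling bandwidth can come from many slow channels instead of one fast one.

```python
TOTAL_RATE_HZ = 4e9        # one participant sampled at 4 GHz
PARTICIPANTS = 1_000_000   # a million market participants

per_participant_hz = TOTAL_RATE_HZ / PARTICIPANTS
print(per_participant_hz)  # 4000.0, i.e. 4 kHz each
```
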

"The time value that matters is the loop delay."

When we are comparing proportional to derivative control, there are two time values to consider. The time it takes the central bank to arrive at an interest rate decision (dt) (In Nick's example this can be quite large - 70 years) and the time it takes the interest rate decision to permeate through an economy (dT). Nick doesn't specify how long after Rip Van Winkle makes an interest rate change that members of an economy hear about it.

Example A: If our Van Winkle is making interest rate decisions for a planet in the Alpha Centauri system (4.37 light years away) then decision delay is much larger than propagation delay.
Example B: If Van Winkle is instead making interest rate decisions for a planet in the Andromeda Galaxy (2.54 million light years away) then decision delay is much less than propagation delay.

i(t+dt) = Interest Rate set by central bank after a decision delay of dt
r(t+dt+dT) = Interest Rate seen by market after a decision delay of dt and a propagation delay of dT

For proportional control (Price Level Targeting) we are only concerned with the total delay (central bank decision delay plus propagation delay).
r(t+dt+dT) = Kp * p(t)

For derivative control (Inflation Rate Targeting) we are concerned with both the total delay (central bank decision delay plus propagation delay) AND the propagation delay by itself.

r(t+dt+dT) = Kd * [ ( p(t) - p(t-dT) ) / dT ]

Notice that when calculating the amount of derivative control, we look at the difference in price level from p(t) to p(t-dT), NOT from p(t) to p(t-dt-dT).
We want to measure the effects of the last interest rate change, from time t-dT to time t, when picking a new r(t+dt+dT), and not include price level changes that occurred while an interest rate decision was being reached, from time t-dt-dT to time t-dT.

If the decision delay is significantly less than the propagation delay (see example B), then inflation rate targeting can work well.
If the decision delay is significantly larger than the propagation delay (see example A), then inflation rate targeting will miss a lot of price level changes and you are left with using price level targeting.
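A minimal sketch of the two rules as written above, in discrete time steps (the function names, gains, and toy price series are mine, not Frank's). Note that the decision delay dt only shifts *when* the rate reaches the market, at t+dt+dT; it never enters the formula's inputs.

```python
def proportional_rate(prices, t, dT, Kp=0.5):
    """Price-level targeting: the rate the market sees at t+dt+dT
    depends only on the level p(t)."""
    return Kp * prices[t]

def derivative_rate(prices, t, dT, Kd=0.5):
    """Inflation targeting: the rate the market sees at t+dt+dT depends
    on the price change over the propagation delay, (p(t) - p(t-dT)) / dT."""
    return Kd * (prices[t] - prices[t - dT]) / dT

prices = list(range(100, 120))          # toy price history, one entry per period
r_p = proportional_rate(prices, t=10, dT=2)   # 0.5 * 110 = 55.0
r_d = derivative_rate(prices, t=10, dT=2)     # 0.5 * (110 - 108) / 2 = 0.5
```
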

Tel,

P. S.

And so to answer Nick's question:
"Would your answer change if Rip van Winkle fell asleep for only 10 years, or 1 year, or less? Why?"

My answer is dependent on how Rip van Winkle's decision delay (10 years, 1 year, less) compares to the propagation delay he will see getting his interest rate decision out.

If Rip Van Winkle takes 10 years (decision delay) to make an interest rate decision, but that decision takes about 2.5 million years (propagation delay) to reach the markets, then inflation rate targeting works fine.

If instead Rip Van Winkle takes 10 years (decision delay) to make an interest rate decision, but that decision only takes a month (propagation delay) to reach the markets, then he will be left with targeting the price level only.

"So the central bank gets its price level information from a million market participants at 4 kHz each instead of one market participant at 4 GHz."

I'm not convinced... do you personally generate new economic decisions at a rate of 4kHz?

I probably go to the supermarket an average of once every two or three days, and perhaps buy between 10 and 20 items (ballpark figures); that would be a sample rate on the order of perhaps 100 microhertz. Admittedly I'm not a hard core shopper, but you see what I'm getting at.
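The ballpark, using midpoints of the ranges above (my choice of midpoints):

```python
items_per_trip = 15          # midpoint of the 10-20 range
days_between_trips = 2.5     # midpoint of "two or three days"
seconds = days_between_trips * 24 * 3600

rate_hz = items_per_trip / seconds
print(rate_hz)  # ~7e-5 Hz, i.e. on the order of 100 microhertz
```
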

What's more, is it likely that the central bank would change interest rates based on a change in my shopping profile? Indeed, how long would be the absolute minimum required to detect a statistically significant change in my shopping profile? Having 6 months data from me will only give you at most 2000 sample points (probably fewer) and those would contain a bit of random noise at least. Given that you have a whole nation of people, yes there's more data points, but also there's more noise and since the Central Bank can only operate on overall statistical profiling I would argue that you need a lot of inertia just in the basic observation process before you have anything that could seriously be called a macro-economic signal.

That's the thing about macro, it's big!

"When we are comparing proportional to derivative control, there are two time values to consider. The time it takes the central bank to arrive at an interest rate decision (dt) (In Nick's example this can be quite large - 70 years) and the time it takes the interest rate decision to permeate through an economy (dT). Nick doesn't specify how long after Rip Van Winkle makes an interest rate change that members of an economy hear about it."
There's also the delay required to collect sufficient data from the system that you are confident you have a meaningful measurement (as outlined above).

However, I still maintain that it's the loop delay as a whole which is the important parameter (i.e. the sum of all the delays, observation, decision making, and propagation, and anything else you can think of). Here's my proof. Suppose the decision making delay was improved somehow and got three days shorter, but simultaneously the propagation delay got three days longer... how would the system as a whole react any differently? And why would it react differently?

FWIW I strongly suspect that with delays on the million year timescale, there's no possible Central Bank target which would work... the market would have plenty of time to route around it, and would probably have no choice but to do so.

Tel,

"I'm not convinced... do you personally generate new economic decisions at a rate of 4kHz?"

Okay, the population of the United States is over 300 million people.
In Canada the population is 36 million. In China the population is 1.4 billion.
The population of some economy in deep space could be anything, 5 billion, 1 trillion, etc.

I was only using a million as an example to demonstrate that the sampling rate on an individual market participant can be significantly less than the sampling rate the central bank uses for the whole economy. Meaning, even if the central bank is sampling at 4GHz, this doesn't mean that each market participant must deliver data to the central bank at the same 4GHz rate.

The 4GHz limitation isn't important for sampling.
It is important for computational time.
Once the central bank receives its price data from the market, how quickly can it make an interest rate decision?
In Nick's example, this computational time (Van Winkle falls asleep / thinks about the next interest rate decision) is set to 70 years.


"What's more, is it likely that the central bank would change interest rates based on a change in my shopping profile?"
I think we are talking about what is possible with a central bank, not what the central bank should do or is likely to do.
Can the central bank change interest rates based on a change in your shopping profile - yes. Should that be the only data point that they use - probably not.

"There's also the delay required to collect sufficient data from the system that you are confident you have a meaningful measurement (as outlined above)."
Agreed.

"However, I still maintain that it's the loop delay as a whole which is the important parameter."
Loop delay (as a whole) is the only parameter to be concerned about with proportional control.
With derivative control, the relative sizes of the individual delays become important as well.

"FWIW I strongly suspect that with delays on the million year timescale, there's no possible Central Bank target which would work."
Probably, but I was trying to stick with Nick's model.
Is there a Central Bank target that would work if it makes interest rate decisions once every 70 years?

Tel,

I will give an example to try to make it clear.
dt = 5 seconds : This is the amount of time that it takes the central bank to reach an interest rate decision after it receives a price signal p(t)
dT = .1 seconds : This is the amount of time that it takes for the markets to receive an interest rate r(t) sent by the central bank

For proportional control - Interest rate decisions are received by the markets every 5.1 seconds with the intent to drive the price level.
i(t+5) = Kp * p(t) - Interest rate set by central bank
r(t+5.1) = Kp * p(t) - Interest rate seen by markets

For derivative control - Interest rate decisions are received by the markets every 5.1 seconds with the intent to drive the inflation rate.

Possibility #1
i(t+5) = Kd * [ ( p(t) - p(t - 0.1) ) / 0.1 ]
r(t+5.1) = Kd * [ ( p(t) - p(t - 0.1) ) / 0.1 ]

Here, the central bank is using price data at time t and time t - 0.1. It does not use price data from time t - 0.1 to time t - 5.1.
It should be apparent that the central bank is missing a lot of data when it is doing derivative control in this way ( dt >> dT ).

Possibility #2 - If instead the central bank uses the entire delay from time t to time t - 5.1 seconds for derivative control:

i(t+5) = Kd * [ [ p(t) - p(t - 5.1) ] / 5.1 ]
r(t+5.1) = Kd * [ [ p(t) - p(t - 5.1) ] / 5.1 ]

The central bank would be incorporating price level data p(t) newer than the last interest rate received by the markets and p(t-5.1) older than the last interest rate decision received by the markets. To do proper derivative control (and only derivative control) you need two inputs that are newer than the last controller output.

That is why long computational times are killers for derivative control - you either miss a lot of data or you incorporate a lot of old data.

Possibility #3 - If instead dt << dT (dt = 0.1, dT = 5) then for derivative control:
i(t+0.1) = Kd * [ ( p(t) - p(t - 5) ) / 5 ]
r(t+5.1) = Kd * [ ( p(t) - p(t - 5) ) / 5 ]

Notice there is significantly less missing data in the central bank computation (just the data from time t-5.1 to t-5) - better than Possibility #1.

Notice also that the central bank is incorporating two points of price level data ( p(t) and p(t-5) ) that are both newer than the last interest rate decision received by the markets. - Better than Possibility #2.
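A sketch of the bookkeeping behind the possibilities above (the framing is mine): a derivative rule of the form ( p(t) - p(t - dT) ) / dT samples a window of length dT out of each full loop of length dt + dT, so the slice of history it never uses per loop works out to dt, the decision delay.

```python
def loop_delay(dt, dT):
    """Time between successive rates reaching the market."""
    return dt + dT

def missed_history(dt, dT):
    """History per loop that a ( p(t) - p(t - dT) ) / dT rule never sees:
    the full loop minus the derivative window, i.e. (dt + dT) - dT = dt."""
    return loop_delay(dt, dT) - dT

print(missed_history(dt=5.0, dT=0.1))  # Possibility #1: 5.0 s of data missed
print(missed_history(dt=0.1, dT=5.0))  # Possibility #3: only ~0.1 s missed
```
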
