I know I'm right in saying that Milton Friedman's thermostat is an important idea that all economists ought to be aware of. And I'm pretty sure I'm right in asserting that almost all economists are unaware of this important idea. Am I wrong? Are you aware of this idea? Maybe under some other name??
Google seems to tell me I'm right. I'm the first link, which is really pathetic for such an important idea; the second is Friedman himself (pdf); and most of the rest on the first page are other bloggers, mostly Market Monetarists. But this idea has got nothing (in particular) to do with Monetarism. Compare that to what Google comes up with for another of Milton Friedman's important ideas. Scholarly articles, and its own Wikipedia page.
And every few days I come across an economist saying something that he would not have said if he were aware of Milton Friedman's thermostat. Today it was Casey Mulligan, but the class "economists who seem to be unaware of Milton Friedman's thermostat because they say something they would not say if they were aware of it" seems to me to cover lots of very different economists.
This really bugs me.
Milton Friedman's thermostat has got nothing to do with Monetarism, or even macroeconomics. Or rather, Milton Friedman's thermostat is an idea that has very broad application, and has nothing in particular to do with Monetarism or even macroeconomics. Or even economics.
If I had to categorise where this idea belongs, I would say it is an idea that belongs to applied econometrics. But, as far as I can see, applied econometricians are as unaware of this idea as any other economists, even though they have the greatest need to be aware of it.
And it's not even original to Milton Friedman. The first economics article I can find laying out the basic idea was by an Old Keynesian, Maurice Peston (gated), who applied it to fiscal policy, not monetary policy. Like most important ideas, it has probably been independently invented several times, by someone who realised they needed to invent it. (I invented it myself, and only after I had invented it did I learn that other people had previously invented it too). But calling it "Milton Friedman's thermostat" is a good name for the idea, because it's a good metaphor, and Friedman is the one who came up with that metaphor.
And it's not even a very complicated idea. You can explain the gist of it using words and simple examples.
But it's a really really important idea. Both theoretically important, and practically important.
So why does such an important idea need to keep on being reinvented? Why are (almost all) economists unaware of this idea?
It's not as though Milton Friedman were some no-name economist that everybody ignored. Every economist is very aware of lots of Milton Friedman's other ideas. Those other ideas are taught to all economics students. Why not this idea?
Here's the idea:
Everybody knows that if you press down on the gas pedal the car goes faster, other things equal, right? And everybody knows that if a car is going uphill the car goes slower, other things equal, right?
But suppose you were someone who didn't know those two things. And you were a passenger in a car watching the driver trying to keep a constant speed on a hilly road. You would see the gas pedal going up and down. You would see the car going downhill and uphill. But if the driver were skilled, and the car powerful enough, you would see the speed stay constant.
So, if you were simply looking at this particular "data generating process", you could easily conclude: "Look! The position of the gas pedal has no effect on the speed!"; and "Look! Whether the car is going uphill or downhill has no effect on the speed!"; and "All you guys who think that gas pedals and hills affect speed are wrong!"
And no, you cannot get around this problem by doing a multivariate regression of speed on gas pedal and hill. That's because gas pedal and hill will be perfectly collinear. And no, you do not get around this problem simply by observing an unskilled driver who is unable to keep the speed perfectly constant. That's because what you are really estimating is the driver's forecast errors of the relationship between speed, gas, and hill, and not the true structural relationship between speed, gas, and hill. And it really bugs me that people who know a lot more econometrics than I do think that you can get around the problem this way, when you can't. And it bugs me even more that econometricians spend their time doing loads of really fancy stuff that I can't understand when so many of them don't seem to understand Milton Friedman's thermostat. Which they really need to understand.
If the driver is doing his job right, and correctly adjusting the gas pedal to the hills, you should find zero correlation between gas pedal and speed, and zero correlation between hills and speed. Any fluctuations in speed should be uncorrelated with anything the driver can see. They are the driver's forecast errors, because he can't see gusts of headwinds coming. And if you do find a correlation between gas pedal and speed, that correlation could go either way. A driver who over-estimates the power of his engine, or who under-estimates the effects of hills, will create a correlation between gas pedal and speed with the "wrong" sign. He presses the gas pedal down going uphill, but not enough, and the speed drops.
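The whole point can be checked in a few lines of simulation. This is only a sketch: the coefficients, the driver's rule, and the "wind" are made-up illustrative values, not estimates of anything.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
a, b = 2.0, 3.0                       # true effects of gas and hill (illustrative)

hill = rng.normal(size=n)             # slopes the driver can see and offset
wind = rng.normal(scale=0.1, size=n)  # gusts the driver cannot forecast

gas = (b / a) * hill                  # a skilled driver offsets every hill exactly
speed = a * gas - b * hill + wind     # gas genuinely drives the speed

print(np.corrcoef(gas, speed)[0, 1])   # ~ 0: "the gas pedal has no effect!"
print(np.corrcoef(hill, speed)[0, 1])  # ~ 0: "hills have no effect!"
print(np.corrcoef(gas, hill)[0, 1])    # ~ 1: gas and hill perfectly collinear
```

The only fluctuations left in the speed are the unforecastable gusts, exactly as the thermostat argument says; and the perfect collinearity between gas and hill is why the multivariate regression fails too.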
How could the passenger figure out if the gas pedal affected the speed of the car? Here are a few ideas:
1. Watch what happens on a really steep uphill bit of road. Watch what happens when the driver puts the pedal to the metal, and holds it there. Does the car slow down? If so, ironically, that confirms the theory that pressing down on the gas pedal causes the car to speed up! Because it means the driver knows he needs to press it down further to prevent the speed dropping, but can't. It's the exception that proves the rule. (Just in case it isn't obvious, that's a metaphor for the zero lower bound on nominal interest rates.)
2. Ask the driver. If the driver says that pressing the gas pedal down makes the car go faster, and if the driver says he wants to go at a constant 100kms/hr, and if you see the car going a roughly constant 100kms/hr, then you figure the driver is probably right. Even more so if you ask him to slow the car to 80kms/hr, and he says "OK", and then the car does slow to a roughly constant 80kms/hr. If the driver were wrong about the relation between gas pedal and speed, he wouldn't be able to do that, and it wouldn't happen, except by sheer fluke. (Just in case it isn't obvious, that's a metaphor for inflation targeting.)
3. Find a total idiot driver, who doesn't understand the relation between gas pedals and speed, and who makes random jabs at the gas pedal that you know for certain are uncorrelated to hills or anything else that might affect the car's speed, and then do a multivariate regression of speed on gas and hills. But you had better be damned sure you know those jabs at the gas pedal really are random, and uncorrelated with hills and stuff. Which means this can only work if you are certain that you know more about what is and is not a hill than the driver does. Or you are certain he's pressing the gas pedal according to the music playing on the radio. Or something that definitely isn't a hill. Are you really really sure your instrument isn't a hill, or correlated with hills? And if so, why doesn't the driver know this, and why does he jab at the gas pedal in time with that instrument? You had better have a very good answer to those questions. And no, Granger-Sims causality does not answer those questions, or even try to.
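A sketch of method 3, assuming (heroically) that we really do know the jabs at the gas pedal are random. The coefficients are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
a, b = 2.0, 3.0                       # true effects (made-up illustrative values)

hill = rng.normal(size=n)
gas = rng.normal(size=n)              # the idiot driver: random jabs, unrelated to hills
wind = rng.normal(scale=0.1, size=n)
speed = a * gas - b * hill + wind

# Multivariate regression of speed on gas and hill now works:
X = np.column_stack([gas, hill])
coef, *_ = np.linalg.lstsq(X, speed, rcond=None)
print(coef)   # ≈ [2.0, -3.0]: the structural effects reappear
```

The moment the driver stops responding to hills, the regression recovers the structure. Everything hangs on being sure the jabs really are uncorrelated with hills.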
Why is this idea so important for economists to be aware of? Because economists look at correlations in the data. And a lot of correlations in the data are created by someone looking at some first thing, and adjusting some second thing in response to the first thing, in order to control some third thing. That someone could be a government, or a central bank, or a firm, or a person. And the first, second, and third things could be almost anything. And if you are unaware of what Milton Friedman's thermostat tells you about the correlation between those three things, you will misunderstand the correlations between those three things. And you will make some bad mistakes about lots of things.
Milton Friedman's thermostat is an idea that is both important and right. Why are (almost all) economists unaware of this idea?
Here are some of my old posts, where I either explain the idea, or use it to explain why economists who are unaware of this idea are making mistakes, or use it to explain how to do it properly:
An application to critique econometric methods for identifying monetary shocks.
An application to critique tests of core inflation as an indicator. [Update, a much clearer version.]
An application to properly test core inflation as an indicator. And again, more simply.
I've probably missed some.
Nick,
Thanks for reminding us about the econometric consequences of Friedman's thermostat. I must say, however, that the reason why economists ignore it is that any sensor with a control system gives a terrible idea of what decision making in general and public policy in particular is about. At the individual level, there are some situations in which it may make sense to rely on sensors (with or without control systems) --the thermostat is an excellent example of such situations. Many (most, almost all?) of our individual and collective decisions assume knowledge well beyond what any sensor can handle, as well as a degree of discretion high enough to ignore it (do you think that one day robots will replace truck drivers, physicians, or economists?). In the 1960s many (macro)economists thought that their econometric models could provide all that knowledge, so policy making could be reduced to processing that knowledge and adjusting the instruments to achieve the optimal temperature politicians might have wanted. Thanks to Lucas and Nixon, we knew in the early 1970s that it was BS --you may check this by looking at the Ph.D. theses written in the 60s and 70s, but please don't remind me of what I said in my thesis. We have been running away from machines, have been dreaming about rules without understanding how to enforce them, and have settled for our discretion whenever we get power.
Posted by: E. Barandiaran | July 27, 2012 at 01:37 PM
Based on your driving metaphor, I would add a fourth method: find a place that has deceptive terrain, and observe what happens with drivers that aren't used to it. This is essentially the instrumental variables technique, I think. The deceptive features of the terrain are essentially observations of the errors in a locally inexperienced driver's estimate of the incline. For example, if you find a place with a hill that isn't visible until you're nearly on it, you should observe that locally inexperienced drivers slow down when they reach the hill, and this would be evidence that hills affect speed. Depending on the specific characteristics of the hill, you might also observe that their rate of deceleration declines after a certain point as they are going up the hill even if the incline remains constant. This would be evidence that the gas pedal affects speed.
Posted by: Andy Harless | July 27, 2012 at 01:59 PM
E. Barandiaran: You may well be right that our ability to control isn't as good as we like to think it is. But I think there is an aspect to that directly related to this econometric problem: in order to control a system we first try to estimate the structural equations. But the data we use for that estimation was itself generated by a process of people trying to control things. (Which includes, but is not limited to, our predecessors.) And if we are unaware of Milton Friedman's thermostat, and ignore that fact, we will estimate the structure wrong.
Andy. Good point. But I would think of that as a variant of my 3: find an idiot driver. Only in your example, the driver isn't really an idiot, just unfamiliar with new terrain. But again, you the econometrician must know for sure that you understand the terrain better than the driver. So you can identify his mistakes. That is a very big assumption.
Posted by: Nick Rowe | July 27, 2012 at 02:18 PM
Actually, you do what engineers do: try a small-signal analysis (i.e. a linear approximation: use a small, repetitive, deterministic stimulus) under various conditions, to see if you can create a consistent large-signal model. Also, it's interesting that economists study statistical dynamical systems with correlation techniques; the major discoveries of physics, i.e. thermodynamics and statistical mechanics, and information theory, never used those techniques.
Posted by: jt | July 27, 2012 at 02:54 PM
jt: you are talking about experiments? If so, no problem. "Hey just lend me your economies for a few years. I'm going to do monetary policy at random, just to see what happens. A sample of 100 economies for 100 years should be big enough for my experiment. Tough on the lab rats, but well."
Posted by: Nick Rowe | July 27, 2012 at 03:09 PM
If only this wonderful car, hill and gas pedal analogy contained at least SOME talk of the driver veering off the correct road when he presses the gas pedal too much or too little, because he has no idea where the 300 million car passengers individually want to go, nor when they individually want to get there.
Friedman's thermostat analogy is a good tool to grasp the basic truisms that correlation does not imply causation (pirate attacks and global warming), and that a lack of correlation does not imply a lack of causation all else equal (gas pedal and speed). Friedman's thermostat tool should not, indeed cannot, be used as a tool to understand purposeful human behavior, the subject matter of economics.
-------------------
There is another thermostat analogy, as it pertains to monetary policy, that is rather tongue in cheek.
This is the analogy where a central bank is like an engineer dealing with a sealed boiler, with the caveat that the engineer doesn't know why the temperature inside the boiler rises to make the boiler explode. He can't pinpoint the cause of it because he's ideologically predisposed to never blaming his own actions. In his mind, he can only ever err on the side of "not doing enough" to prevent the boiler from overheating. He considers the overheating an inexplicable phenomenon somehow inherent in the nature of the boiler itself. Maybe the boiler has too many water molecules possessed by an evil animal spirit, or maybe the boiler itself has imperfections.
So when the boiler starts to overheat inexplicably, the engineer believes he can solve the problem by locally moving the thermostat needle to a lower temperature. As long as the thermostat needle rests within a stable range, then supposedly the boiler's pressure can be reduced and "stabilized."
All the while the engineer is using a flamethrower to move the thermostat needle, which heats up the boiler from without...
Posted by: Major_Freedom | July 27, 2012 at 03:26 PM
But we aren't observing a car going at a constant speed. We're observing a car that is speeding up and slowing down all the time.
So you need to assume everything in your post, *and* that there is some optimal speed that is changing all the time, which we cannot observe but the driver can.
Also, can't we observe the mechanism that is supposed to link the gas pedal to the car's wheels? If a gauge near the gas tank shows that gas isn't flowing out any faster when the pedal is down, then we can be pretty sure it's not changing the speed of the car, no matter what the driver's intentions. I would argue that the lack of a strong short-term relationship between policy rates and long rates is the equivalent of that negative reading.
Posted by: JW Mason | July 27, 2012 at 03:59 PM
Why aren't more people talking about the thermostat? Well, the intellectual content is trivial, but the use of this approach is to deny the need to understand how things work. It primarily serves to cover up gaps in knowledge.
Military jets are designed by people with PhDs, built by people with bachelors degrees, and operated by people with high school diplomas. A thermostat is a complicated contraption that can break down for a variety of reasons and fail to work for a larger variety of reasons. To have a model in which you say "I don't care about studying how something works, I just know to press the red button when it is too cold and the blue button when it is too hot" is not an academic achievement, it's an excuse. It's dumbing things down to the level of the operator.
Friedman had a lot of these excuses.
But when things stop working, you need to bring back the designer, or at least the engineer, so that they can tell you "Well, a window is open, so it doesn't matter how many times you press the red button, nothing will happen". The response of the Friedmanite is "You just need to press the red button harder".
Posted by: rsj | July 27, 2012 at 04:01 PM
Cars don't have thermostats, though they do have cruise control.
Posted by: Patrick R. Sullivan | July 27, 2012 at 04:22 PM
Patrick: cars do too have thermostats (to control the flow of coolant through the radiator)! But I get your point. My driver is just a human cruise control.
rsj: "Friedman had a lot of these excuses."
It's doubly ironic you say that. If you read the first page of the Maurice Peston link, you will see this argument was first used, by Keynesians, against Friedman's empirical findings.
"But when things stop working, you need to bring back the designer, or at least the engineer,..."
Who are the designer and engineer who designed and built the economy? (Hayek had some good things to say on that subject, IIRC).
JW: "So you need to assume everything in your post, *and* that there is some optimal speed that is changing all the time, which we cannot observe but the driver can."
Hmm. I need to think about that case.
"Also, can't we observe the mechanism that is supposed to link the gas pedal to the car's wheels?"
Well, we try to. And that's why we build models. To see if our theory of how the gas pedal is related to speed is consistent with what's happening in the gas tank, so we have an additional test.
But if the car's engine were just another lot of control systems, run by a whole lot of individual firms and households, to whom the gas pedal is a hill.....Dunno.
MF: "Friedman's thermostat tool should not, indeed cannot, be used as a tool to understand purposeful human behavior, the subject matter of economics."
Oh yes it can, and must. Indeed, if there weren't purposeful human behaviour (or something that mimics it) controlling the gas pedal, and it was just going up and down at random, we wouldn't have to worry about this.
But yes, we cannot think of the government policymaker as being the only controller/planner. The whole economy is people controlling/planning, as you say. All stacked together.
Posted by: Nick Rowe | July 27, 2012 at 04:46 PM
Epidemiologists know about this one: they call it "confounding by indication". They all know, for example, that it explains why people taking blood-pressure pills are more likely to have heart attacks than people not taking blood-pressure pills, and they know that there are serious limits on what you can possibly learn from observational data, which is why randomized trials get done.
Observational data aren't quite useless, because there are genuine constraints on what information the doctors have and how often they change patients' prescriptions, but confounding by indication is a widely-understood problem.
My cynical view is that it's precisely because macroeconomists can't do randomized experiments that they have a strong incentive to believe that other ways of dealing with confounding are viable.
Posted by: Thomas | July 27, 2012 at 06:07 PM
I do see the econometric value of this perspective, but it does sound like it has dangerous potential to be used to "try to cover up gaps in knowledge" to defend a dubious theory.
Posted by: wh10 | July 27, 2012 at 06:21 PM
Nick,
I was referring to Friedman's dictum that it doesn't matter if the assumptions of a theory are plausible as long as it produces good results, because that, combined with this second dictum, says that we can't tell if the results are good.
In general, it would seem to me that given the difficulty in predicting outcomes, even more scrutiny should be placed on whether the foundations of model are good, in terms of describing how people/firms/institutions really behave. Getting the plumbing right is critical if you are hamstrung on the econometric front.
Posted by: rsj | July 27, 2012 at 06:26 PM
Great post. For example three I'd cite the lunatic interwar central banks: "Let's see what happens if we reduce the monetary base by 17% between 1920 and 1921." Or "let's see what happens if we target unemployment during the 1960s under the assumption that the Phillips curve is stable."
Patrick mentioned cruise control. I've already got a patent out on that--it's called NGDP futures targeting.
Posted by: Scott Sumner | July 27, 2012 at 07:49 PM
What exactly are you trying to critique, Nick? I have taken a control-systems course. All you have said is that an open-loop equation is not a closed-loop equation. Or in other words, that exogenous variables are not endogenous variables. This is easy to accommodate: just use a different model. If we use a closed-loop model we measure different parameters, like the rise time, settling time, and error. From this we can compute the input given the observed output. The observations are different, but not hard at all.
Milton Friedman didn't say anything truly profound, just that we need to use a slightly different model.
As long as we know there is some sort of linkage between the gas pedal and the observed speed, we can use a quick and easy model to see what is going on. So our driver is just a PID controller. Welcome to the 19th Century.
Posted by: Determinant | July 27, 2012 at 09:04 PM
Ignoring the whole message behind the thermostat idea, in the car and hill example you simply break down the car speed into horizontal and vertical components.
Posted by: Jason Collins | July 27, 2012 at 09:27 PM
Let's add Hayek's Blender:
There are billions of inter-related learners adjusting their choices to trillions of imperfect but inter-connected price signals -- a local individuality of judgment guaranteeing "sensitivity to initial conditions" -- and this complex, inter-related and ordered yet non-linear, chaotic system does not produce numerical constants or simple, linear, constant relations between 4 or 5 numerical variables, even for as short a time as 4 or 5 years.
It doesn't because of Hayek's Blender -- the fact that Hayek's knowledge problem is "solved" via billions of individual learners adapting their judgments using imperfectly understood and individually assessed changing local conditions and imperfectly understood and judged changing relative price signals.
Hayek's Blender shows Friedman's Thermostat to be a fantastical and scientistic dream of what economists wish the economic problem of macroeconomics was like, but which it never, ever can be -- Hayek's Blender means that Friedman's Thermostat is a cargo-cult mirage, an impossible construct as a model of what happens in the economy. In short, a fantasy.
Perhaps an instructive fantasy for some purposes, but one fraught with the likelihood of doing more harm than good by giving economists a deeply false picture of a simple 3 or 4 variable linear system with fixed constants and fixed laws (such as those of gravity, Newtonian physics, gasses and combustion) and with no sensitivity to initial conditions -- one which economists are supposed to mistake for the economic system.
It's a false picture of the economy -- in fact, an impossible one.
Posted by: Greg Ransom | July 27, 2012 at 09:41 PM
Nick, great post.
One possible answer for why economists are unaware of this: I think they talk about it in the confused and obscure terms of "structural" vs "reduced form" modeling. It's the basic debate: can you estimate equilibrium relationships without an equilibrium model? No, but how do you know what equilibrium model to use?
Posted by: YM | July 27, 2012 at 11:27 PM
This post is a bit of an embarrassment; most economists/econometricians are fully aware of this problem, it is known as the "problem of endogeneity."
Posted by: Dave | July 28, 2012 at 12:28 AM
We may not want to experiment with economies, but we can experiment with economists.
Record the inside temperature, outside temperature, and heat output of the furnace. Then give them the data. How many will be able to reverse engineer the PID control? Extra marks if they can state the transfer function.
Posted by: Patick | July 28, 2012 at 02:08 AM
I confess I did not finish the Mulligan piece, I cringed so much.
Mulligan: "Many empirical studies have confirmed this sort of result"
Ouch! Confirmatory evidence is weak, weak, weak! {Sigh}
There were statements that were even more cringe-worthy, but the "thermostat" is about confirmatory evidence.
Posted by: Min | July 28, 2012 at 02:51 AM
Nick Rowe: "Who are the designer and engineer who designed and built the economy?"
Oh, that's easy. It's the Invisible Man, the guy with the Invisible Hand.
;)
Posted by: Min | July 28, 2012 at 03:03 AM
In reality it is even worse than this. In reality, drivers aren't good. At the least, they have a reaction time, so there's a lag, and typically they will have a tendency to underestimate changes (extrapolation, hysteresis.) The result will be that the car goes slower when going up the hill, and faster down the hill. So when observing the gas pedal, the passenger will see a *negative* correlation between the gas pedal and the speed. This is especially true for steep hills.
Unfortunately misinterpreting a negative correlation where there really is a positive one due to this effect is widespread in the current economics discussion, including among (respected, but not by me) economics professors. I think you should not have to think very hard to find examples.
Posted by: Christiaan Hofman | July 28, 2012 at 06:20 AM
Dave: "This post is a bit of an embarrassment; most economists/econometricians are fully aware of this problem, it is known as the "problem of endogeneity.""
That's a bit like saying: "most economists/econometricians are fully aware of this problem, it is known as "the problem that sometimes stuff goes wrong in empirical work"."
Let's see how embarrassing it is for you to answer these questions:
Suppose you are Governor of the Bank of Canada. Your mandate for the last 20 years is to target 2% headline inflation at a 2 year forecast horizon. Suppose you have the overnight rate exactly where you want it to be. Then Statistics Canada releases its latest inflation data. Headline inflation is above where you expected it to be, and core inflation is below where you expected it to be. You want to know whether to adjust the overnight rate up or down.
1. What regressions would you estimate to help you decide what action to take? (A rough answer would suffice).
2. Suppose your econometrician told you that neither headline nor core inflation, either singly or jointly, had any ability to forecast future headline inflation at a 2 year horizon, over the last 20 years' data.
Would you be surprised by that result? How would you interpret that result? What action would you take if you believed that result?
(Hint: "leave the overnight rate unchanged" is the wrong answer.)
3. My guess is that there are a hundred or so empirical studies that try to answer the question of whether inflation-targeting central banks should respond to core, or headline, or both, as indicators of future inflationary pressure.
In my links above I cite 3 such studies that do it wrong (one from the Atlanta Fed, one from the St Louis Fed, and one from the ECB).
Are you able to find a single study that does it right?
(I explain how I would do it right in my last two links above.)
Over to you, Dave.
Posted by: Nick Rowe | July 28, 2012 at 09:14 AM
Thomas: "Epidemiologists know about this one: they call it "confounding by indication". They all know, for example, that it explains why people taking blood-pressure pills are more likely to have heart attacks than people not taking blood-pressure pills,..."
Lovely example. Again, it's a "control" problem. And if blood-pressure pills were 100% effective, and if everybody who could benefit from them took them, we would presumably see zero correlation.
wh10: "I do see the econometric value of this perspective, but it does sound like it has dangerous potential to be used to "try to cover up gaps in knowledge" to defend a dubious theory."
I see your point. But I think it cuts both ways. If we ignore this perspective, we could also use that to try to cover up dubious theory. For example, a theory that says that instrument and indicator have zero effect on the target variable, and points to zero correlations to support that theory.
But yes. We need to think how we could distinguish those theories empirically despite this problem. Which is what I am trying to do.
Determinant: "If we use a closed-loop model we measure different parameters like the rise time, settling time and error."
How do we measure those parameters, without doing experiments?
Jason: "Ignoring the whole message behind the thermostat idea, in the car and hill example you simply break down the car speed into horizontal and vertical components."
Hmmm. I *think* the problem is that we can't separate speed/inflation (say) into those two components. All we observe is the speedometer. We don't know how big an effect the hills have on the speedometer.
YM: "I think they talk about it in the confused and obscure terms of "structural" vs "reduced form" modeling."
Thanks! Maybe what's happening here is that it's impossible to estimate the structural parameters. For example, suppose the true model is S=aG-bH+e where the variables are: Speed, Gas, and Hill, plus a random error. And the driver sets G=(b/a)H. The reduced form is S=e. You can estimate (b/a) simply by watching the driver. But you can't estimate a and b. They might both be zero, for example.
It gets even trickier if the driver can observe some hills that the econometrician can't. Because the econometrician is tempted to interpret the driver's responses to those unseen hills to driver error (monetary policy "shocks"). And this is a realistic assumption. (And, Greg, it's how I interpret Hayek).
If the econometrician can observe some hills that the driver can't, then you can estimate the structural model. But that's a very implausible assumption. (Maybe the econometrician can use hindsight??)
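A quick numerical check of the S=aG-bH+e example above, with made-up values a=2 and b=3:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1_000
a, b = 2.0, 3.0                       # true structural parameters (made-up)

hill = rng.normal(size=n)
gas = (b / a) * hill                  # the driver's rule: G = (b/a)H
speed = a * gas - b * hill + rng.normal(scale=0.1, size=n)

print(np.polyfit(hill, gas, 1)[0])    # ≈ 1.5 = b/a, recovered by watching the driver
print(np.polyfit(hill, speed, 1)[0])  # ≈ 0: the reduced form is S = e

# But a and b themselves are not identified: [G, H] has rank 1,
# so infinitely many (a, b) pairs fit the data equally well.
print(np.linalg.matrix_rank(np.column_stack([gas, hill])))  # 1, not 2
```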
Posted by: Nick Rowe | July 28, 2012 at 10:17 AM
Patick: "Record the inside temperature, outside temperature, and heat output of the furnace. Then give them the data. How many will be able to reverse engineer the PID control? Extra marks if they can state the transfer function."
I can't. I don't know what it means. Tell us more. Keep it very simple (violate the laws of physics if you need to, to keep it simple).
Min: yep!
Christiaan: "At the least, they have a reaction time, so there's a lag,..."
Yep! And that lag is almost an occupational hazard in central banking. So, yes, we typically see a correlation with the wrong sign, at short horizons.
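A sketch of that wrong-sign case. For simplicity it uses a driver who systematically under-reacts (offsetting only 70% of each hill) rather than a literal lag; the numbers are made-up illustrative values.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 10_000
a, b = 2.0, 3.0                       # true effects (illustrative)

hill = rng.normal(size=n)
wind = rng.normal(scale=0.1, size=n)

gas = 0.7 * (b / a) * hill            # driver offsets only 70% of each hill
speed = a * gas - b * hill + wind     # so the car slows on uphills

print(np.corrcoef(gas, speed)[0, 1])  # strongly negative: more gas, less speed!
print(np.corrcoef(hill, speed)[0, 1]) # also negative: the hills show through
```

The gas pedal still genuinely speeds the car up, but the naive correlation has the wrong sign.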
Posted by: Nick Rowe | July 28, 2012 at 10:23 AM
Agreed, Nick.
Posted by: wh10 | July 28, 2012 at 11:35 AM
I'm confused. If it is impossible to learn anything about the relationship between policy and its target from historical data, where does the central bank get the information it needs to hit its target? If there is no observable correlation between the gas pedal and the car's speed, how does the driver know when and how much to press the pedal?
I feel almost like this post is saying: "Assume that there is no way to know if monetary policy is effective. Now, assume that you know monetary policy is effective."
Posted by: JW Mason | July 28, 2012 at 11:49 AM
JW: "I'm confused. If it is impossible to learn anything about the relationship between policy and its target from historical data, where does the central bank get the information it needs to hit its target?"
Bingo! Exactly the same puzzle I've been worrying about for the last dozen years (or was, before the financial crisis hit).
Short answer: learn from your own past systematic mistakes. If you notice the car has been slowing down when going up hills, then press the gas pedal down a bit harder next time you come to a hill. If you notice the car has been speeding up when going up hills, then press the gas pedal down a bit less next time you come to a hill. And how do you know something is a hill? Ask yourself two questions: last time you saw something that looked like this, did you press down on the gas pedal? ; if not, did the car slow down?
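This "learn from your own past systematic mistakes" rule can be sketched as a small simulation (entirely illustrative; the true parameters a and b are hidden from the driver): the driver nudges his gas-per-hill coefficient whenever the car has systematically slowed or sped up on hills, and the coefficient converges toward the offsetting ratio b/a.

```python
import numpy as np

# The driver doesn't know a or b; he only adjusts his gas-per-hill rule k
# whenever the car has systematically slowed (or sped up) on past hills.
rng = np.random.default_rng(1)
a, b = 2.0, 3.0      # true engine and hill effects, hidden from the driver
k, eta = 0.0, 0.05   # driver's current rule G = k*H, and his learning rate

for _ in range(2000):
    H = rng.normal()                 # a hill arrives
    e = rng.normal(scale=0.1)
    S = a * k * H - b * H + e        # speed deviation from target
    k += eta * (-S) * H              # slowed going uphill? press harder next time

print(round(k, 1))  # converges near the offsetting ratio b/a = 1.5
```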
Posted by: Nick Rowe | July 28, 2012 at 12:07 PM
JW: And, my guess is, this is what central bankers actually do in practice. I tried to formalise it all once.
Posted by: Nick Rowe | July 28, 2012 at 12:10 PM
Nick,
You don't want to address the fact that the Friedman Thermostat example only has purchase as a parallel case for systems with fixed constants and fixed laws and only 3 or 4 variables -- when the economy is NOT that kind of system?
And don't you want to address the false/pathological conception of the economy which results from adopting the false understanding that, contrary to fact, it 'is' such a system?
Posted by: Greg Ransom | July 28, 2012 at 01:28 PM
Nick: Oh my. I'll try, but Google would probably do a better job. A thermostat (or a cruise control, but not an autopilot) usually uses what's called Proportional, Integral, Derivative (PID) control. It doesn't require a physical model of the system being controlled (unlike more sophisticated control systems, like autopilots, that do). The proportional part means that the controller looks at the difference between the setpoint and the actual output, multiplies it by a fiddle factor (aka gain), and feeds this to the plant (e.g. the engine, the furnace). The result is that the further away you get from the setpoint, the harder the plant tries to bring you back. The integral part means that it remembers accumulated errors, limits the accumulator to a maximum and minimum value, multiplies the result by a gain, and feeds it to the plant. This tends to eliminate long-term errors. The derivative part tries to anticipate changes by looking at the rate of change of the controlled variable, applies a gain, and feeds this to the plant. PID control adds all three up, and this becomes the command to the plant. The transfer function is the mathematical representation of all this.
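A minimal discrete-time sketch of the PID recipe just described (all gains and plant constants here are made up for illustration, not taken from any real thermostat or furnace):

```python
# A toy discrete-time PID controller heating a leaky room.
# All gains and plant constants are illustrative, not from any real system.
kp, ki, kd = 2.0, 0.5, 0.1        # proportional, integral, derivative gains
setpoint, temp, outside = 20.0, 5.0, 5.0
integral, prev_err = 0.0, setpoint - temp

for t in range(500):
    err = setpoint - temp                              # proportional: distance from setpoint
    integral = max(-50.0, min(50.0, integral + err))   # integral: accumulated error, clamped
    derivative = err - prev_err                        # derivative: rate of change of error
    u = kp * err + ki * integral + kd * derivative     # PID sum = command to the plant
    u = max(0.0, u)                                    # a furnace can only heat
    prev_err = err
    temp += 0.1 * u - 0.05 * (temp - outside)          # plant: heating minus leakage

print(round(temp, 1))  # settles at the setpoint, 20.0
```

Note the thermostat point in miniature: once settled, the room temperature is flat at 20 while the furnace command u does all the moving against the outside temperature.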
I suspect someone (with appropriate inspiration) could probably reverse engineer all this just by looking at inputs and output, but they'd have to look at more than just simple correlations. At a minimum they'd have to realize that it was a feedback system, that there was a setpoint, and they'd have to look at the error term instead of the raw values.
Now back to my Newel post.
Posted by: Patrick | July 28, 2012 at 01:30 PM
Greg: Let's see. You want to start a debate, where the thesis is "This house believes that there are only 3 or 4 variables and that all parameters are constant over time"? And you want me to argue for which side?
Patick: "I suspect someone (with appropriate inspiration) could probably reverse engineer all this just by looking at inputs and output, but they'd have to look at more than just simple correlations. At a minimum they'd have to realize that it was a feedback system, that there was a setpoint, and they'd have to look at the error term instead of the raw values."
But the whole point of this post is about whether you are correct in what you "suspect", and what the "appropriate inspiration" would need to be. Because those (multivariate) correlations are all we economists have got to work with.
Posted by: Nick Rowe | July 28, 2012 at 01:42 PM
Nick: Well, that's what Nobels are for, eh? I agree that, in general, statistics aren't going to mechanically give you the underlying dynamics. It takes some creative thinking to figure it out. Is that really a problem? Or even deep?
Posted by: Patrick | July 28, 2012 at 03:09 PM
Nick:
I've talked about this before, but Patrick said most of it. I'll add that engineers have something called the Laplace transformation, which converts differential equations (PID controllers are second-order differential equations) into simple algebraic expressions: "s" is the differentiation operator, and 1/s is integration.
Your last post was all about simple linear formulae, and that doesn't apply here.
Patrick is correct in that it is all about error.
"Tuning" a PID controller can involve simple or complex methods, it depends on how good you want to be. A simple case is listed in Wiki at http://en.wikipedia.org/wiki/PID_controller under "PID Pole Cancellation"
Nick: "But the whole point of this post is about whether you are correct in what you 'suspect', and what the 'appropriate inspiration' would need to be. Because those (multivariate) correlations are all we economists have got to work with."
Econometrics comes in here because tuning is iterative, the parameters may be refined with new data. You are calling for an iterative process, not a quick deterministic equation. This is why we have computers with iterated loops that make short work of such tasks.
For such a control model to work, you have a flow diagram between Savings, Investment, Consumption and Production, where Savings and Investment are integral/derivative terms and Consumption is a linear term, like a resistor. Production is like a voltage source.
Posted by: Determinant | July 28, 2012 at 04:30 PM
Nick, before you engage with Greg, consider this.
Greg, do Austrian economists purchase life insurance? If so, they have no valid objection to aggregation, because aggregation is what makes life insurance work.
Keynes was the chairman and chief investment officer of a life insurance company; that is very likely where he got his aggregation idea. It wasn't pulled out of a hat; it came from real life.
Posted by: Determinant | July 28, 2012 at 04:33 PM
I think answering this kind of problem is the point of Judea Pearl's work, which, unfortunately, I do not yet grasp. However I notice he has just written a paper complaining about economics: http://ftp.cs.ucla.edu/pub/stat_ser/r395.pdf
Posted by: Alex | July 28, 2012 at 06:06 PM
Alex: that looks like an interesting paper. But I think it's about a different question.
Posted by: Nick Rowe | July 28, 2012 at 07:27 PM
As a practical point, it is perfectly acceptable when solving a complex system iteratively to take a guess at the initial conditions. You then send the system (using a computer, most often) into a loop. The parameters are updated and then the loop runs again; you stop after the parameters stop changing by more than a set percentage, which is the accuracy you desire.
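A minimal sketch of this guess-and-iterate technique (x = cos(x) here is just a stand-in for a real system of equations, such as a power-flow model):

```python
import math

# Guess-and-iterate: start from a rough guess, loop until successive values
# change by less than a chosen relative tolerance. x = cos(x) stands in for
# a real system of equations (e.g. a power-flow model).
x, tol = 1.0, 1e-10          # initial guess and stopping criterion, both arbitrary
while True:
    x_new = math.cos(x)
    if abs(x_new - x) <= tol * max(1.0, abs(x_new)):
        break
    x = x_new

print(round(x_new, 6))  # the fixed point, ~0.739085
```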
I have a simulator of Ontario's electrical grid on my computer which uses this technique.
You should have a nice numerical methods course over in Carleton's Engineering faculty.
Posted by: Determinant | July 28, 2012 at 07:59 PM
For another example of the same idea being reinvented over and over, I used a strikingly similar driving analogy to monetary policy last year, even down to the pedal-to-the-floor/zero-lower-bound analogy, although my target was not bad econometrics but bad monetary policy analogies.
I'm guessing it felt as obvious to you, Friedman and Peston as it did to me.
http://clubtroppo.com.au/2011/09/02/a-disturbing-wikileaks-cable/
Posted by: Richard Tsukamasa Green | July 28, 2012 at 08:54 PM
Richard: Good post. God, I am disturbed to see the RBA even thinking about using that "save some ammo for later" metaphor. I thought the RBA was doing so well.
Posted by: Nick Rowe | July 28, 2012 at 09:10 PM
The law of large numbers does not apply to, nor solve the problem of, phenomena with 'sensitivity to initial conditions' or which are essentially complex phenomena.
Besides that, you are dueling with a bogus straw man which has nothing much to do with Austrian economics, as far as I can tell.
"Greg, do Austrian economists purchase life insurance? If so, they have no valid objection to aggregation, because aggregation is what makes life insurance work."
Posted by: Greg Ransom | July 28, 2012 at 09:59 PM
THIS thesis:
"This house believes that the insight of "Friedman's Thermostat" comes embedded in a costly package -- a package which assumes or encourages scientists to believe that the variables of macroeconomic phenomena have simple, linear, law-governed relations like those which gas-engined cars have with hills. A belief which, for reasons given elsewhere (ie above), is false."
You can argue whatever side you wish.
Let me assert that Friedman clearly assumed this false belief -- and based his whole scientific program on the conceit that an 'instrumentalist' limning of these relations was the sum and substance of economic science.
I'd further assert that this false picture of science and economic science is one of the foundations of the many wide-ranging and ongoing explanatory anomalies which have put macroeconomic science in a state of permanent crisis.
Nick writes,
"Greg: Let's see. You want to start a debate, where the thesis is "This house believes that there are only 3 or 4 variables and that all parameters are constant over time"? And you want me to argue for which side?"
Posted by: Greg Ransom | July 28, 2012 at 10:13 PM
Determinant, Hayek was one of the first economists to study what was modern statistics at the time, and his grandfather was one of the first professors of statistics. Ludwig Mises' brother was a leading theorist in the area of statistics & probability, and knew the ins and outs of this work.
They understood the basics of statistics and probability theory, and, perhaps better than most today, they knew how and why and where statistics and probability theory run out in claims to have a grip on phenomena and patterns of central concern in the social, psychological, economic, and historical domains.
Posted by: Greg Ransom | July 28, 2012 at 10:23 PM
"The law of large numbers does not apply to nor solves the problem of phenomena with 'sensitivity of initial conditions' or which are essentially complex phenomena.
Besides that, you are dueling with a bogus straw man which has nothing much to do with Austrian economics, as far as I can tell."
No, Greg, your constant invocations of complexity are classic and well-known Austrian attacks against aggregates and Keynesian methods. I provided a direct rebuttal.
Further, life insurance mortality tables are not a controlled experiment: the specific circumstances of when, where and how people die are uncontrolled, or rather all possible outcomes are taken into account, because people have free rein over when, where and how they die. Individually, people die at random, but in large numbers (in aggregate) they die with clockwork regularity and predictability.
As long as there is something common such as money changing hands, we can aggregate. It's perfectly valid to say "Everyone can buy whatever they want from what is available" and then measure average expenditure in dollar terms.
Any Austrian economist who purchases life insurance has essentially lost the epistemological debate. That is really what you are trying to start, and yet you don't notice the rug right there under you, just asking to be pulled.
Posted by: Determinant | July 28, 2012 at 10:24 PM
Nick: Draw an isocline map for speed in gas-steepness space. I know which isocline I'm on because I know the unchanging speed. Observing gas input at different values of steepness tells me the slope of the isocline and how that slope changes. What I can't observe is the application of gas necessary to move me across isoclines because the driver is in perfect control and stays on the one isocline. Is your argument that I can't use that data to differentiate between the hypothesis that there is an isocline and the alternative that the partial derivatives with respect to gas and steepness are both zero, and there's an exogenous constant determining speed?
Posted by: BSF | July 29, 2012 at 01:10 AM
This is neither an apt analogy nor an apt metaphor. You cannot just pick and choose variables in the analogy to make them fit the theory. Doing so, and then saying we cannot reason about it, makes it a paradox... by setting the conditions to suit it. Applying gas prevents the car from slowing while travelling uphill; it does not make the car go faster, as stated. Not slowing down is, in itself, an effect that can be reasoned about and measured. We can calculate how slowly the car would be travelling at any point on the hill.
Posted by: Norme | July 29, 2012 at 01:20 AM
Further, Greg, Austrian claims about "complexity" completely ignore the multiple mathematical tools available to answer the question "What is the limiting behaviour of an average as the number of samples approaches infinity?" Limits, probability theory: all marvellous tools that allow us to draw conclusions about the behaviour of random variables, yet the Austrian position is that they are inapplicable. Nonsense.
Nick, not for the first time, has come close to reaching for control theory and Laplace transformations in economic thinking.
Austrian economics is much further behind than Nick. They can't get over the lesson that, regardless of what kills you, in all its diverse circumstances, the mortality of our society as a whole is a predictable, steady factor. The level of consumption follows the same logic; hence economic aggregates.
St. Hayek is no proof either. His thinking had distinct limits. If you look at Roger Garrison's new diagram model expanding on Hayek's thinking in Time & Money, I can tweak that model in two steps to show precisely how a Keynesian depression occurs and why a Keynesian strategy works. No fundamental tenets of that model are violated either. Who's got the blinders here? Roger Garrison would probably scream, but so what?
Posted by: Determinant | July 29, 2012 at 01:24 AM
Determinant -- you are kidding yourself if you think you know what you are talking about or what the issues are.
You need to seriously reflect on what I've just said.
Posted by: Greg Ransom | July 29, 2012 at 06:14 PM
????
" They can't get over the lesson that, regardless of what kills you, in all its diverse circumstances, the mortality of our society as a whole is a predictable, steady factor. The level of consumption follows the same logic; hence economic aggregates."
Determinant -- drop the sock puppet, open a Blogger account, and make your argument in detail.
What I see here is stuff that makes no sense & other stuff that is plain false, and lots that isn't supported.
And you've never passed the most basic "Turing Test" of exhibiting any competence in the work of Menger, Hayek, Mises, Kirzner, Horwitz & Boettke.
Posted by: Greg Ransom | July 29, 2012 at 06:32 PM
Right now, I have other battles to fight. The bottom line, Greg, is that you've tried to mount a classic Austrian argument, you've tried to get Nick involved (yet again), and if I've cut this tangent short, I've won; that's what I wanted. It's the same sort of tangent the MMT'ers go on, and it makes my eyes roll.
Nice ad hominems, though. Also a decent Denying the Antecedent fallacy there. (If I really knew Hayek, I'd agree with you).
You need to seriously reflect on what I've just said.
No. Honestly, I'm not going to think about it at all. Maybe chuckle to myself, but that's it.
Posted by: Determinant | July 29, 2012 at 06:52 PM
You are deluded, "Determinant".
And still a sock puppet .. which is telling.
Posted by: Greg Ransom | July 30, 2012 at 12:08 AM
A sock puppet is a second name registered on the same board where you don't admit to the second login. You can literally carry on a conversation with yourself, which is why the better boards ban the practice.
I am merely a poster on this board who uses a nickname, as do half the posters here, many of long standing.
An ad hominem, and one that doesn't even use the correct terminology? Wow, that was a damp squib of a retort, Greg.
Posted by: Determinant | July 30, 2012 at 12:23 AM
Well, sorry for putting my free thinking two cents in. You experts enjoy yourselves.
Posted by: Determinant | July 30, 2012 at 09:32 AM
Oh, and I forgot to thank you for the insults.
By the way, I logged in with an account, the site gave me the "Determinant" moniker.
Posted by: Determinant | July 30, 2012 at 09:34 AM
"Determinant" -- I don't trust blog commenters who won't use their own name. That's a product of long experience, nothing personal.
Here are some facts:
1. Austrians use aggregates. Your claims about Austrians are prima facie false.
2. Your claims about complex phenomena, et al., and their relation to statistics are not true, and don't seem to be well informed. You haven't even tried to establish otherwise.
3. You haven't shown any grasp of the Austrian case against Keynes' use of aggregates.
4. You haven't explained why you don't open a Blogger account and make your case in a forum which will allow you to make the case you wish to make.
Posted by: Greg Ransom | July 30, 2012 at 05:46 PM
Greg and Determinant. Let's not continue this.
Posted by: Nick Rowe | July 30, 2012 at 06:39 PM
And I am now somewhat upset at several econometrics professors.
Posted by: Edmund | July 30, 2012 at 08:27 PM
Mostly because it's a better example of endogeneity than gun crime and gun legislation.
Posted by: Edmund | July 30, 2012 at 08:56 PM
Determinant,
I'm not so sure that control theory hasn't been applied in some fashion to economics.
Here's a book about it:
http://mitpress.mit.edu/catalog/item/default.asp?ttype=2&tid=12731
Posted by: Edmund | July 30, 2012 at 10:43 PM
Andy Haldane of the Bank of England presents one dimension of my point:
"a rather restricted and blinkered view of the dynamics of social and economic systems got carried across into how public policy was thought about and executed.
Again, I have an evolutionary story as to why economics ended up in this place. I don’t think it’s because people were taking pay-checks from consultants or countries to cook the answers. I don’t think for a minute that was the core of it. It was driven by the quest for certainty, and mathematisation of economics was a means of achieving that certainty. It was the desire to have the laws of economics as well-defined as seemingly were the laws of physics or other natural sciences, as a basis for policy experimentation. They were all good reasons for wanting to make the discipline rigorous and robust.
I think one of the great errors we as economists made in pursuing that was that we started believing the assumptions of economics, and saying things that made no intellectual sense. The hope was that, by basing models on mathematics and particular assumptions about ‘optimising’ behaviour, they would become immune to changes in policy. But we forgot the key part, which is that the models are only true if the assumptions that underpin those models are also true. And we started to believe that what were assumptions were actually a description of reality, and therefore that the models were a description of reality, and therefore were dependable for policy analysis.
With hindsight, that was a pretty significant error."
Posted by: Greg Ransom | August 01, 2012 at 01:00 PM
So, Nick, this is an argument that is out there, made by the Executive Director of Financial Stability at the Bank of England, Andy Haldane.
Do you not see that the picture of economics being countered by Haldane is the picture built into your car example, ie a simple physical system following the most simple, linear physical laws, ie one or two variable math formulae?
Posted by: Greg Ransom | August 01, 2012 at 01:04 PM
As has been mentioned, the flaw of this concept is that feedback control systems (like a thermostat) are perfectly well understood. If you gave me a time series with the outside temperature and inside temperature, I could calculate the transfer function of your thermostat/furnace in two minutes... and I would have a statistic called coherence (like an R-squared) to determine how well the model fits. Even if we exclude the possibility of choosing a better model (frequency vs. time domain) to fit to the data, we're left with outside temperature correlating with the furnace (or the pedal with hills), which, while imperfect, is not erroneous.
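A sketch of the open-loop version of this claim (the simulated system, its impulse response, and the noise level are all made up for illustration; the closed-loop feedback case needs more care, which is the point at issue in the post): stack lagged copies of the input and least-squares fit the output, recovering the system's impulse response, the time-domain form of the transfer function.

```python
import numpy as np

# Simulate an input series and a linear system's noisy output, then recover
# the system's impulse response by least-squares on lagged inputs.
rng = np.random.default_rng(2)
x = rng.normal(size=2000)                 # input, e.g. outside temperature
h_true = np.array([0.5, 0.3, 0.1])        # the system's (unknown) impulse response
y = np.convolve(x, h_true)[: len(x)] + rng.normal(scale=0.01, size=len(x))

# Stack lagged copies of the input and fit the impulse response.
n_lags = 3
X = np.column_stack(
    [np.concatenate([np.zeros(k), x[: len(x) - k]]) for k in range(n_lags)]
)
h_hat = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.round(h_hat, 2))  # recovers approximately [0.5, 0.3, 0.1]
```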
Posted by: Jason Hamner | August 07, 2012 at 09:44 PM
Thanks for an interesting blog post!
I do not want to go into the quibbles of Austrians vs. Keynesians, but I must say a couple of things in defence of modern empirical economists. First, the concept of the permanent income hypothesis is developed in several widely cited scholarly articles, whereas the concept of the thermostat is developed in an op-ed column published in the Wall Street Journal. This is a natural reason why the first concept is much more widely known among economists than the second.
Second, even though Friedman's thermostat may not ring many bells, the ideas underlying it are hardly unknown to people doing empirical economic research. As someone already pointed out, even undergrads know about endogeneity. Moreover, the concept of cointegration was defined decades ago (in your example, speed, gas and hills seem to be cointegrated), and there is tons of research taking advantage of different kinds of natural experiments/instrumental variables to estimate causal effects (your example number three also points in this direction).
Of course, being a trained economist, you must already be aware of this.
Posted by: o.k. | August 09, 2012 at 04:25 AM
Hi Nick,
Obviously a bit late to the party, but in defence of the applied econometrics folks (God help me), it may just be an issue of lingo. The thermostat concept is alive and well (although when I was taught the subject, Mr. Friedman was not given credit) and is generally discussed under state space modelling and feedback errors (http://en.wikipedia.org/wiki/State_space_representation#Feedback). You may find the Google search reveals a bit more with this terminology.
And of course, if I've misinterpreted your post, my apologies.
Cheers,
Finn
Posted by: Finn | August 10, 2012 at 12:22 PM
Finn: (Never too late!), and o.k. too:
Dunno. Sure, econometricians talk about endogeneity. And maybe they talk about feedback errors too. But I'm going to repeat the response I've made to Dave above (though minus my rudeness to Dave, because you two have been nice to me):
Dave: "This post is a bit of an embarrassment, most economists/econometricians are fully aware of this problem, it is known as the "problem of endogeneity.""
That's a bit like saying: "most economists/econometricians are fully aware of this problem, it is known as "the problem that sometimes stuff goes wrong in empirical work"."
Let's see how embarrassing it is for you to answer these questions:
Suppose you are Governor of the Bank of Canada. Your mandate for the last 20 years is to target 2% headline inflation at a 2 year forecast horizon. Suppose you have the overnight rate exactly where you want it to be. Then Statistics Canada releases its latest inflation data. Headline inflation is above where you expected it to be, and core inflation is below where you expected it to be. You want to know whether to adjust the overnight rate up or down.
1. What regressions would you estimate to help you decide what action to take? (A rough answer would suffice).
2. Suppose your econometrician told you that neither headline nor core inflation, either singly or jointly, had any ability to forecast future headline inflation at a 2 year horizon, over the last 20 years' data.
Would you be surprised by that result? How would you interpret that result? What action would you take if you believed that result?
(Hint: "leave the overnight rate unchanged" is the wrong answer.)
3. My guess is that there are a hundred or so empirical studies that try to answer the question of whether inflation-targeting central banks should respond to core, or headline, or both, as indicators of future inflationary pressure.
In my links above I cite 3 such studies that do it wrong (one from the Atlanta Fed, one from the St Louis Fed, and one from the ECB).
Are you able to find a single study that does it right?
(I explain how I would do it right in my last two links above.)
Over to you, Dave.
Finn and o.k. : what do you think? Maybe I just got a dud sample of applied econometric studies of whether inflation targeting central banks should look at core or headline inflation? (That's not a rhetorical question, because maybe I did get an unlucky sample.)
Posted by: Nick Rowe | August 10, 2012 at 02:37 PM