
Comments


First, you're conflating forecasting models with macroeconomic models that might be useful for policy. The two are rarely the same and are often in conflict. In your example, X might be "consumer confidence" and Y could be consumption, in which case your comment boils down to the statement: "What good is my model for consumption if I cannot forecast consumer confidence?" But consumer confidence isn't a structural variable; it is just an indicator. A structural model of consumer behaviour would have to have income, interest rates, and so on in it. These are fundamentally endogenous variables, and so, when the model is solved for its reduced form, the X would just be a vector of shocks. Some of those shocks are perturbations to policy variables--the residual to a Taylor rule, for example--and the modeller can ask what perturbations would be helpful to policy objectives. Alternatively, the modeller can ask what sequences of shocks to, say, the determinants of personal income would render certain outcomes, and how likely those shocks are. He or she could even ask, "Given what consumer confidence is this month, how likely is it that I will get a good reading for personal income this quarter?"

The upshot is that it isn't merely an exercise in forecasting. At a minimum, it is conditional forecasting where the conditioning is on a path for a policy variable. (There is also the issue of whether the structural parameters that link perturbations to the policy variables can be taken as given when policy changes--the Lucas critique--but that added complication is immaterial to the central argument here.)

Back to your model, if one wanted to stick with an astructural model of consumption and consumer confidence, one would have to endogenize consumer confidence and identify the shocks that move confidence around. That is what identified vector autoregressions are all about, and economists do that sort of thing all the time with mixed success. This is where forecasting and structural modeling conflict. Consumer confidence is often helpful for short-term forecasting, but its linkage to the true state variables of the economy is weak and apparently time varying. That's why model users who have to forecast use things like consumer confidence to assist their forecasts but rarely put such things directly into their models.
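
A minimal sketch of that conditional-forecasting idea, assuming an invented one-equation reduced form y(t) = a*y(t-1) + b*eps(t), where eps is the policy shock (say, the residual to a Taylor rule); the coefficients, shock paths, and function name are all placeholders, not anyone's actual model:

```python
import numpy as np

# Toy reduced form: y(t) = a*y(t-1) + b*eps(t), where eps(t) is the
# policy shock. All numbers here are made up for illustration.
a, b = 0.8, 0.5
y0 = 1.0

def conditional_forecast(y0, shock_path):
    """Forecast y conditional on an assumed path for the policy shock."""
    y = y0
    path = []
    for eps in shock_path:
        y = a * y + b * eps
        path.append(y)
    return path

# Compare two assumed policy paths: no perturbation vs. a one-off easing.
baseline = conditional_forecast(y0, [0.0, 0.0, 0.0, 0.0])
easing   = conditional_forecast(y0, [1.0, 0.0, 0.0, 0.0])
print("baseline:", np.round(baseline, 3))
print("easing:  ", np.round(easing, 3))
```

The point is only that the forecast of Y is conditional on an assumed path for the policy variable, not an unconditional prediction.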

It is the economic agent that does the forecasting; the economist is simply trying to model the agents' method.

There's a second type of forecasting you can use as well that is quite successful in some situations (specifically situations where you have tens of thousands of historical data points, such as daily data). One summer I was a research assistant doing work on bond price movements and it's one of the methods we used. My boss/future father-in-law called it an 'Australian Weather Model', for reasons I've never understood. It works as follows:

You're trying to forecast X, and you have data on a number of other factors. Say you're trying to forecast whether or not it will rain tomorrow, and you have data on the temperature, humidity levels, wind speeds, etc. Call these variables Y.

You find the 20/50/100 days in history that most closely match today's Y values in terms of mean squared error. Say there are 100 days in history that closely match today's data, and you learn that for 83 of those 100 observations it rained the next day. Then the probability of precipitation is 83%. (That's over-simplified, because you might want to weight some observations more than others, but you get the general idea).
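
A minimal sketch of this nearest-neighbour scheme in Python, using randomly generated placeholder data; `history`, `rained_next_day`, and `prob_rain_tomorrow` are invented names for illustration, not the method actually used on the bond desk:

```python
import numpy as np

# Placeholder history: each row is one past day's (temp, humidity, wind),
# plus whether it rained the following day. All data are random stand-ins.
rng = np.random.default_rng(0)
history = rng.normal(size=(10_000, 3))
rained_next_day = rng.random(10_000) < 0.3

def prob_rain_tomorrow(today, k=100):
    """Find the k historical days closest to today (squared error)
    and report the share of them that were followed by rain."""
    dist = ((history - today) ** 2).sum(axis=1)
    nearest = np.argsort(dist)[:k]
    return rained_next_day[nearest].mean()

print(prob_rain_tomorrow(np.array([0.1, -0.2, 0.5])))
```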

Notice that the model is completely theory-free, beyond 'these variables are related to each other somehow'. It's strictly a data-mining exercise. Similar models are used to project the future performance of baseball players, etc. (If you want to know how Aaron Hill will perform in 2011, you find the N players through history most like Aaron Hill circa 2010 and see how they performed the next season, adjusted for changes in context).

Interestingly enough, in many finance situations these theory-free models perform better than models based strictly on theory. The highest performing models combine a mixture of the two.

"Interestingly enough, in many finance situations these theory-free models perform better than models based strictly on theory. The highest performing models combine a mixture of the two."

When? During verification or during validation?

"When? During verification or during validation?"

Better performing in the sense of making more accurate predictions using out-of-sample data.

RE: "Even worse, I have never heard of a single macroeconomic model that has the lag structure Y(t)=R(X(t-1))."

I'm not a macro guy (and I'm *really* not an applied macro guy), but I assumed all predictive models were of this form. The quant finance models we worked with were. Otherwise you have a model like MERT, which is of the form Y(t)=R(X(t)) and doesn't really predict anything beyond 'If X were equal to 90, what (roughly) would the value of Y be?'

Unfortunately, most models are only general approximations of reality. There is always a chance that we have estimated one of the variables incorrectly, our equation does not give enough weight to one of the variables, we have not considered a variable that can have a significant impact because it happens rarely, or our model is loaded with Kahneman/Tversky glitches.

Nassim Taleb, in "Fooled by Randomness" or "The Black Swan" (I can't remember which), says that we live in "Extremistan," the world where unanticipated events impact things with greater influence than we expect, but rarely and unpredictably. Models, on the other hand, are made for "Mediocristan," the world where everything is predictable according to statistical means (if I may paraphrase loosely). Look no farther than the "Safe Harbour" statement at the end of any prospectus and one can see that we intuitively understand that.

That said, some degree of predictability is needed to be proactive in guiding economic policy, rather than just reactive.

So while economic forecasts aren't pure excrement, they all smell a little funny!

Bob: I think I follow you there. When I said "model" I meant the sort of theory-based structural model we use to explain what determines what, and why. ISLM for example. But then the "forecasting models" aren't really models in this sense. Just observed patterns between time t variables and time t-n variables. Like my "curvy rulers", or VARs?

Anon: sometimes economists use models to explain agents' forecasts, as you say. But I'm talking about when economists say they use models to make their own forecasts.

Mike: OK. Again, what you describe seems to be forecasters using their ingenuity to try to find persistent stable patterns in the data. Curvy rulers, rather than structural explanatory models.

Mike: " The quant finance models we worked with were [of the form Y(t)=R(X(t-1)) ]".

But what sort of models were these "quant finance models"? Were they of the form: "Well, we've noticed that when the price of oil goes up, the price of wheat seems to go up 2 days later. We don't know why, but we are going to try to make profits off this correlation for as long as it lasts"? (Only much more complex, of course.)

Bruce: Yep. Models can be wrong in many ways. But even if a model is right, and exactly right, always, can it help us forecast even then?

I liken it to driving a car. You know you want to go North (proactive decision making), but the road goes Northeast then curves Northwest. Without a steering wheel (reactive decision making) you are soon off the road and going nowhere. You can follow a map (better model) but the map can't tell you that you need a detour because of a fallen tree blocking the road. So you listen to a road report (create a better model yet) but you (or someone else) first needed to experience being blocked in order to create the model. And on and on until you have a model that will predict with great accuracy what you need to do to get to your destination with a high probability of being right. So if we use that model to plan our route for the next vacation we take, we can be reasonably assured that everything will go as planned, thus "predicting" the future. But not always. Just ask the business traveller who was in the air on 9/11.

So should we chuck all the models out and just react? What about all the other times when what we modeled came to pass, where we got to where we wanted to?

Personally (in my own uninformed way), I think that our "current" models can't cope with the interconnectedness of today's economic situation (the map was designed for city driving, not the Baja 500, and no one told us we would be racing there, so we broke our car), and economics as a whole will just flail around for a while until we understand what the introduction of new variables in our economy means and how we can react in a positive way. So for now, predictability is low, because the variables aren't understood yet.

Now all I have to do is decide whether or not to abandon the car in the desert, wait for a mechanic to fix it, or wait for an engineer to build me a new one. And hopefully not die of thirst before that happens! ;)

I think it is more like [Y(t+1),X(t+1)]=R([Y(t),X(t)]), extended over more time intervals, so X is not entirely exogenous. Curvy rulers, but presumably more stable and predictable in their curves.

Now a question: shouldn't the "what and why" of a macroeconomic model (I assume this is an explanatory style of model) be an integral part of a predictive model? How can one even begin to be the least bit accurate in predicting a possibility if one doesn't know what happened in the past? Granted, you can't know every variable when you create a predictive model, thus some variations will be present in outcome, but conflict...?

(Man, I feel like a high school student trying to carry on a conversation in a room of PhDs)

Wait a minute.... this is a room of PhDs!!!!! I'm going to go back to lurking!!

"Either I'm making some awfully simple mistake, or all the people who claim to be making model-based macroeconomic forecasts are just blowing smoke."

You're making an awfully simple mistake.

On this issue I feel for Nick, but I recognize that we need models. I think, though, that for models to work they cannot be mathematically precise. Economics is a matter of truth versus precision: the more precision, the less truth.
We are moving between Krugman and Hayek without being able to do much more. Doesn't the current crisis show that the models have failed miserably?
"Life (and economics) is vicissitude," as the Greek historian Herodotus said.

There are no simple mistakes in applied macro, Nick! Unless one counts asking, on a public forum, provocative and far-reaching questions such as: "Is the entire field of applied Macro junk?" Hopefully the next faculty lunch isn't too awkward.

In my opinion, the simplest refutation of applied macro is that hedge fund managers and other assorted punters have zero interest in the field. If macro models had even the slightest ability to forecast any better than a closed eye and a thumb held at arm's length, these guys would be drooling over them.

Unfortunately, a lot of careers and prestige currently depend on macro modeling continuing to be taken seriously, and you have to spend a LOT of time studying the friggin things to be able to confidently say that they are crap. Sadly, honest eyebrow-raising like this notwithstanding, I fear it may be some time before we can finally snap our MATLAB CDs in half. One funeral at a time, as they say...

Cheers,

Zdeno

Btw, may I recommend re-reading this post after a search-and-replace of "climate" for "macro"? Not to say we should all adopt the Fox News stance on global warming, but climate models may as well be tea leaves for all the falsifiability they offer. And at least tea leaves aren't overfitted!

Zdeno - That would be an extraordinarily stupid thing to do. Firstly, climate models are not statistical. They're simulations based on physics known to be correct. In a nutshell, they chop up the world into chunks and apply all the thermodynamics, heat transfer, fluid flow, etc. PDEs. The problem is the horrendous size and complexity of the models, not that they're 'wrong'. In principle, with enough computing power and the initial conditions (not easy to determine for the entirety of the planet), it is possible to model the climate. I suppose the closest analogy in macro would be something like this.

The bigger question might be: what does it mean to have Economics without Econometrics?

One thing is that in the U.S. you have data coming out monthly that effectively feeds into GDP. Macroeconomic Advisers does this. For instance non-residential fixed investment is essentially driven by core shipments. Hence, it is quite possible to get a decent estimate of GDP growth for the current quarter by the time it has finished (pretty low error). However, as you go further out, the errors get much bigger. I don't think the errors would get so large to make it useless until more than two quarters out. Anything more than two quarters out is pretty dicey though.

I vote for a nominal GDP (or real GDP and GDP deflator) market to get better real-time forecasts.

I've only done forecasting with a VAR, which works exactly like you said - just stick a squiggly ruler on Y itself (it's only "vector" because you can't solve for Y=R(X) and thus have to stick the squiggly ruler on both Y and X at once, so it's a multidimensional squiggly ruler).

Not sure how other types of forecasting work, if they work any differently.
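
For concreteness, a minimal sketch of fitting and iterating a VAR(1) -- the 'multidimensional squiggly ruler' -- by plain least squares. The data are simulated placeholders and the two-variable system is invented for illustration:

```python
import numpy as np

# Simulate a toy two-variable system z(t) = A z(t-1) + e(t).
rng = np.random.default_rng(1)
T = 500
A_true = np.array([[0.7, 0.1],
                   [0.2, 0.5]])
z = np.zeros((T, 2))
for t in range(1, T):
    z[t] = A_true @ z[t - 1] + rng.normal(scale=0.1, size=2)

# OLS estimate of A: regress z(t) on z(t-1).
Z1, Z0 = z[1:], z[:-1]
A_hat = np.linalg.lstsq(Z0, Z1, rcond=None)[0].T

# Forecast by iterating the fitted map forward from the last observation.
f = z[-1]
for h in range(4):
    f = A_hat @ f
    print(f"h={h + 1}:", np.round(f, 4))
```

Notice there is no exogenous X to forecast: the fitted ruler is laid along the system's own past.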

The variables represent future commitments, and therefore forecasting is possible.

* Several economists did predict this crisis -- as well as many investors. They saw house prices rising at an unsustainable rate, household borrowing increasing at unsustainable rates, etc.

* Others were able to make the connection that a slowdown in household debt growth would lead to increases in capital liquidation, unemployment, and output loss, with downward pressure on prices.

But I agree that all these forecasts are conditional -- if X happens, then Y will happen. However, you can say that the probability of X happening within time t increases to 1 as t --> inf -- you know the bubble will pop, but do not know exactly when.

Isn't structural macro forecasting about how much we expect Y to change when we change exogenous variable X? Like, if you change interest rates, how do exchange rates change, for example...

I don't think there's anything illogical about the concept of macro forecasting. As you say, leaving aside the practice of it...

The equation is likely to be of the final form you suggest:
Y(t)=R(X(t-1),X(t)) [1]
or, even more likely:
Y(t)=R(Y(t-1), X(t-1), X(t)) [2]
which may, I suppose, be a special case of [1].

Even without knowing the exact value of X(t), these models still have predictive power for two reasons.

The first is that even with no precise knowledge of X(t), you will probably have a range forecast for it, which in turn provides a range forecast for Y(t). X is likely to consist of simpler variables than Y - for example unemployment or price levels - which, at least for one or two periods ahead, can probably be forecast more accurately than Y could be directly.

This combines with the fact that X(t) will only have a diluted impact on Y. If you know X(t-1) and you have a range forecast of X(t), you will still be able to constrain Y(t) to a smaller range, even if you can't provide a point forecast. If you know that unemployment will be (say) 2 million, plus or minus 4%, the resulting uncertainty in Y will be much smaller than 4%. The degree to which the error bars are reduced depends on how big X(t-1) is in the model compared to X(t).

Pretty much all macro forecasts that you see in the real world are range forecasts - it's just that the central point of the range is often reported as if it were an actual prediction.

Now whether the models are accurate: I have no idea. I just do the maths...
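
A minimal sketch of that dilution argument, assuming an invented linear special case of [2], Y(t) = c1*Y(t-1) + c2*X(t-1) + c3*X(t), and a plus-or-minus 4% range forecast for X(t); the Monte Carlo just propagates the X(t) range through to a much narrower range for Y(t):

```python
import numpy as np

# If the weight on the known quantities (Y(t-1), X(t-1)) is large relative
# to c3, uncertainty about X(t) is diluted in the range forecast for Y(t).
# Coefficients and values are illustrative only.
c1, c2, c3 = 0.6, 0.3, 0.1
y_lag, x_lag = 2.00, 1.00

rng = np.random.default_rng(2)
x_now = rng.uniform(0.96, 1.04, size=100_000)  # X(t) known only to +/- 4%

y_now = c1 * y_lag + c2 * x_lag + c3 * x_now
lo, hi = np.percentile(y_now, [2.5, 97.5])
mid = y_now.mean()
print(f"Y(t) central forecast {mid:.4f}, range [{lo:.4f}, {hi:.4f}]")
print(f"relative uncertainty: {(hi - lo) / 2 / mid:.2%} vs 4% on X(t)")
```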

p.s. a related question that came up in conversation yesterday, maybe someone here knows the answer. If an economist builds a macro model whose results are published in a journal, is there an expectation that they will also publish the data and source code of the model? We were discussing the same issue for climate modelling and I thought macro was an analogous field, in terms of the uncertainty of the data and models, the politicisation of the outcomes, and so forth.

Have I proven too much? If model-based macroeconomic forecasting isn't possible, how is model-based forecasting possible in (say) physics?

When I think of (very simple) physical systems, where it *is* possible to forecast, it always seems possible to convert the underlying model into something of the form Y(t)=L(Y(t-1)) (except that's in discrete time, of course). Think of a pendulum, or something. Its current acceleration is determined by its current position, and its current position is determined by its lagged position and lagged velocity. This works perfectly as long as there are no X(t) shocks (gusts of wind), and it works nearly perfectly if the momentum of the pendulum is large relative to the shocks.

Do economies have "momentum"? How many structural relations are there in the structural model of the form y(t)=s(y(t-1)), and how powerful are they?

RSJ's mention of debt contracts is one example. The debt contract taken out today determines a stream of payments into the future.

Capital would be another example. The Solow Growth model, for example, will allow you to predict the exact path of the economy in the transition towards the steady state equilibrium, as long as it doesn't get hit by any shocks. It's very much like the pendulum (except it won't swing back).

Demographics would be another example. Everyone alive today will be one year older a year from today, if they are still alive.

Lags in the Phillips curve might be another example, if tomorrow's inflation depends on today's excess demand.

So if the structural model is Y(t)=S(Y(t),Y(t-1),X(t)), then if the coefficient on Y(t-1) is large relative to the coefficient on X(t), and the variance of X(t), then we have a model like the heavy pendulum, and the model can add a lot to the forecast.

Is that how model-based forecasting is done?
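
A minimal sketch of the heavy-versus-feather pendulum point, assuming a toy process y(t) = s*y(t-1) + x(t) with the same shock draws but different shock scales; the no-shock forecast tracks the 'heavy' path far better. All parameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(3)
shocks = rng.normal(scale=1.0, size=20)

def path(s, shock_scale, y0=10.0):
    """Simulate y(t) = s*y(t-1) + shock_scale*x(t) from y0."""
    y = y0
    out = []
    for e in shocks:
        y = s * y + shock_scale * e
        out.append(y)
    return np.array(out)

heavy   = path(s=0.95, shock_scale=0.1)   # lead weight: momentum dominates
feather = path(s=0.95, shock_scale=2.0)   # feather: the wind dominates
no_shock_forecast = 10.0 * 0.95 ** np.arange(1, 21)

print("RMSE of no-shock forecast, heavy:  ",
      np.sqrt(((heavy - no_shock_forecast) ** 2).mean()).round(3))
print("RMSE of no-shock forecast, feather:",
      np.sqrt(((feather - no_shock_forecast) ** 2).mean()).round(3))
```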

"Here is my guess about what people who claim to make model-based forecasts are actually doing. They have a model Y(t)=R(X(t)). They then make a non model-based forecast of X(t+1). They then substitute that forecast for X(t+1) into the model, and solve for Y(t+1).

Obviously, even if the model is exact and true, the forecast of Y(t+1) is only as good as the non model-based forecast of X(t+1). Garbage X(t+1) in, garbage Y(t+1) out."

I meet your misunderstanding with my own misunderstanding. What is a "non model-based forecast"? Meaning a non-structural model? (As in, economic "structure".)

Because there simply has to be some model for X. Even picking numbers completely at random is a type of model. (The formal definition of a statistical model is a family of related distributions.) And once that submodel is fixed, it becomes part of the supermodel.

Except for toy models, macro models always have a lag. The reduced form typically looks like Y(t) = R(X(t)), but X(t) can be a messy thing. It can and generally will include lags of Y (known at t-1 implies known at t, so the dating isn't a problem), variables that occur at t but won't be known till t+1, variables that occur at t but won't ever be definitely known. Most of the 'action' occurs in the part of X(t) that is comprised of lags of Y(t). In that way, the forecast of X(t+1) doesn't require a "non model-based forecast" -- the model itself 'forecasts' X(t+1). But the shocks at time t+1 will still have effects.

Even Y can include things that we don't see. For example, an 'unseen' (to the econometrician) 'shock' -- call it Z -- can be AR(1), so it's often useful to include Z(t) in Y(t). That just means that we only have a distribution (not an observation) on part of Y -- but the effect can be partially deduced from the effect on the part of Y that we do observe.

Oh, and it's also not necessarily the case that forecast-Y(t)=R( forecast-X(t) ). You always have to go back to the distribution.
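
A minimal demonstration of that last point: for a nonlinear R, plugging the point forecast E[X] into R is not the same as the expectation of R(X) (Jensen's inequality). The convex R here is an arbitrary placeholder, not any particular model:

```python
import numpy as np

rng = np.random.default_rng(4)
R = lambda x: np.exp(x)              # any nonlinear reduced form will do
x = rng.normal(loc=0.0, scale=1.0, size=1_000_000)

print("R(E[X]) =", R(x.mean()))      # ~ exp(0) = 1.0
print("E[R(X)] =", R(x).mean())      # ~ exp(0.5) = 1.65 for the lognormal
```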

Physical laws describe how systems evolve over time. Time appears explicitly. You don't need to 'forecast'; the equations tell you what the system is doing for any given initial conditions and time.

I suppose it depends what you think "based" means in the phrase "model-based forecasting."

jh: OK, someone who simply extrapolates a trend without claiming to understand it, for example, can be said to have a "model". But it's not really an economic model, unless you have some sort of theory to explain why people's behaviour must or should follow a trend. I was meaning "model" in that second sense, I suppose.

jh2: Agreed. It only works exactly if R is linear, right? Otherwise you need the distribution of X as well as E(X(t)) to talk about E(Y(t)).

Patrick: some physical laws can help tell you how things will evolve over time: the law of conservation of momentum, for example. These give you equations of the form y(t)=s(y(t-1)). But most of the equations in economics models seem to be of the form y(t)=s(x(t)). And we can only tell how y evolves over time if we know how x evolves over time. But if x is exogenous, the model itself, by definition, doesn't tell us how x evolves. So we put a curvy ruler on x, and extrapolate atheoretically.

"OK, someone who simply extrapolates a trend without claiming to understand it, for example, can be said to have a "model". But it's not really an economic model, unless you have some sort of theory to explain why people's behaviour must or should follow a trend. I was meaning "model" in that second sense, I suppose."

There almost has to be something unexplained that drives the model, if it's going to be realistic. (There may be exceptions -- one can 'derive randomness' -- mixed equilibria -- in games, I guess.) In macro models, those 'primeval' shocks aren't very believable (as Kocherlakota said in his essay last year) -- but the principle of building from something unexplained doesn't strike me as a problem. We might believe weather is capable of being explained, but in an ecological model, it doesn't seem very important to 'close' this part of the model, as long as we get its stochastic properties approximately correct.

The key is in pinning down what's truly exogenous. Once that's done, I don't think the Y(t) = R(X(t)) formulation hurts anything, given how general it can be (with lags, unobservables, etc). The problem in macro is that it doesn't seem believable that the assumed shocks are actually exogenous.

"But most of the equations in economics models seem to be of the form y(t)=s(x(t))"

From a beautiful interview with Axel Leijonhufvud:

Snowdon: ...in several papers you have been critical of the rational expectations hypothesis and the assumption of unbounded rationality that pervades modern mainstream macroeconomics. Axel Leijonhufvud’s macroeconomic world is one populated by agents who are ‘believably simple people’ facing ‘incredibly complex situations’....

Leijonhufvud: Rational expectations are necessary in order to extend the optimisation paradigm to intertemporal behaviour. Intertemporal equilibrium becomes an inescapable consequence. For thirty years I’ve been convinced that this is a conceptual cul-de-sac for macroeconomics. By now we’ve explored it enough and learned, I suppose, pretty much all there is to learn in it. It is high time that we extricate ourselves from it and get on with the work that has to be done sooner or later if economics is to be a serious science.

The difference between macro and physics (even statistical physics) is that in macro there are no constants, everything is subject to change.

In physics it doesn't matter how many variables exist or how many simultaneous equations need to be solved with boundary conditions; there are still laws that say:
1. Energy is conserved and constant
2. Its partial derivatives are conserved (angular and linear momentum, etc.)
3. Systems move continuously to more possible configurations

What physicists do is come up with a structural model (out of a finite set of possibilities) in which a system could be organized that satisfies the larger "LAWS", and test it against small changes in the "smaller" variables.

Even in physics (statistical), small changes produce large transitions in phase and the models break down. What makes physics "easier", though, is that the "LAWS" aren't broken and a complementary model can be added such that
Y=X1(t) if E < c
Y=X2(t) if c < E < d
Y=X3(t) if E > d
where t is any set of variables.

In macro the closest thing to a law of energy is money, which is a representation of the total capacity of the system; unfortunately, money is not constant and there are a bunch of different versions of it that aren't related simply.

Unfortunately, without a model that behaves better, which may be coming sooner or later, macro forecasting is like alchemy. The problem is economists aren't by nature very humble, and there is a lot of posturing over the accuracy of their models when at heart they have to know they are only valid over a very small bandwidth of variables. The first step is to acknowledge the limitations without throwing out the process. The next step... who knows??? Maybe require all economists to take physics and "real" math... any takers???

Rick,

I know several economists who were former mathematicians and one who was a former physicist. I don't think this is a question of sufficient training or intelligence, or insufficient use of math or physics. I think the problem is the reverse -- too little economics and too much attention paid to math.

Take a look at the celebrated Lucas tree paper: here you have fairly sophisticated math describing truly infantile economics: a completely exogenous model of capital with perfectly random outcomes -- it's hard to subtract away more economics than this. Same thing for the island model. For some reason, as you go from the 1920s to the 1950s, 1970s, and today, the published papers become more mathematically sophisticated and more economically trivial. Concepts such as increasing returns to scale and general gluts were the bedrock of economics, and these seem to have been erased from the discourse until very recently. Read a random sample of papers every 30 years: 1920, 1950, 1970, 2000. You will see fewer and fewer economic phenomena described in more and more complex ways.

I remember when I was in grad school, I had several economist friends, and the one thing common to all of them was they had zero interest in economics, and a lot of interest in certain mathematical models. I remember trying to explain to one of them what the Federal Reserve was, or to have debates about whether stocks were overvalued, or why median incomes started to fall after the Reagan era. There was zero interest or awareness about any of these things. They were much more interested in game theory, and wrote papers about card playing and sports matches. Their economic beliefs were basically a Panglossian mixture of libertarianism and social Darwinism. Don't get me wrong, they are nice people who are interested in many things, but economics is not one of them. To cite again from the Axel Leijonhufvud interview:

Snowdon: [...] Robert Lucas would certainly not advise his students to read Keynes’s General Theory. Do you still recommend your students to read Keynes?

Leijonhufvud: At UCLA last Autumn (2001) I had a quite brilliant graduate student from Beijing. He had sailed through the first year theory course with excellent grades and he had all the mathematical equipment to master the modern techniques used in economics. He found the dynamic stochastic general equilibrium models he was taught to be easy but lacking any economic content. I was teaching a course that I called ‘One Hundred Years of Macroeconomics’. So after attending some lectures he decided to read Keynes. As a result he became inflamed with enthusiasm. I am not sure that he clearly saw Keynes’s weaknesses, analytical errors, and errors of emphasis, that contributed a lot to the undermining of Keynesianism that came later. But what this student could see was that Keynes was someone deeply concerned with the great issues of his time, and attempting to grapple with these problems honestly and urgently. This young man is so different from most of today’s graduate students and economists.

RSJ, thank you and well said.

RSJ.1: But I remember what macro was like *before* we had rational expectations. We used mechanical models of adaptive expectations. We really did have models which said that people would underpredict inflation year in, year out, forever, and that you could permanently reduce unemployment by making them underpredict inflation forever. Yes, the assumption that people can instantly deduce the implications of a model and new policy regime and form their expectations accordingly is equally problematic. (Though Lucas, e.g., was never a full-blown RE believer in this sense, as I interpret him.)

Rick: Suppose a physicist were trying to forecast the future time-paths of two pendulums (pendula?). The first has a lead weight; the second a feather. In a vacuum it would make no difference. But outside, on a windy day, it would make a lot of difference. Both could be modelled by an equation of the form Y(t)=R(Y(t-1),X(t)), where Y is velocity and X is wind. But the effect of X would be small relative to Y(t-1) for the lead pendulum, and large for the feather pendulum.

RSJ.2: I think you exaggerate a little, but there's too much truth in what you say, unfortunately.

"Read a random sample of papers every 30 years: 1920, 1950, 1970, 2000. You will see fewer and fewer economic phenomena described in more and more complex ways."

Here's the question: to what extent does "ontogeny repeat phylogeny"? In other words, does the progression from ECON1000, 2000, 3000, 4000, 5000, 6000 do the same thing? From first year to PhD, do we crank up the technique while dumbing down the content? To say we do do this is also to exaggerate, but there's too much truth in that statement too to be ignored.

I disagree with you on the Phelps/Lucas island parable, by the way. I think it contains an interesting insight, by way of a metaphor. And I've always thought Lucas 72 was a brilliant paper, even if I never did understand the math. And even if I ended up rejecting the model. It changed how we think about equilibrium, and changed how we think about what policy is. Policy is not an event (a particular level of M, G, or T); it is a process describing how M, G, and T evolve over time.

Which all goes to confirm my view that we need more RSJ's in economics. Someone who thinks about the econ, but who wouldn't be snowed/scared off by the math.

My biggest disappointment as a prof is when I really turn a student onto econ, but that same student can't do math. Where the hell do I tell him to go? (My answer, pathetic as it is: blogs.) God help us all.

"Where the hell do I tell him to go?"

Tell them to take math courses! What's the math pre-requisite to get into econ? Do you guys do like the engineers and work the required math courses into the program? Maybe you need to increase the required math at the undergrad level so students are clobbered if they go on.

Nick,

I think you are completely correct about macroeconomic forecasting. However, I don't view this as a bad thing.

Fundamentally, as forecasters we use historical data to estimate the structure of the economy and then try to determine what we think will happen to a few exogenous variables in the short term to get an idea of where the general economy will move. The determination of these exogenous variables is the key part of economic predictions - as like anything that ends up with predictions they are really a value judgment.

Sometimes we find that a key variable follows some sort of auto-regressive process, or we have a view that it will move somewhere, or we just believe we have a good handle on where the number will go by magic.

That is why, when economic forecasters get different results, they will often argue about the path of some exogenous variable (the TOT, the exchange rate) and attribute the difference in forecasts primarily to this.

I don't view prediction as a major element of economics - even for economic forecasters! I believe that description of what has happened, and what a shift in an exogenous variable entails and would lead to (within bounds) is the focus. Describing the economic environment and painting risks. Of course, this is also the reason why I would never expect an economic forecaster to pick a sharp shift in economic activity, or any sort of structural shock.

RSJ
I agree with you in principle, but the problem is that economic math is lazy... not lazy in a rigorous sense of whether the equations balance, but in application.
In physics, three things have to happen to have a viable theory:
I - The math has to work out, i.e. it cannot be disproved by contradiction within its own logical framework
II - It has to fit nicely with already established theory or else disprove it directly (i.e. quantum to classical via the correspondence principle)
III - It has to be physically verifiable

Don't get me wrong, physics now has the same issues as economics, namely string theory, in that we have all this "elegant" math and no way to verify anything... in reality it's imaginary, so it violates III and cannot be taken seriously... the problem is that it is taken seriously, and it takes up all the funding and best young minds (that's the economics of applied research for you, though), and for me that isn't physics, it is imaginary math.

And further, I guess, I define brightness not by how elegant someone's math is but by how simply the explanation of their math relates to and explains real scenarios. The problem isn't necessarily with economists not being that bright; it's "fake" or "imaginary" economic researchers in general taking the easy way out, collecting funding and not having to produce things of "real" value...

Nick
A serious physicist would never try to model a feather pendulum outdoors classically; instead he/she would model a theoretical pendulum and define the conditions that the model requires to be accurate. In that case those conditions would call for a different model. They would likely use a chaotic system to try to determine the weather pattern, or a statistical system to give a range of possible positions for a floating object confined to a specific volume and try to assign a probability to each position... kind of like what they do for an electron around a nucleus, only a little more ad hoc.

Nick,

I agree with the mathematical premise of your claim: (a) if the reduced form of the model is Y(t+1) = R[X(t+1)], then with no additional information on X(t+1) it is not possible to forecast Y(t+1); (b) on this basis the only way to forecast Y(t+1) must be a forecast of X(t+1) from X(t); and (c) endogeneity of Y will throw things off, so if Y(t+1) = R[Y(t),X(t+1)], we would also need Y(t-1) and X(t), ad infinitum.

That said, the reason why model-based forecasting provides some utility, in my view, is because: (a) The profession believes that Y(t+1) = R[X(t+1),X(t)], so that lagged values of X may be useful in predicting contemporaneous values of Y, even if we could not forecast future values of X precisely. This is why many economic forecasters pay attention to so-called leading indicators. (b) Moreover, if the task is to forecast the business cycle (as is often the case among developed-economy forecasters), there is a possibility that cycles (by definition) demonstrate some replicability across time, and hence Y(t+1) = R{X(t+1),f[X(t-3)]}, say. We may have a better sense of the functional form of f(.) than the data-generating process for X, in which case the forecast will be superior to a random walk. It is also in this sense that an economic forecast could imperfectly capture some of the model-based forecasts of physical processes (if, say, we believe that the forces shaping the evolution of business cycles are not just exogenous shocks but also structural variables). (c) While endogeneity is certainly problematic, we do have tools that have improved our ability to account for some endogeneity of processes (albeit in a limited fashion)---VAR and SVAR come to mind and are used routinely in forecasting, and System and Difference GMM are also fairly widespread now. (d) Stickiness in the dependent and independent variables may also mean that lagged values of these variables offer some predictive value, so Y(t+1) = R[X(t+1),X(t),Y(t)].
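
A minimal sketch of point (a), using simulated placeholder data: with an invented linear model of the form Y(t+1) = b0 + b1*X(t) + b2*X(t-1), the one-step-ahead forecast needs only already-observed values of the leading indicator, not a forecast of future X:

```python
import numpy as np

# Simulated stand-in for a leading indicator X (say, core shipments)
# that drives Y (say, investment) with a one- and two-period lag.
rng = np.random.default_rng(5)
T = 400
x = rng.normal(size=T)
y = np.empty(T)
y[:2] = 0.0
for t in range(2, T):
    y[t] = 0.5 + 0.8 * x[t - 1] + 0.3 * x[t - 2] + rng.normal(scale=0.2)

# OLS on [1, X(t-1), X(t-2)]; no forecast of future X is needed for t+1.
Z = np.column_stack([np.ones(T - 2), x[1:-1], x[:-2]])
beta = np.linalg.lstsq(Z, y[2:], rcond=None)[0]
y_next = beta @ np.array([1.0, x[-1], x[-2]])
print("estimated coefficients:", beta.round(3))
print("one-step-ahead forecast of Y:", y_next.round(3))
```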

A related thought: I think that the failure of economists to be able to predict *crises*, as opposed to the evolution of the economy either in calmer trend growth phases or in a post-crisis recovery phase, is precisely because the relative strength of the current forecasting paradigm (as described above) is the ability to capture the evolution of endogenous processes better than a naive random walk (except in the most complex and price-flexible markets, such as financial asset markets). This calls for us to perhaps better incorporate discontinuities into our current macro models (or incorporate more macro into our current crisis models), something that has yet to be systematically done.

I would not equate our current failure to build models that successfully forecast the macroeconomy with the fact that doing so is a fool's errand. If anything, the relative failure of our endeavors calls for us to work even harder at developing better models. While knowing when to quit is certainly a valuable trait in poker, I do think that science advances by a willingness of its practitioners to humbly continue to try to push the boundaries, even when it seems like a fool's errand.

