I've never used Dynamic Stochastic General Equilibrium (DSGE) modeling techniques for pretty much the same reasons that Noah Smith outlines. I keep meaning to write a post about my misgivings about DSGE, and it appears now is the time. This is going to be a pretty technical and wonkish post, but there's really no way around that: the issue is technical and wonkish.
- Households and firms both face dynamic problems, and DSGE models force the analyst to specify laws of motion for the relevant state variables.
- Households and firms both face problems of optimal choice under uncertainty, and DSGE models force the analyst to specify the stochastic environment.
- The General Equilibrium part essentially says that things add up.
Reasonable people can disagree about the modelling choices made in these steps, but as far as I'm concerned, the chief advantage of the DSGE approach is that it forces people to think about these three issues when studying the macroeconomy.
Okay, so you've specified the model - now what? If all the functions in your model are quadratic, then all the first-order conditions consistent with optimal choices are linear. And if there is a solution to a linear system, we know how to get it.
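To see why that's so convenient, here's a minimal sketch in Python. The two-variable setup and the numbers are made up purely for illustration (they aren't taken from any particular model): with a quadratic objective, the first-order conditions form a linear system, and 'solving the model' is one call to a linear solver.

```python
# A minimal sketch (not any particular DSGE model): with a quadratic
# objective, the first-order conditions are linear, so solving for the
# optimal choices is just solving a linear system.
import numpy as np

# Hypothetical quadratic objective: maximize -0.5 * x'Qx + b'x
Q = np.array([[2.0, 0.3],
              [0.3, 1.0]])   # made-up curvature matrix (positive definite)
b = np.array([1.0, 0.5])     # made-up linear term

# First-order conditions: Qx = b -- a linear system.
x_star = np.linalg.solve(Q, b)
print("optimal choices:", x_star)

# Residual of the first-order conditions at the solution (should be ~0).
print("FOC residual:", Q @ x_star - b)
```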
There's a problem with that approach: you'll never get a model with quadratic preferences and technology published. The even bigger problem is that if you don't assume that preferences and technology are quadratic, then you generally can't solve the model analytically at all.
So instead of assuming the first-order conditions are linear equations, DSGE practitioners generally assume that they can be well-approximated by linear equations. The idea here is that if you zoom in 'close enough' to any nonlinear curve at a given point, you can draw a straight line that won't deviate 'too much' from the curve. If you stop the Taylor series at the first term, you get
f(x) ≈ f(x0) + f'(x0)(x-x0)
where f'(x0) is the first derivative of the function at a particular point x0.
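Here's a toy illustration of how this works in practice - and of how the approximation degrades as you move away from the expansion point. The function below is an arbitrary smooth function chosen for the example; there's nothing DSGE-specific about it.

```python
# A toy illustration of the first-order (linear) approximation
# f(x) ~ f(x0) + f'(x0)(x - x0), using an arbitrary nonlinear function.
import numpy as np

def f(x):
    return np.log(x)          # an arbitrary smooth nonlinear function

def f_prime(x):
    return 1.0 / x            # its first derivative

x0 = 1.0                      # the point we linearize around

def f_linear(x):
    return f(x0) + f_prime(x0) * (x - x0)

for xv in [1.01, 1.1, 1.5, 2.0]:
    true_val = f(xv)
    approx_val = f_linear(xv)
    print(f"x = {xv:4.2f}  true = {true_val:.4f}  "
          f"linear = {approx_val:.4f}  error = {abs(true_val - approx_val):.4f}")
```

The error is negligible close to x0 and grows quickly as you move away from it - which is exactly the issue raised next.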
Once you've gone this route, you have to persuade people that the region of interest is in fact the values of x that are close enough to x0 to justify using the linear approximation. One of the features of the model that we can extract analytically - that is, without using numerical approximations - is the long-run (stochastic) steady state (balanced growth path). Since the model usually predicts that the economy will eventually converge to this path, it's reasonable to think that it will spend most of its time in its neighbourhood.
Or at least, it sounds reasonable enough for a significant number of macroeconomists - probably a majority of them. But this is the part I've never been comfortable with. Firstly, there's the whole idea of an approximation to the solution of a model. I'm not against approximations - that's what models are. But that's just it: the model is the approximation. If a model performs poorly in a dimension of interest, then you can work on improving it. But what are you to conclude if an approximation to a model does poorly? That the model is over-simplified in some way, or that the numerical approximation you're using isn't appropriate for the problem at hand?
The other problem I had - and this is one where some progress has been made - is that the lack of analytical solutions to DSGE models made them hard or impossible to use in empirical work and forecasting. Models end up being calibrated to fit unconditional moments rather than the actual data.
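To make concrete what 'calibrated to fit unconditional moments' means, here's a deliberately simple Python sketch. The AR(1) specification and the target numbers are invented for the example; the point is only that the data enter through two summary statistics, not through the observed time series itself.

```python
# A toy illustration of calibrating to unconditional moments:
# treat the output gap as an AR(1), y_t = rho*y_{t-1} + eps_t, and pick
# (rho, sigma_eps) so the model's unconditional moments hit two target
# numbers.  The targets below are made up for illustration.
import numpy as np

target_autocorr = 0.90      # assumed first-order autocorrelation of output
target_std      = 0.018     # assumed unconditional std dev of output

# For an AR(1), the unconditional moments are known in closed form:
#   autocorr(1) = rho,   var(y) = sigma_eps^2 / (1 - rho^2)
rho = target_autocorr
sigma_eps = target_std * np.sqrt(1.0 - rho**2)

print(f"calibrated rho       = {rho:.3f}")
print(f"calibrated sigma_eps = {sigma_eps:.5f}")

# Note what never appears above: the actual time series of output.
# Only its summary statistics are used.
```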
So what can you do? Noah Smith says "do econometrics instead", and that is in fact what I ended up doing. But I did take a stab at a more substantive solution, and I might as well tell you about it. You almost certainly haven't read about it before. (Non-economists should probably stop reading now.)
Suppose the state variable of a dynamic problem is x and the instantaneous return function is F(x,c) - c⋅p, where c is a control variable and where p is exogenous (c and p can be vectors). Suppose also that the law of motion for the state variable is x' = g(x,c), where x' is the value of the state variable in the next period. If V(x,p) is the value function, the Bellman equation is
V(x,p) = max_c { F(x,c) - c⋅p + β V(x',p') } s.t. x' = g(x,c)
where 0 < β < 1 is the discount factor and p' is next period's value of p. If we substitute the law of motion directly into the continuation value, then for an arbitrary (not necessarily optimal) choice of c,
V(x,p) ≥ F(x,c) - c⋅p + β V(g(x,c),p')
where the inequality holds with equality at the optimal choice of c. Rearrange that inequality and assume for a minute - I'll get back to this - that p=p':
F(x,c) ≤ c⋅p + V(x,p) - β V(g(x,c),p)
This suggests a problem inverse to that of the Bellman equation:
f(x,c) = min_p { c⋅p + V(x,p) - β V(g(x,c),p) }
Under certain regularity conditions, you can demonstrate that there is a dual relationship between F and V. But what I really liked about this approach is what you get from the first-order conditions to that minimisation problem:
c = - V_p(x,p) + β V_x(g(x,c),p) g_c(x,c)
It's invariably the case that the law of motion - capital and/or asset accumulation - is linear in the control variable, so g_c(x,c) isn't a function of c. If so, then we have what amounts to a version of Hotelling's Lemma - and an analytical solution to a dynamic programming problem.
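Here's a minimal sketch of what the mechanics look like in code. The functional form of V and the law of motion below are made up purely for illustration - this is not the specification in any of the papers discussed here - but it shows how specifying V directly and doing the minimisation over p delivers a closed-form return function.

```python
# A purely illustrative sketch of the inverse (dual) problem above:
# specify a value function V(x, p) directly, let sympy carry out the
# minimisation over p, and read off a closed-form return function f(x, c).
# The quadratic form for V and the linear law of motion are assumptions
# made for this example only.
import sympy as sp

x, c, p, beta, delta, a, gamma, theta = sp.symbols(
    'x c p beta delta a gamma theta', positive=True)

# Hypothetical value function, convex in p so that the minimisation
# below has an interior solution.
V = a * x + sp.Rational(1, 2) * gamma * p**2 - theta * x * p

# Law of motion, linear in the control (e.g. capital accumulation).
x_next = (1 - delta) * x + c

# Objective of the dual problem:  c*p + V(x, p) - beta * V(x', p)
objective = c * p + V - beta * V.subs(x, x_next)

# First-order condition in p, solved analytically.
p_star = sp.solve(sp.diff(objective, p), p)[0]

# The implied return function: substitute p* back in and simplify.
f_xc = sp.simplify(objective.subs(p, p_star))

print("p*(x, c) =", sp.simplify(p_star))
print("f(x, c)  =", f_xc)
```

The half page of algebra is done by the machine, and whatever properties you want in the return function can be imposed through the choice of V.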
This approach is due to Larry Epstein, who was teaching at the University of Toronto when I was a grad student there. He derived the basic duality theory in continuous time here and applied it to data with Mike Denny here.
I thought - and think - that this was brilliant. Instead of specifying a returns function F and bashing out a solution using numerical methods, all you had to do was specify a value function and do half a page of algebra. Whatever properties you wanted to impose on F could be achieved by imposing the appropriate properties on V.
But there were two problems. One was that the original derivation was in continuous time. This wasn't an insurmountable obstacle, and I eventually figured out how to reproduce the duality theorem in discrete time. The other was static expectations. My first attempt at a workaround there was to suggest that while firms might forecast prices over a given forecast horizon, they would simply assume that prices would be constant after that horizon. It sounded plausible (to me, anyway), but it raised the question of how long that forecast horizon should be. I tried to answer that question here (ungated version here).
Another way around the static expectations problem is to simply specify a first-order Markov process for p. The duality theorems go through as before, but at the cost of making sure that you choose a functional form for V that is sufficiently tractable to allow analytical expressions for E[V(x',p')|p]. I even wrote a paper applying the approach to the problem of dynamic discrete choice that has been residing in my desk drawer for 15 years and which I might as well post now.
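To make that tractability requirement concrete, here's a small sketch of one case where E[V(x',p')|p] is available in closed form: V quadratic in p and p following a Gaussian AR(1). Both the functional form and the process are assumptions made for this example only.

```python
# A toy illustration of the tractability requirement: if V is quadratic
# in p and p follows a Gaussian AR(1), p' = rho*p + eps, eps ~ N(0, sigma^2),
# then E[V(x', p') | p] only needs E[p'|p] = rho*p and
# E[p'^2|p] = rho^2*p^2 + sigma^2, both available in closed form.
import sympy as sp

p, rho, sigma = sp.symbols('p rho sigma', positive=True)
A, B, C = sp.symbols('A B C')   # stand-ins for A(x'), B(x'), C(x')

# Conditional moments of the AR(1) price process.
E_p_next  = rho * p
E_p2_next = rho**2 * p**2 + sigma**2

# V(x', p') = A + B*p' + C*p'^2, so the conditional expectation is linear
# in the two moments above -- no numerical integration required.
E_V_next = A + B * E_p_next + C * E_p2_next

print("E[V(x', p') | p] =", sp.expand(E_V_next))
```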
And then I stopped. These papers were a lot of work and they weren't getting any traction. But maybe there's a grad student out there who wants to do macro modelling, is wondering about alternatives for solving dynamic models, and who may have better luck.
Stephen: "is that the lack of analytical solutions to DSGE models made it hard/impossible to use in empirical work and forecasting."
I'm thinking about your experience in light of my recent post on methodology, which talked about the virtues of models with predictions. Now there you were, realizing that there was a practical problem with the models you were working with, and you had two alternatives. One was to do brutally difficult and technical work that might solve the problem, but was hard to get people to read or publish. The other was to blog instead. You've certainly had way more impact taking the latter route. I'm wondering what it would have taken for the former route to work out for you?
I guess that's in some ways a sociology-of-economics question - what would it have taken for your answer to gain traction?
Posted by: Frances Woolley | March 29, 2013 at 09:27 PM
Stephen,
Thanks for this interesting post. You certainly know more than me about DSGE models, and I found this very informative.
I'd like to comment on one thing you've said. In an environment with inter-temporal maximization, general equilibrium has much stronger implications than simply ensuring that "things add up". That makes it sound innocuous, as though it were simply an accounting identity. In fact, it is a far stronger assertion that requires solving for a unique equilibrium time path across all periods (Samuelson, 1958).
As you said, reasonable people can disagree about modeling choices. I am neither for, nor against, general equilibrium in principle. However, it is more than a simple accounting identity, and it certainly does not come for free. While general equilibrium is an important tool, I suspect we may miss many insights about the real-world economy by taking for granted the need for such a strong notion of equilibrium.
I'm sure you were well aware of all this, and I hope this doesn't seem too picky. Just thought it added to the discussion. Again, great post!
Posted by: Dan | March 30, 2013 at 05:05 AM
You, or your professor, have re-discovered Pontryagin's minimum principle, which I agree is brilliant. It is also well known that it is much harder to find stochastic solutions with a global approach such as this than with Bellman's iterative approach. However, it has been done, and there are papers on stochastic Pontryagin solutions that the budding grad student may wish to consult.
I don't think this has much to do with whether the DSGE approach is a "dead end", as declared by Axel Leijonhufvud and many others, or with how much insight can be gained from these models. The problem is not the well-behavedness of the solutions to the linear approximation but the well-behavedness of the model under deformations, as well as the aggregation problem when many individuals with different endowments are each optimizing a different value function, each possibly using different approximation techniques.
In physics, there is one decision maker -- "nature" -- and one value function. But in economics, you may have 2 actors with mutually inconsistent expectations of future prices, different value functions, and each using a different approximation to determine their present day consumption. Even if there were complete markets for every possible opinion that someone may have about the future, nothing mandates participation in these expectations markets.
But the aggregate of the two solutions [(c_1(t), l_1(t)), (c_2(t), l_2(t))] of two optimization problems with two different expectations of future prices is not, in general, the solution of a single multi-period optimization problem with a single expectation of future prices. Add in a hundred million decision makers, and it is easy to see why some people don't see a lot of value in this approach, and prefer to just look at known patterns in aggregate behavior.
Moreover, as every model is only an approximation to the underlying behavior being modelled, there is no point in considering models whose conclusions are reversed when small terms are added to the value function, or whose solutions differ discontinuously when they are calculated by optimizing over a truncated number of periods versus over all periods.
But dynamic optimization problems famously do not have this property in general. Other disciplines, such as physics, do restrict themselves to only considering certain functional forms in their Lagrangians for this reason.
Posted by: rsj | March 30, 2013 at 05:30 PM
rsj: No, that's not Pontryagin's minimum principle. The minimization is with respect to the *exogenous* variable p, not the control variable.
Dan: To the extent that equilibrium paths must be consistent with the laws of motion, GE still amounts to making sure that things add up.
Frances: That really is - as they say around these parts - la question qui tue. Maybe I should have gone to a higher-profile school for my PhD.
Posted by: Stephen Gordon | March 30, 2013 at 05:54 PM
It is interesting to note that economists assume unbounded rationality, but in fact the Taylor approximation implies some kind of bounded rationality on the part of the model builder.
Posted by: Johannes | March 31, 2013 at 05:52 AM
Interesting article. In fact, we face the exact same problem with economic growth models, endogenous or not. Indirect methods (Pontryagin's Maximum Principle) that solve the underlying optimal control problem are somewhat limited due to the linearization/log-linearization, and the numerical tools available boil down to solving ordinary differential equations derived from the necessary optimality conditions.
We are about to publish an article in the Journal of Economic Dynamics and Control that allows one to use a direct method to solve infinite-horizon nonlinear growth models without having to linearize. The NLP problem is fully nonlinear, with all its multiple-equilibria nuisances. But still, a fairer approximation of reality. Anyway, this allows for such interesting things as studying multiple sequential shocks or time-invariant tax policies (which in the analytical version are intractable problems).
Posted by: Mário Amorim Lopes | April 24, 2013 at 06:38 AM