Take just one example: the Calvo pricing model. In that model, the Calvo fairy visits each firm at random, taps it with her wand, and lets it change its price. The probability of her visiting in any period is 1/n, so she visits each firm on average every n periods. Firms know this and set prices rationally when she visits.
Make one very small change to that model. Assume she is not random. Assume she visits a fraction 1/n of firms each period, and visits each firm exactly once every n periods. Firms know this and set prices rationally when she visits. That's a different model. I like it better than the first model; but it's a nightmare to solve.
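For concreteness, here is a minimal sketch (mine, not part of the original post, with made-up numbers for n, the number of firms, and the horizon) of the only difference between the two models: which firms the fairy taps each period.

```python
import numpy as np

rng = np.random.default_rng(0)
n_firms, n, T = 1000, 4, 12  # illustrative: 1000 firms, visited on average every n = 4 periods

# Random (Calvo) fairy: each firm is visited with probability 1/n each period,
# regardless of when it was last visited.
random_resets = rng.random((T, n_firms)) < 1 / n

# Non-random fairy: firms are split into n fixed cohorts; cohort (t mod n) is
# visited in period t, so every firm is visited exactly once every n periods.
cohort = np.arange(n_firms) % n
staggered_resets = np.array([cohort == (t % n) for t in range(T)])

# Roughly 1/n of firms get to reset each period under both schemes, but only
# the second gives each firm a fixed, evenly spaced schedule.
print(random_resets.mean(axis=1))     # hovers around 0.25, with sampling noise
print(staggered_resets.mean(axis=1))  # exactly 0.25 every period
```

Everything interesting in the argument is about how firms set prices given these schedules; the sketch covers only the "who moves when" part.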
Both those models are equally microfounded. Or equally not microfounded, because the fairy herself is, well, just a fairy, and not a real person. She's an ad hoc fairy, who is just a metaphor for our ignorance about why the price of money (the reciprocal of the price level) doesn't behave like the prices of other financial assets, the ones that are traded on centralised exchanges against money. The price of money won't be determined like the prices of those other financial assets, because money is the medium of exchange and the medium of account. So money can't be traded on one centralised exchange with a price of its own. It wouldn't be money if it were. But I digress.
I don't really like either model. But given a choice between only those two models, I prefer the second model to the first. I think it fits the microdata better, and I think it fits the macrodata better too. You wouldn't think it would matter much whether she's a random fairy or a non-random fairy, but it does. The non-random fairy generates inflation-inertia and the random fairy doesn't. You get a sticky inflation rate, and not just a sticky price level, with the non-random fairy.
Trouble is, I can solve the first model, but I can't solve the second model. So if I had to build a formal microfounded macromodel, and solve it, I would be forced to choose the first model and reject the second model.
There's a trade-off between the microfoundations we like and the microfoundations we can solve.
What can I do? I have three options:
1. I can assume microfoundations I don't like (with the random fairy) and solve the macromodel.
2. I can assume microfoundations I like, or, at least, like better than the first (with the non-random fairy), and wave my hands and talk about what I think the macromodel would say if I could solve it.
3. I can write down an equation for an ad hoc Phillips Curve with inflation inertia, wave my hands and say that I think it is roughly what would happen with the non-random fairy, add it to the macromodel, [Update: wave my hands and say I think it is consistent with the agents' behaviour that underpins the other equations in the model], and then solve it.
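To make option 3 concrete: one common "ad hoc Phillips Curve with inflation inertia" is a hybrid form that adds a lagged-inflation term to the usual forward-looking curve. This is my illustration of the kind of equation meant, not necessarily the one Nick would write down:

```latex
% Illustrative hybrid Phillips curve with inflation inertia:
% current inflation depends on lagged inflation, expected future
% inflation, and the current output gap.
\pi_t = \gamma_b \, \pi_{t-1} + \gamma_f \, E_t \pi_{t+1} + \kappa \, y_t
```

The lagged-inflation term is what supplies the inertia: a sticky inflation rate, not just a sticky price level.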
I know, from previous experience, that sometimes I get things wrong when I wave my hands. Sometimes, when I do the math, the results turn out differently than I thought they would.
This isn't an easy choice.
Maybe that's an argument for a fourth option:
4. Same as option 2, except you also do a few computer simulations ("agent based modelling"?) as a check to see if your hand-waving intuition is roughly right.
Hmmm. When I started to write this post, I didn't think it would end up as an argument for that conclusion. I am prejudiced against that conclusion, because computer simulations are something I have never done and wouldn't like doing and would be no good at doing. Oh well. Someone else can do it. But I wouldn't trust any of their results unless they confirm my hand-waving intuition.
[Update: though maybe there's still a trade-off, between microfoundations we like, and microfoundations we can program the computer to solve?]
This post is a sort of follow-up to Noah Smith's good post.
I would note that many other scientific (and more practical) fields that deal with complex systems have gone down route 4: climate, hydraulics, the strength of materials. There is nothing wrong with going for simulations.
Posted by: Felipe | December 18, 2013 at 08:31 AM
Felipe: Yep. Here are my previous thoughts on the subject.
Posted by: Nick Rowe | December 18, 2013 at 08:40 AM
As someone who does simulations for a living (in the private sector), I would be very hesitant to trust anyone's model in a field as data-sparse as macroeconomics. Because stochastic simulations are so sensitive to the initial conditions, the "overfitting" problem becomes extreme. When you have only a couple hundred (or a dozen?) data points and more than five parameters, it is almost too easy to replicate the data. That is why I put more trust in the plausibility and stability of the parameter values (e.g. how stable the price level is) than in the ability of the model to match the data.
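A toy illustration of that overfitting point, with made-up numbers (a dozen observations, and a flexible polynomial standing in for a many-parameter model):

```python
import numpy as np

rng = np.random.default_rng(1)

# A dozen noisy observations from a process the model knows nothing about.
x = np.linspace(0, 1, 12)
y = np.sin(2 * np.pi * x) + rng.normal(scale=0.3, size=x.size)

# A 5-parameter model (degree-4 polynomial) already "replicates" the data well;
# with 12 parameters (degree 11) the in-sample fit is essentially exact,
# which says nothing about whether the model is right.
for degree in (4, 8, 11):
    coefs = np.polyfit(x, y, degree)
    rmse = np.sqrt(np.mean((y - np.polyval(coefs, x)) ** 2))
    print(degree + 1, "parameters, in-sample RMSE:", rmse)
```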
Posted by: honeyoak | December 18, 2013 at 09:42 AM
honeyoak: the point of simulating here, though, is just to solve a model that does not have an analytic solution. There is no issue of measuring or calibrating stochastic model parameters ("fitting") from historical data. It's just a glorified numerical integration. The model remains exactly as (un)trustworthy as it ever was.
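A deliberately trivial example of that distinction (my toy, not Phil's): "solving numerically" just means finding the values the model's equations imply, with no historical data and no fitting involved.

```python
import numpy as np
from scipy.optimize import brentq

# A toy equilibrium condition with no closed-form solution: x = exp(-x).
f = lambda x: x - np.exp(-x)

# Solving the model numerically means finding the x that satisfies it;
# the answer is exactly as (un)trustworthy as the equation itself.
x_star = brentq(f, 0.0, 1.0)
print(x_star)  # roughly 0.567
```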
Posted by: Phil Koop | December 18, 2013 at 10:10 AM
Nick,
The "as a check" point is key here in my opinion. No model is "right". Anyone who thinks their model is "right" is deluded. Each model, hopefully, just helps piece together the picture. Using different approaches as checks on each other is a hugely valuable process. I find I learn most in economics when I come across a situation where model 1 says X and model 2 says not-X, because working out why usually tells me something I didn't know before.
Posted by: Nick Edmonds | December 18, 2013 at 11:51 AM
"4. Same as option 2, except you also do a few computer simulations ("agent based modelling"?) as a check to see if your hand-waving intuition is roughly right.
"Hmmm. When I started to write this post, I didn't think it would end up as an argument for that conclusion. I am prejudiced against that conclusion, because computer simulations are something I have never done and wouldn't like doing and would be no good at doing. Oh well. Someone else can do it. But I wouldn't trust any of their results unless they confirm my hand-waving intuition."
First, there are simulation systems that let the researcher specify the properties of the agents and run simulations without doing much or any programming. Certainly the nuts and bolts programming has already been done. It is not like the researcher needs to become proficient in computer programming. Mainly they have to be able to specify their model precisely. :)
Second, simulations are direct tests of the model. Not as a real world experiment, but as a thought experiment. The main question, it seems to me, is whether the simulations are qualitatively like real world events. Does a model of markets, for instance, exhibit periods of bull markets and bear markets?
Posted by: Min | December 18, 2013 at 11:59 AM
Phil Koop: When (blindly) using stochastic methods for integration you are not guaranteed to get a solution. You could also get an infinite number of solutions. That is why researchers "fix" specific parameters at "reasonable" values while integrating over others. Otherwise, this is a grand exercise in tea-leaf reading, because the model is too complex to shed insight a priori.
Posted by: honeyoak | December 18, 2013 at 12:27 PM
Maybe economists should back up a little and start by trying to predict the behavior of groups of chimps first. At least the chimps won't be reading your research papers or trying to copy your models and use them to their advantage. One less feedback loop to worry about.
Posted by: Tom Brown | December 18, 2013 at 02:10 PM
... BTW, how do you translate "easing monetary policy" into a zoo setting?
Posted by: Tom Brown | December 18, 2013 at 02:13 PM
Why can you not solve the non-random model?
I'm not an economist, but a mathematician, and pretty much any system of differential equations can be solved numerically with well-known techniques.
What about a numerical solution instead of an analytical solution?
Posted by: Eigenscape | December 19, 2013 at 05:37 AM
Eigenscape: (I am bad at math).
In the random model, there's a trick that lets you write down an equation for inflation as a simple function of expected inflation next period and output this period.
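Presumably the equation Nick means is the standard New Keynesian Phillips curve that drops out of the Calvo (random-fairy) model:

```latex
% New Keynesian Phillips curve from the Calvo model: inflation today depends
% only on expected inflation next period and the current output gap.
\pi_t = \beta \, E_t \pi_{t+1} + \kappa \, y_t
```

Here beta is the discount factor and kappa depends on how often the fairy visits; no lagged inflation appears, which is why the random-fairy model has no inflation inertia.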
In the non-random model, IIRC, you get n lags and n leads in expected inflation and output. Yes, you can solve it, but it is an ugly great big equation. And then you have to solve that equation simultaneously with the rest of the model's equations. It's going to be an ugly mess, and you won't know what it means.
Posted by: Nick Rowe | December 19, 2013 at 07:34 AM
I think ABM will probably end up being the best route to microfounded models. That said, as of yet, I think it is basically equivalent to hand waving. Basically the process seems to be:
1. Specify an initial set of rules governing the agents
2. Come up with some macro data to "calibrate" the model
3. Run the simulations
4. Tweak the rules until the model achieves the desired result
5. Run your experiment
6. Make your claims based on the experiment
In other words, ABM as practiced today in many applications is like data mining, only it's more difficult to identify. At this point ABM is hand waving hidden under the veil of complex computation. It's no surprise to me that ABM, or any type of microfounded model, still loses in forecasting competitions. ABM has a long way to go. You can pretty much build a model to produce whatever result you want, and there is little to distinguish the good models from the bad. I often wonder if there is a way to mix Bayesian model-comparison methods with ABM.
Posted by: RAstudent | December 19, 2013 at 11:46 AM
RA: "You can pretty much build a model to produce whatever result you want and there is little to distinguish the good models from the bad."
That is in general true of model building. What makes ABM special in that regard?
Posted by: Min | December 19, 2013 at 01:58 PM
Min,
You have a point. I just think there is currently very little structure in ABM. It's all over the place and it's difficult to separate the wheat from the chaff, more so than in other areas.
Posted by: RAstudent | December 19, 2013 at 04:01 PM