
Comments


I would note that many other scientific (and more practical) fields that deal with complex systems have gone route 4. Like climate, or hydraulics, or materials resistance. There is nothing wrong in going for simulations.

As someone who does simulations for a living (in the private sector), I would be very hesitant to trust anyone's model in a field as data-sparse as macroeconomics. Because stochastic simulations are so sensitive to initial conditions, the overfitting problem becomes extreme. When you have only a couple hundred (or a few dozen?) data points and more than five parameters, it is almost too easy to replicate the data. That is why I put my trust in the stability of the estimated parameter values (e.g. how stable is the price level) rather than in the model's ability to match the data.
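The point about five-plus parameters and only a handful of data points can be made concrete: with as many free parameters as observations, a model can reproduce the data exactly no matter whether it captures anything real. A minimal sketch, with made-up numbers, using exact polynomial interpolation:

```python
def lagrange_fit(xs, ys):
    """Return a degree-(n-1) polynomial passing exactly through n points
    (Lagrange interpolation): one parameter per observation."""
    def p(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for j, xj in enumerate(xs):
                if j != i:
                    term *= (x - xj) / (xi - xj)
            total += term
        return total
    return p

# Six arbitrary "data" points: a six-parameter model reproduces them
# perfectly, which says nothing about how it predicts a seventh point.
xs = [0, 1, 2, 3, 4, 5]
ys = [2.0, -1.0, 0.5, 3.0, -2.0, 1.0]
model = lagrange_fit(xs, ys)
```

A perfect in-sample fit here is guaranteed by construction, which is exactly why fit alone is weak evidence for a model.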

honeyoak: the point of simulating here, though, is just to solve a model that does not have an analytic solution. There is no issue of measuring or calibrating stochastic model parameters ("fitting") from historical data. It's just a glorified numerical integration. The model remains exactly as (un)trustworthy as it ever was.
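"Glorified numerical integration" can be read quite literally: you estimate an expectation the model implies by averaging over simulated draws, rather than solving the integral analytically. A toy sketch (the target expectation, E[x²] for a standard normal, is purely illustrative):

```python
import random

def mc_expectation(f, n_draws=100_000, seed=42):
    """Estimate E[f(x)] for x ~ N(0,1) by averaging over simulated draws."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_draws):
        total += f(rng.gauss(0.0, 1.0))
    return total / n_draws

# E[x^2] = 1 exactly for a standard normal; the simulation should land close.
estimate = mc_expectation(lambda x: x * x)
```

Nothing about the model's parameters is being "fit" here; the randomness is just a device for computing a number the model already pins down.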

Nick,

The "as a check" point is key here in my opinion. No model is "right". Anyone who thinks their model is "right" is deluded. Each model, hopefully, just helps piece together the picture. Using different approaches as checks on each other is a hugely valuable process. I find I learn most in economics when I come across a situation where model 1 says X and model 2 says not-X, because working out why usually tells me something I didn't know before.

"4. Same as option 2, except you also do a few computer simulations ("agent based modelling"?) as a check to see if your hand-waving intuition is roughly right.

"Hmmm. When I started to write this post, I didn't think it would end up as an argument for that conclusion. I am prejudiced against that conclusion, because computer simulations are something I have never done and wouldn't like doing and would be no good at doing. Oh well. Someone else can do it. But I wouldn't trust any of their results unless they confirm my hand-waving intuition."

First, there are simulation systems that let the researcher specify the properties of the agents and run simulations without doing much or any programming. Certainly the nuts and bolts programming has already been done. It is not like the researcher needs to become proficient in computer programming. Mainly they have to be able to specify their model precisely. :)

Second, simulations are direct tests of the model. Not as a real world experiment, but as a thought experiment. The main question, it seems to me, is whether the simulations are qualitatively like real world events. Does a model of markets, for instance, exhibit periods of bull markets and bear markets?

Phil Koop: When (blindly) using a stochastic method for integration, you are not guaranteed to get a unique solution; you could also get an infinite number of solutions. That is why researchers "fix" specific parameters at "reasonable" values while integrating over the others. Otherwise this is a grand exercise in tea-leaf reading, as the model is too complex to shed insight a priori.

Maybe economists should back up a little bit and start with trying to predict the behavior of groups of chimps first. At least the chimps won't be reading your research papers or be trying to copy your models and then use them to their advantage. One less feedback loop to worry about.

... BTW, how do you translate "easing monetary policy" into a zoo setting?

Why can you not solve the non-random model?

I'm not an economist but a mathematician, and pretty much any system of differential equations can be solved numerically with well-known techniques.

What about a numerical solution instead of an analytical one?
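The "well-known techniques" here can be as simple as the explicit Euler method. A minimal sketch; the test equation dy/dt = y is illustrative, not an economic model:

```python
def euler(f, y0, t0, t1, n_steps=1000):
    """Numerically solve dy/dt = f(t, y) from t0 to t1 with the
    explicit Euler method, starting at y(t0) = y0."""
    h = (t1 - t0) / n_steps
    t, y = t0, y0
    for _ in range(n_steps):
        y += h * f(t, y)
        t += h
    return y

# dy/dt = y with y(0) = 1 has exact solution e^t,
# so y(1) should be close to e = 2.71828...
approx = euler(lambda t, y: y, 1.0, 0.0, 1.0)
```

In practice one would use a higher-order method (e.g. Runge-Kutta), but the principle is the same: the solver hands you numbers, not the closed-form understanding Nick is after below.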

Eigenscape: (I am bad at math).

In the random model, there's a trick that lets you write down an equation for inflation as a simple function of expected inflation next period and output this period.

In the non-random model, IIRC, you get n lags and n leads in expected inflation and output. Yes, you can solve it, but it is an ugly great big equation. And then you have to solve the rest of that model simultaneously with the other equations. It's going to be an ugly mess, and you won't know what it means.

I think ABM will probably end up being the best route to micro-founded models. That said, as of yet I think they are basically equivalent to hand waving. Basically the process seems to be:

1. Specify an initial set of rules governing the agents
2. Come up with some macro data to "calibrate" the model
3. Run the simulations
4. Tweak the rules until the model achieves the desired result
5. Run your experiment
6. Make your claims based on the experiment

In other words, ABM as practiced today in many applications is like data mining, only it's more difficult to identify. At this point ABM is hand waving hidden under a veil of complex computation. It's no surprise to me that ABM, or any type of micro-founded model, still loses in forecasting competitions. ABM has a long way to go. You can pretty much build a model to produce whatever result you want, and there is little to distinguish the good models from the bad. I often wonder if there is a way to mix Bayesian model-comparison methods with ABM.
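The four-step loop described in that comment can be caricatured in a few lines. Everything here is invented for illustration: agents spend a random fraction of income scaled by a single rule parameter, and "calibration" just tweaks that parameter until the aggregate hits a target:

```python
import random

def run_abm(propensity, n_agents=100, n_periods=50, seed=0):
    """Steps 1 and 3: a toy agent-based simulation. Each agent spends a
    random fraction of income, scaled by one 'propensity' rule parameter.
    Returns average aggregate spending per period."""
    rng = random.Random(seed)
    total_spending = 0.0
    for _ in range(n_periods):
        total_spending += sum(propensity * rng.random() for _ in range(n_agents))
    return total_spending / n_periods

def calibrate(target, lo=0.0, hi=1.0):
    """Steps 2 and 4: tweak the rule parameter (by bisection) until the
    simulated aggregate matches the target 'macro data' -- which is exactly
    why a good fit, on its own, proves very little."""
    while hi - lo > 1e-6:
        mid = (lo + hi) / 2
        if run_abm(mid) < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2
```

Since the parameter can always be tuned to hit the target, matching the calibration data is close to automatic; the test of the model has to come from something it was not tuned on.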

RA: "You can pretty much build a model to produce whatever result you want and there is little to distinguish the good models from the bad."

That is in general true of model building. What makes ABM special in that regard?

Min,

You have a point. I just think there is currently very little structure in ABM. It's all over the place, and it's difficult to separate the wheat from the chaff. More so than in other areas.
