
Comments


I don't have any personal experience with ABM either, but I think that things are a little more nuanced than you think, Nick. Rather than having a black box, where we don't understand what is happening in the model, think of it as a grey box. We have some idea what is going on, but might not know everything.
I guess it is a little bit like the difference between intuition and formal proof in standard economic models. If we understand the intuition of your lost paper well enough, then it can still be useful even if we cannot recreate the formal proofs.

Is ABM really just a set of rules of thumb? That sounds pretty lousy to me. Look at evolutionary economics, for instance: people's heuristics and expectations adapt as the economy changes. At least with equilibrium models, people's actions respond to changes in their incentives and expectations, which in turn respond to changes in the economy or policy, even if they do so with unrealistic immediacy or accuracy.

Nick,

As I read you, you're essentially asking what's an explanation, or, more fundamentally, what's knowledge? Let me therefore offer you my answer. To me an explanation is to derive something we did not know from what we do know or from what is given. What is known and/or given is unique to each discipline, as that is what defines and limits each discipline, together with a set of phenomena.

To make a long story short I'd say: no, if you cannot reduce a recession to the result of human behavior then you have not provided an economic explanation. Worse yet, if your explanation is a black box, then you have not provided an explanation in any discipline.

You're quite right therefore to find it unsatisfactory.

I think the fact that we don't understand exactly how an agent-based simulation generates a recession is actually indicative of the usefulness of the simulation. If we could understand all recessions from first principles, agent-based simulations wouldn't be necessary.

To draw an analogy to math, given a differential equation we would ideally like to find an analytic solution. However, in many cases, we have to make do with numerical methods. We can still analyze the behavior of the system using numerical approximations, and gain macroscopic knowledge (attractors, periodic behavior, asymptotic behavior, etc.).

Brito: I think those rules of thumb can be fairly sophisticated. Like least squares learning, where the agents in effect run regressions to forecast. And I think some models maybe include evolution, in terms of survivorship.
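[For concreteness, a minimal sketch of least squares learning (my gloss, with invented parameters, not any particular paper's algorithm): each period the agent re-runs a regression of the variable on its own lag, and forecasts from the latest estimates. NR]

```python
# A minimal sketch of least squares learning; all parameters invented.
import numpy as np

rng = np.random.default_rng(0)
true_a, true_b = 1.0, 0.8          # the data-generating process (unknown to the agent)
y = [2.0]
forecasts = []

for t in range(1, 200):
    if t >= 10:                     # wait for a minimal sample
        Y = np.array(y[1:])
        X = np.column_stack([np.ones(len(Y)), y[:-1]])
        a_hat, b_hat = np.linalg.lstsq(X, Y, rcond=None)[0]
        forecasts.append(a_hat + b_hat * y[-1])   # forecast next period
    y.append(true_a + true_b * y[-1] + rng.normal(0, 0.1))

print("last forecast vs realized value:", forecasts[-1], y[-1])
```

As the sample grows, the agent's estimates converge towards the true process, which is the sense in which the rule of thumb is "sophisticated".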

Nick: fair enough. I think models like this would be much more useful to banks, pension funds, insurance companies, environmental economists and organisations like the CBO. On the other hand, I do not think they would be of much use to central bankers or macroeconomists focusing on more short run problems, where it's not the predictions that really matter, but the assumptions and the diagnostics which are more mundane (e.g. is there deficient demand? Is monetary policy loose or tight? Has the natural rate of unemployment increased? etc...)

Martin: "To make a long story short I'd say: no, if you cannot reduce a recession to the result of human behavior then you have not provided an economic explanation."

Well, they could reply that they have reduced a recession to the result of human behaviour. (Actually, and more so than models that start with aggregate relationships.)

I do not deal in agent-based models, but I will guess that, like in most things, there can be good and bad ABMing.
Just as a traditional (good) economic model helps us build intuition through results and comparative static exercises, a good ABM should help us build intuition by tweaking simulations and observing the results. ABMs could be the great macroeconomic experiment lab you have been dreaming about for so long, Nick!

Nick

This is an interesting post, and I'm glad you've written it. I see the point of agent-based modelling as being able to answer that great Deirdre McCloskey critique: for too long, economists have focused on why and whether, without asking that other fundamental question, 'by how much'?

Why should we focus on just two categories, recession and non-recession? How does the interaction of economic actors sometimes produce two years of below-trend growth, sometimes a one-year collapse followed by a resumption of trend, and sometimes a collapse to the tune of a halving of nominal spending?

AB models don't really help you deduce the chain of recession logic. Chances are, your own theory of recessions has already been built into the model, as part of the simple but sophisticated rules/heuristics that your agents follow. The aim is to see whether your model is able to produce the kind of results that economies produce, in a wide variety of circumstances. And hence to update the next iteration of your model from the corrections that the first simulation(s) suggest. And so on. You're updating your theory of recessions as you go along, but you're doing so through simulations, not deduction, or worse, as many economists are wont to do, from simplistic charts.

I am of the camp that says you need theorems. Simulations, in and of themselves, are useful for building intuition, calibration, and policy analysis. But if you don't have a theorem, then I don't think you have a theory.

It's possible to come up with models which have no closed form solution, and yet prove theorems about the qualitative properties of the solutions, and authors of agent based models need to deliver these types of theorems.

Nick,

definitely true, and I agree with that: ABM is just a very messy and complicated explanation. However, if all you have to go by is:

"Even if we do lots of different simulations, with different assumptions, and list all the cases where we do get recessions and list all the other cases where we don't get recessions, do we understand recessions?"

and the simulation itself is a black box, then you do not have an explanation. For we already know that lots of agents interacting can result in a recession: that's what prompted the question in the first place.

It seems to me, however, that if you have to program it all, then you're in essence not doing much more than comparative statics? The outcome of a program seems to me to be no different than a version of the artsy non-linearity you discussed earlier.

We will know therefore what 'caused' a recession when we figure out what to change about the program to avoid the recession. This is no different than stating that an excess demand for money 'caused' a recession.

There are quite a few strategies for effectively learning what causes a phenomenon in an ABM; they all, I think, stem from understanding what an ABM really is: an experimental system where you control all inputs, and are free to change things as needed. That means that learning from a given ABM means making hypotheses about what assumptions, rules, etc. are necessary to get a given outcome, then testing those hypotheses via controlled manipulation of the actions and assumptions of your agents. It also means using 'scaffolding' (HT to Andrew Gelman for the term); that is, creating a set of models with different but related assumptions, as well as varying degrees of complexity; by using this sort of strategy, we can learn how robust an observed relationship is to various perturbations. These scaffolding models can easily include simpler analytical models, or verbal descriptions of mechanisms.
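[A hedged sketch of that "controlled manipulation" idea, with a toy model invented purely for illustration (the credit-constraint rule and all the numbers are assumptions): run the same simulation with one mechanism toggled, holding the random seeds fixed, so any difference in outcomes is attributable to the mechanism. NR]

```python
# Controlled manipulation in a toy ABM: toggle one assumption, keep seeds fixed.
import random

def simulate(credit_constraint, seed, periods=100, n_agents=100):
    random.seed(seed)
    wealth = [1.0] * n_agents
    for t in range(periods):
        shock = random.gauss(0, 0.05)          # one aggregate shock per period
        for i in range(n_agents):
            spend = 0.9 * wealth[i]
            if credit_constraint and wealth[i] < 0.5:
                spend = 0.5 * wealth[i]        # constrained agents cut back
            wealth[i] += spend * shock         # spending carries the shock
    return sum(wealth)

runs_on  = [simulate(True,  s) for s in range(20)]
runs_off = [simulate(False, s) for s in range(20)]
print(sum(runs_on) / 20, sum(runs_off) / 20)
```

Holding the seeds fixed is what makes this a controlled experiment rather than a comparison of two noisy draws.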

Also, regarding the "black box" issue: this depends pretty heavily on code-sharing practices. If the researcher has done the work to provide code (and make sure it's reproducible!), make sure the interface to the code is easy to understand, and that you can observe the internal state of the model easily, I don't think it'd be any more inaccessible than a mathematical model. It just requires slightly different standards and infrastructure.

Eric: "If the researcher has done the work to provide code (and make sure it's reproducible!), make sure the interface to the code is easy to understand, and that you can observe the internal state of the model easily, I don't think it'd be any more inaccessible than a mathematical model."

OK. I expect I was (implicitly) assuming the code was reproducible. So anyone else can repeat the "experiment". (Sometimes I find mathematical models inaccessible too. I don't trust them, unless I can "get" the intuition as well.)

I think you have to accept a black box not as an explanation but as a path to one. That involves a lot of probing of it, exploration of its state space, exhaustive enumeration, until one can formulate a model of the model. No one considers a weather simulation an explanation, but it is useful as a prediction, and its errors are useful as probes of explanations.

I suppose one point of the modeling is to use computation to generate interesting macro-level patterns which can themselves then be empirically tested. Even if nobody can hold the deep structure of the explanation in their heads, it might suggest interesting phenomenological patterns that would not otherwise have been dreamed up from pure cogitation alone, unassisted by computer power.

I see where you're coming from here Nick. But I'd suggest that there are some cases--hopefully rare--in which it's impossible to open the black box, to gain a macro-level understanding, to write down a "higher level" description, to "see the forest for the trees". There are some problems that we can solve by brute computational force, and only by brute computational force, and *know* that we've solved, but yet not comprehend in the slightest *why* the solution is correct. I have an old post on this, based on what I think is a pretty clear-cut example: chess endgames.

[link here NR]

Perhaps of interest:

[link here rajiv sethi pdf NR]

http://www.unitn.it/files/2_08_gaffeo.pdf [link here pdf NR]

Problems as seen by a practitioner:

[link here pdf NR]

Challenge 1: Fragile Parameter Estimates.


The fragility of parameter estimates potentially translates into other objects of interest, such as inference about the sources of business cycle fluctuations, forecasts, as well as policy prescriptions. Thus, accounting for model uncertainty, as well as for different approaches of relating model variables to observables, is of first-order importance.

Challenge 2: Aggregate Uncertainty versus Misspecified Endogenous Propagation.

The phenomenon that the variation in certain time series is to a large extent explained by shocks that are inserted into intertemporal or intratemporal optimality conditions is fairly widespread and has led to criticisms of existing DSGE models...

Challenge 3: Trends.

Most DSGE models impose strict balanced growth path restrictions implying, for instance, that consumption-output, investment-output, government spending-output, and real-wage output ratios should exhibit stationary fluctuations around a constant mean. In the data, however, many of these ratios exhibit trends. As a consequence, counterfactual low frequency implications of DSGE models manifest themselves in exogenous shock processes that are estimated to be highly persistent. To the extent that inference about the sources of business cycles and the design of optimal economic policies is sensitive to the persistence of shocks, misspecified trends are a reason for concern.

Challenge 4: Statistical Fit.

Macroeconometrics is plagued by a trade-off between theoretical coherence and empirical fit. Theoretically coherent DSGE models impose tight restrictions on the autocovariance sequence of a vector time series, which often limit its ability to track macroeconomic time series as well as, say, a less restrictive vector autoregression (VAR).

Challenge 5: Reliability of Policy Predictions.

More generally, to the extent that no (or very few) observations on the behavior of households and firms under a counterfactual policy exist, the DSGE model is used to derive the agents' decision rules by solving intertemporal optimization problems assuming that the preferences and production technologies are unaffected by the policy change. In most cases, the policy invariance is simply an assumption, and there is always concern that the assumption is unreliable. This concern is typically exacerbated by evidence of model misspecification.

Let's say you had some real microfoundations, and the aggregate behavior of the individuals in the model happened to look exactly like (your favorite version of) the representative agent, but that this were derived through the “black box” – does this mean that models which previously were seen as providing an explanation now no longer do?

If this is the case, is it reasonable to assume that current DSGE models are providing any explanation?

In hard sciences, agent-based models are very popular. The solution to your problem with agent based models is obvious: the paper includes an online link to the code for the model. That way anyone can download the code and re-run the simulation (or variants of it).

Yes, you have to learn how to program a computer to run these simulations. But someone like you can easily employ a PhD student to do it for you.

The key to a good agent-based model is to have a full and accurate understanding of the mechanisms you are trying to model. That's why it is important to know that loans create deposits, rather than the other way around as described in the textbooks.

The other point to make about complex systems is that many patterns emerge from the complexity - they are not intuitive. So you actually require a computer model 'black box' to understand how certain patterns can arise from simple rule-based interactions between agents.

This example about having lost all knowledge is interesting, and believe me, it actually happens in the hard sciences too. Actually, one of the most interesting problems in mathematics is based on exactly that, the Riemann Hypothesis: [link here NR]

He formulated the hypothesis and said the following: "…it is very probable that all roots are real. Of course one would wish for a rigorous proof here; I have for the time being, after some fleeting vain attempts, provisionally put aside the search for this, as it appears dispensable for the next objective of my investigation."

And the proof of this claim has eluded mathematicians for 153 years, and it eludes them still today. Nobody knows if the hypothesis holds. Powerful computers are using brute force to compute ever more solutions of the equation, just to check that it holds. This process could be viewed as something akin to what you suggest. We have a computer program that is designed to give us an important piece of knowledge. The program does not have to be intelligent; it just has to be good enough to provide us with insight into a problem that is too complex for us to understand in any simpler way. And it could have a large impact on the real world too. Random energies observed in quantum mechanics show behaviour that could be described by the Riemann function. Having a proof that the Riemann hypothesis does not hold - even an artificially constructed one - could be a very important thing.

I would like to echo what J.V. Dubois said above.

I think it's a mistake to believe that all problems are solvable or understandable by intuition... The economy is too complex and we have too little information about it to actually have any hope of being able to understand it.

I think agent-based modelling gives us an experimental tool. We might not be able to analytically solve a problem, but we can simulate it and see what the output looks like. I am not an expert, but Monte Carlo modelling is pretty common in physics, where some problems are unsolvable analytically (even trivial-sounding ones like calculating orbits in the presence of many bodies with similar masses). If you can get your simulated model to be similar enough to reality, you start to get some confidence that the micro-foundations you have are actually correct. You can then use that knowledge to evaluate the impact of policy changes.
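[One common way to make "similar enough to reality" operational is to match moments (loosely, the simulated method of moments): pick the parameter whose simulated output best matches summary statistics of the observed series. A minimal sketch, all numbers invented; the stand-in "data" here is a random walk, so high persistence should win. NR]

```python
# Moment matching between a simulated series and a stand-in for observed data.
import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(0, 1, 500).cumsum()        # stand-in for an observed series

def simulate(persistence, n=500):
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = persistence * y[t - 1] + rng.normal()
    return y

def distance(series):
    g, d = np.diff(series), np.diff(data)
    return (g.mean() - d.mean()) ** 2 + (g.std() - d.std()) ** 2

best = min(np.linspace(0.0, 0.99, 34), key=lambda p: distance(simulate(p)))
print("best-fitting persistence:", best)
```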

You still have the problem of deciding whether a simulated output is similar enough to reality. And I think you need much more detailed measurement of reality before we can really settle that issue. I am not sure there is enough richness in official statistics to get confidence in the ability of simulations to replicate reality.

I actually think that online-gaming economies might offer some great datasets to test such modelling ability, and we might get some insight into the real world from them, but I am not sure; we don't really get the option of stopping play in the real world.

Nick,

here's an agent-based paper that I hope might seem like less of a black box

[link here pdf it's by Gintis so it is almost certainly good NR]

I'm taking a bit of a risk of getting egg on my face because that's not actually the paper I had in mind and I've only skimmed it, but I think it basically says that if you set up little agents with easy to understand objectives and decision rules and let them potter about deciding whether to trade and at what price, we see a Walrasian-like outcome. [the half-remembered paper I had in mind but cannot find did not have the evolutionary element this paper has].

I don't think this completely escapes the black box problem, but I think if what the agents are doing is easy enough to understand and based on commonly understood economic concepts, we can really start to learn things about what phenomena emerge under what conditions.

[I think you agree with my supervisor's view I related in those comments?]

here is an agent-based post-doc, which sadly I don't think I'd get if I applied, but might suit somebody on this thread.

As long as the maths is run correctly, it does not matter that the model is a black box. Maths is just like a pan where you put oil and corn, turn the heat, and you get pop-corn. It's not important that you don't see the corn pop... What really matters is that you put corn in the pan, not rice or coffee beans...

What I'm trying to say is that it is all in the assumptions. Once they are laid, it's just a matter of turning the heat. You're right that papers go from introduction to conclusion, without offering a proper understanding of the mechanisms at play. But the mechanisms are not in the maths, they're all in the assumptions. A tiny variation in the defined behavior of the agent can radically change the conclusions.
An economic paper should have a 20-page introduction and a half-page conclusion, with all the maths in the appendix.

Take the example of rational expectations. Well, if the agent is rational, you can't fool him, right? (at least not twice) And all the conclusions of all the models based on this assumption are straightforward, it's not even worth doing the maths. But is it ok to consider the agent not rational? Not really, since he's not a complete fool either...
This means that rational expectations is a good idea, but you can't use it just like that. You need to model a near rational agent (which is technically more complicated), and that should lead you to more relevant conclusions.
The problem of the black box is not that you don't see inside it, it's that as long as you're looking into it, you don't notice what goes inside.
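[One simple way to cash out that "near rational agent" is an epsilon-greedy rule; this is an illustration, not necessarily what the commenter has in mind: the agent picks the action it believes is best most of the time, but errs with some small probability. NR]

```python
# A near-rational agent as an epsilon-greedy chooser; names and numbers invented.
import random

def near_rational_choice(perceived_payoffs, eps=0.1):
    """perceived_payoffs: dict mapping action -> payoff as the agent sees it."""
    if random.random() < eps:
        return random.choice(list(perceived_payoffs))          # occasional mistake
    return max(perceived_payoffs, key=perceived_payoffs.get)   # best response

print(near_rational_choice({"save": 1.0, "spend": 1.2, "hoard": 0.8}))
```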

Jeremy: "But I'd suggest that there are some cases--hopefully rare--in which it's impossible to open the black box, to gain a macro-level understanding, to write down a "higher level" description, to "see the forest for the trees"."

"Seeing the forest for the trees" is a good metaphor for what concerns me.

Take a complicated math model, for example. It's not enough just to wade through it, equation by equation. We also need to stand back and try to get the intuition for the "big picture", and why it gets the results it does.

Luis Enrique: "[I think you agree with my supervisor's view I related in those comments?]"

Yes and no. Let's say I sympathise with his view, but don't fully agree with it.

It's all in the assumptions in ABM, too, and robustness testing is rarely carried out well in ABM.

There are basically two kinds of macroeconomic ABM. One uses highly stylized assumptions to try to minimally explain a host of phenomena - it is helpfully possible to tie in, e.g., distributional patterns of business size, lifespan, etc. The other uses highly detailed assumptions built off micro data and still extracts persuasive aggregate behavior.

The methodological trade-offs of tractability vs. adherence to reality, or Occam's Razor acting to stylize input assumptions or asserted aggregate behavior, plague ABM just as they do completely-solved closed-form macro. ABM just moves one step away from universality in the parameter space in exchange for being able to pursue non-completely-solvable assumptions. That is valuable, but the charges of parameter fragility or stylized assumptions that ABM proponents level at mainstream macro often apply equally to their own models.

BT London: " That's why it is important to know that loans create deposits, rather than the other way around as described in the textbooks."

That's why it's important to read a first year textbook, so that you learn that what you said there isn't true. Agree or disagree with the money multiplier model, the first year textbook version of that model does say that an increase in bank loans creates an increase in bank deposits.

(And that is the end of the discussion of that topic on this thread.)

Somebody in the "Read a First year Text" comments posted a link to a very good (redundancy) Peter Howitt paper describing the agent based modelling he did with Robert Clower, and why he did it. Now I can't find it. Help. Thanks.

So, so, sorry! Just a last [maybe-not-related but just-important] link, about the role of computers in the [hard and not-so-hard] sciences: http://www.math.pitt.edu/~thales/papers/turing.pdf

Is there some kind of digital divide between economics and the rest of sciences?

david: "That is valuable, but claims by ABM proponents that parameter fragility or stylized assumptions are fatal for mainstream macro are often likewise the case in their own models."

I think the point about "fragility" (Peter Howitt calls it "brittleness") is important. And I think that's why we also need some sort of "intuitive" or "let's see the forest too" understanding of *any* model.

"Fragility" means that a tiny change in the assumptions causes a massive changes in the conclusions of a model. I don't like fragile models. (But maybe that fragility is telling us something important.)

it wasn't this Howitt paper I linked to, was it? It does discuss work with Clower, but I'm not sure what you mean by (redundancy), so maybe it ain't.

The closest analytical scientific field to agent-based modelling is statistical mechanics, which, like economics, deals with systems with millions of interacting sub-units. The interesting thing about statistical mechanics is that you can get 'emergent' behaviour out of the models, behaviour that is not obvious from the modelling inputs. Sometimes the modelling output can give simple output relationships that can then be derived analytically, so you don't actually need the model after all.

One of the most interesting agent based models is that of Ian Wright, which from a very simple set of rules builds a model that gives outputs that match real economies well:

[link here NR]

My own work uses very simply specified models, but appears to give good explanations for income and wealth distributions and company size distributions, as well as explanations of boom/bust capital cycles. Somewhat to my surprise, a very simple formula emerged from the model that explains Kaldor's fact of the constancy of the returns to capital and labour. Although the formula came from analysing the model, it proved trivial to derive the formula in half a dozen lines from basic economic identities - the formula is not dependent on the model. Interestingly, a variant of the formula suggests a direct link from increasing consumer debt to increasing inequality. More information at:

[link here pdf NR]

or google 'why money trickles up'.

Luis Enrique; Yes, that's the one! Thanks.

(The "redundancy" thing was a little joke. Peter Howitt is a very good economist (he taught me advanced macro, ages ago). So to say a paper is very good is redundant, if you have already said it's by Peter Howitt, because you are repeating yourself.)

Geoff: "Somewhat to my surprise, a very simple formula emerged from the model that explains Kaldor's fact of the constancy of the returns to capital and labour."

That's a great story, with a very happy ending. Because in that case we *do* get the intuitive understanding as well. In that case, agent-based models and understanding are complements, not substitutes.

"agent-based models and understanding are complements -- yes.

We in the physical sciences have wrestled with this for a long time - centuries, even, because the issue isn't just with simulation, the issue is with experiment vs theory. You prepare some inputs, push the button, and out comes an output. Great. So now what?

Unfortunately, simulation is comparatively new, and so "best practices" for computational work are just starting to come together.

Right now, the level of quality for a lot of computational work - in all fields - is uneven. For instance, and probably prompting your question, in all disciplines one often sees a simulation paper where the author runs one simulation, says "see! When you put in A, you get out B!", plots some pretty pictures, and concludes.

These papers make me wish academics had a demerit system. They should, of course, never be published.

A better paper would go like this: Using well understood simulation software X, we wish to see what happens when we put A in. So we put a range of things like A in, over a suite of different simulations, and out comes a range of outcomes centred around B. Ok, interesting. But if we put A' in, we don't get B; we get B'. We have a hypothesis about this; we think that A causes B but A' doesn't, because of complex intermediate step C. So let's modify our simulation to not have C (by modifying the inputs, or the behaviour of the agents, or what have you). Oh, look: when we have A, but not C, we don't get B anymore! Thus we have evidence that A gives you B through C.
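[Schematically, that protocol might look like the following toy, invented purely for illustration: input A raises intermediate mechanism C, and C drives outcome B, so knocking out C should kill the A-to-B link. All names and numbers are placeholders. NR]

```python
# The A -> C -> B knockout protocol on a made-up toy model.
import random

def run(a, allow_c=True, seed=0):
    random.seed(seed)
    c = 2.0 * a if allow_c else 0.0      # intermediate step C
    b = 1.5 * c + random.gauss(0, 0.1)   # outcome B
    return b

seeds = range(10)                         # a suite of runs, not a single one
with_c    = [run(1.0, allow_c=True,  seed=s) for s in seeds]
without_c = [run(1.0, allow_c=False, seed=s) for s in seeds]
print(sum(with_c) / 10, sum(without_c) / 10)   # B appears only via C
```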

The upside to doing simulations is that they're controllable theoretical experiments, and that way you get to twiddle and tweak and use the changes to improve your understanding (given your underlying theory). They're ways to understand how the *theory* plays out in complicated situations, the same way physical experiments give you the ability to understand how the physical world plays out in complicated situations.

I'm involved with software carpentry ( http://software-carpentry.org ), which teaches basic computational skills to researchers (mainly graduate students) in many different disciplines; we do the same at different levels for Compute Canada. Email me if you think a workshop would be useful in your department.

Jonathan: best comment of the post! Makes a helluva lot of sense to me.

Interesting thread. Perhaps you could summarize some of the more interesting comments/discussions that come out of it?

As someone with a background in engineering, it seems totally crazy to me that any company worth its salt will tend to build a fairly detailed software simulation of anything it's going to build (e.g., buildings, wireless networks, circuits and so on), but we don't do this for the economy as a whole.

How is it that very important debates about Fed policy, fiscal policy, the ECB, and so on have basically no simulation analysis to see if the differing points make any sense? Aren't these topics worth a few million dollars of funding to build a reasonable size simulation?

Sure, the simulation won't be perfect, parameters are hard to estimate and so on. But having differing camps each put forward their own simulation advocating austerity or fiscal expansion or whatever would seem to be a big step forward as compared to the current arguments. As a previous poster mentioned, if you could at least say such and such a model produces such and such an effect that would seem much more useful than each economist saying that his or her oversimplified equilibrium model predicts such and such when it is clear we are nowhere near equilibrium at times of crisis.

It seems to me that there must be a good reason why such agent based simulations haven't taken off. Is it the difficulty in building the simulations, the fact that "simulations aren't real research", or something else?

Regarding the point of the article that simulations are black boxes, I think you have a reasonable point there. But I think a fundamental issue with the actual economy is that it is an inherently complicated collection of interacting pieces. If we can't build a simplified simulation and adapt existing models to understand that simulation, then what makes us think that our existing models actually apply to the even more complicated case of the real world?

Thanks,
-J

The worst papers are the ones showing that agents following rules of thumb can optimize production functions and reach equilibrium. The lack of self-awareness exhibited by the authors is stunning....

There is a large class of algorithms that converge optimization problems to equilibrium. In normal fields we care about efficient algorithms...

I once had to suffer through a long talk by a tenured professor using genetic algorithms in agent-based models. The paper upon which the talk was based showed that evolving agents solved for the optimal equilibrium. Why I suffered through this I do not know. The first genetic algorithm was invented in the 1950s, and it was shown back then that genetic algorithms solve optimization problems; since then lots of work has been done on convergence rates.

Okay okay, I listened to this talk waiting for the angle that made the work worthwhile... Nothing. She had the agents mating by taking their production functions and swapping digits of their consumption decisions, e.g. one agent did 49 and the other 53, so the offspring do 43 and 59... (granted, among some other things)

There was a long discussion about the convergence of this process... Not one citation of the real literature on this subject, where you can find very good bounds on the probability of converging after n iterations, etc. (this field was mature by the '80s), and without the cloak of economic gibberish...
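[For what it's worth, the digit-swapping "crossover" described above would look something like this; a reconstruction of the talk's example for two-digit decisions only, so treat the details as assumptions rather than a faithful account of the paper. NR]

```python
# Digit-swap crossover, reconstructed from the 49/53 -> 43/59 example above.
def digit_swap(parent_a: int, parent_b: int) -> tuple:
    a, b = str(parent_a), str(parent_b)
    child_a = int(a[0] + b[1])   # first digit of A, second digit of B
    child_b = int(b[0] + a[1])   # first digit of B, second digit of A
    return child_a, child_b

print(digit_swap(49, 53))  # (43, 59), as in the example above
```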

The A, A' thing is equally problematic with analytical and intuitive solutions.

Assume two companies.
If they choose quantities, the price ends up between the PC and monopoly price.
If they choose prices, we end up with the PC price.
If we introduce one second of search time, we end up with the monopoly price.
Etc. Etc.
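[Those three cases can be made concrete with the textbook formulas for linear demand P = a - Q and constant marginal cost c; standard duopoly results, with invented parameters, and the search-time case read as the Diamond-paradox result. NR]

```python
# Textbook duopoly prices under linear demand P = a - Q, marginal cost c.
a, c = 10.0, 2.0

p_competitive = c               # price competition: price driven to cost
p_cournot = (a + 2 * c) / 3     # quantity competition between two firms
p_monopoly = (a + c) / 2        # any positive search cost (Diamond paradox)

assert p_competitive < p_cournot < p_monopoly
print(p_competitive, p_cournot, p_monopoly)   # 2.0, 4.67, 6.0
```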

If simulations have to show that they are robust to alternative assumptions, why don't we demand the same thing from analytical and intuitive solutions?


I think that it’s a pretty safe bet that, eventually, building understanding and simulations at the agent/transaction level, along with modern computational power, will come up with the economic equivalent of Computational Fluid Dynamics in aerodynamics (computational fiscal dynamics?), which will itself undergo evolutionary development. J. Doyne Farmer provides a good example of this path with his INET grant to start with modeling the crash in the housing market: http://ineteconomics.org/video/30-ways-be-economist/doyne-farmer-macroeconomics-bottom but it’s a massive, long-term task.

It appears to me that one can draw some intellectual parallels between the current state of economics and the development of aerodynamics prior to 1900. Aristotle started it all around 350 BC, when he described a model for a continuum and posited that a body moving through that medium encounters resistance. Things have progressed somewhat, but until very recently, if one flew it was courtesy of calculations of lift and drag that depended largely on Lanchester’s 1907 analog of vortices off a wing twisting into a trunk at the wingtip, coupled with some very elegant math from Kutta, followed by Prandtl’s brilliant 1904 visualization of the boundary layer.

While the Euler equations for inviscid flow and the Navier-Stokes equations for viscous flow were well known by the middle of the 1800s, there were no known analytical solutions for these systems of nonlinear partial differential equations, and thus they were of no use to the early pioneers of flight. Thanks to Prandtl’s brilliant analogy of flows, it became possible to derive sufficiently accurate engineering solutions to give us the modern aircraft, albeit at the cost of ignoring some of the finer details of the physics.

Thanks to modern high-speed computers, we now have computational fluid dynamics and can obtain engineering solutions for practically any aircraft configuration, which can then be ‘built’ on a computer and flown in a simulator. Referring to the equations for momentum, continuity and energy, CFD can be defined as “…the art of replacing the…partial derivatives…in these equations with discretized algebraic forms which in turn are solved to provide numbers for the flow field values at discrete points in time and/or space” (quote from “A History of Aerodynamics” by Anderson, http://tinyurl.com/d6beby9 - an excellent book for anyone interested in the development of science and technology).

I don’t think that economics has yet had its “Prandtl moment” which, in aeronautics, enabled physics and math and engineering to converge to develop useful solutions. What is needed is the development of sufficiently powerful analogs, such as Prandtl’s boundary layer, that will encompass sufficiently accurate simulation and analysis of the flow field of transactions, to develop ever more refined ‘engineering’ solutions.

It’s going to take a lot more work and cooperation across disciplines to develop understanding of the flow field of transactions to maximize social benefit, rather than just extracting maximum individual profit from the flow. Modern aeronautics didn’t leap directly from the Wright Brothers to the 747, but followed an evolutionary path of theory, experimentation, simulation (wind tunnels) and practical experience. My own work in economics has been to look at the development of a very basic economic ‘flight simulator’ for environmental policy. Here’s my brief piece on economic simulation: http://oecdinsights.org/2012/06/27/going-with-the-flow-can-analog-simulations-make-economics-an-experimental-science/

JRHulls

QCD is another example. My limited understanding is that physicists can't solve the equations, so they simulate them. It's apparently good enough to sort out the Higgs from the giant mess that spews from the proton-proton collisions at CERN.

I think analogies to computational physics are misguided. Economic systems have several huge disadvantages versus physical systems, including:

1) It is difficult or impossible to discover the laws governing the behavior of individual pieces of the simulated universe.
2) It is difficult or impossible to estimate the parameters of those laws.

They have another, even larger disadvantage:

3) Those parameters evolve over time. For many of them, it is impossible to predict their evolution, because knowledge of where they will go co-implies that they've already gone there. For example, we can't know when or if we will discover workable nuclear fusion power until we actually discover it.

All of those pale in comparison with the biggest difference:

4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are - discovering how to predict the behavior of the system.

The last one means that you can't just specify the state of the world at t=0, specify rules for evolution of that state, and let go. Ever. It doesn't work. You have to include some ratex-y mechanism in there, which is something the engineers never have to do.

This isn't to say that you can't use toy models to study specific qualitative phenomena, but the idea that we'll do for the economy what we do for airfoils is absurd.

Alex: "4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are - discovering how to predict the behavior of the system."

Yep. That is indeed the biggest difference. (And I especially like your phrase "They are actively engaged in the same project you are --") RE "solves" this problem by assuming they have solved it (at least the reduced form, if not the structural equations). But rejecting RE doesn't mean that we can ignore this. There's a self-referential element in economies.

Physicists say that you can tell how good a theory of gravity is by seeing at what N it cannot solve the N-body problem in closed form. So Newton could get an analytic solution for N=2, the two body problem. Einstein could get an analytic solution for N=1, but more modern theories usually break down when N=0, the vacuum state.

In the real world there aren't any analytic solutions. The real world doesn't have theories. Things just happen, and things can observe things. The real world is a black box. There are only a handful of closed form solutions, and "equilibrium" is usually a matter of contingency, not necessity. (The biologists got over this. Economists haven't even faced it yet.) This made a lot of people uncomfortable, mainly scientists. Engineers never got serious about the whole solution thing in the first place.

It's actually high time economists moved into the 20th century. Monte Carlo simulation has been around for nearly 70 years now. Agent based modeling might get us a few less economic theories that strongly violate the basic principles of accounting.

----

There's a problem with the phrasing of:

"4) The components of an economic system are intelligent agents who are necessarily capable of predicting the outcome of the simulation. They are actively engaged in the same project you are - discovering how to predict the behavior of the system."

It's not that they are capable of predicting the outcome of the simulation. That would get you into the Halting Problem, and you could then prove that such entities could not exist. They do TRY to predict the outcome of the simulation, but they usually do a crummy job. (Why isn't there a Rational Expectations Fund?) When John von Neumann, a pioneer in computers and simulation, looked at the financial system, he wound up developing game theory. The much-vaunted fundamentals of the market mattered so much less than the market strategies. No one makes money on the fundamentals.

Kaleberg: it's worded correctly but could have been clearer. The economic system is the real economy. The simulation is a computer program containing a proposed model of the system.

(Aside: the halting problem isn't the issue. It's entirely possible to have a system that can make proofs about itself; it just depends on what kind of proofs you are looking for and the complexity of the system. The issue you're getting at is a more general information-theoretic constraint that a system cannot precisely forecast its own behavior; but a system CAN accurately forecast its own macroscopic behavior, which is what we care about.)


I was impressed by the Geoff Willis link, so I'm reposting it


http://www.econodynamics.org/sitebuildercontent/sitebuilderfiles/bullets.pdf

To whet people's appetites, this derivation of the Bowley ratio is from the paper:

the Bowley ratio β = total earnings / Y
the profit ratio ρ = total profits / Y

By definition, the Bowley ratio + the profit ratio = 1:
β + ρ = 1

The profit ratio = the profit rate / the total return rate:
ρ = r / Γ

Y = earnings + profits by definition, so earnings = Y - profits:
e = Y - π

e / Y = 1 - π/Y, so: Bowley = 1 - π/Y

But Consumption = Income (C = Y), so:
β = 1 - π/C

or:
β = 1 - (π/W)/(C/W)

so:
Bowley = 1 - profit rate / consumption rate

β = 1 - r / ω

Results:

• The consumption rate ω defines Γ, the ratio of total income to capital.
• r is smaller than ω, which gives a Bowley ratio between 0.5 and 1.0, matching real values.



Can we understand agent-based models? Start with Schelling's "Models of Segregation". The model was computed with coins on a piece of paper; it's generally thought to be understandable.
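[For readers who haven't seen it, a minimal sketch of a Schelling-style model (a toy version with invented parameters, not Schelling's exact 1971 setup): two types of agents on a grid, each moving to a random empty cell whenever too few neighbours are like them. NR]

```python
# A toy Schelling-style segregation model on a torus grid.
import random

SIZE, P_EMPTY, TOLERANCE = 20, 0.1, 0.3   # assumed parameters

def neighbours(grid, r, c):
    cells = [grid[(r + dr) % SIZE][(c + dc) % SIZE]
             for dr in (-1, 0, 1) for dc in (-1, 0, 1)
             if (dr, dc) != (0, 0)]
    return [x for x in cells if x is not None]

def unhappy(grid, r, c):
    ns = neighbours(grid, r, c)
    return bool(ns) and sum(n == grid[r][c] for n in ns) / len(ns) < TOLERANCE

cells = [random.choice([0, 1]) if random.random() > P_EMPTY else None
         for _ in range(SIZE * SIZE)]
grid = [cells[i * SIZE:(i + 1) * SIZE] for i in range(SIZE)]

for sweep in range(50):                    # a few sweeps suffice to see clustering
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE)
               if grid[r][c] is None]
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE)
              if grid[r][c] is not None and unhappy(grid, r, c)]
    for (r, c) in movers:
        if not empties:
            break
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))

shares = []
for r in range(SIZE):
    for c in range(SIZE):
        if grid[r][c] is not None:
            ns = neighbours(grid, r, c)
            if ns:
                shares.append(sum(n == grid[r][c] for n in ns) / len(ns))
print("average share of like neighbours:", sum(shares) / len(shares))
```

Even with agents content to be a local minority (tolerance 30%), the like-neighbour share typically climbs well above the roughly 50% you'd get from random mixing, which is the punchline of the model.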

Agent-based models are tools for abduction, not deduction. Complaining that they don't lead to theorems or astronomical-type predictions is complaining that your spice rack is not a good can opener.

Following from that, the phrase "black box agent-based model" is oxymoronic. If you don't know and find plausible (for reasons of behavioural psychology, microeconomics, etc., external to the model) the rules that the agents and their interactions are following, you're not doing agent-based modelling. Instead, you're performing a rite in a cargo cult.

In ABM, you *always* know exactly what settings are different between the model runs that produce recessions and the model runs that don't. But that is where the science starts, not where it finishes: it tells you where to start looking for the answer to the "why?" question.

Alex Godofsky's point 3 badly overstates its case. In reality, *no* invention or event of any realistically possible kind (barring global thermonuclear war) can make a difference of more than a few percentage points in any direction over the next few years. This is easily handled within the ensemble of model runs.

Godofsky's particular example of workable fusion power, if demonstrated tomorrow, would have no noticeable effect on the macroeconomy for at least 20 years, while pilot plants are built, safety regulations and permitting procedures developed, IP battles fought, challenges from the coal industry fought off, investors found, staff trained, factories built, etc. After 20 years optimistically 0.001% of global electricity would come from fusion reactors. This particular possibility can be ignored in any medium-term model of the macroeconomy.

Likewise, the evidence from behavioural econ (as I understand it) is that Godofsky's point 4 overstates the difficulty too. Economic agents are interested in their own situations, not the global outcome; they use simple decision rules and are predictably fallible and variable. Points 1 and 2 are moot for the same reasons.

Finally, the idea that agent-based modelling is unusable because there is no mathematical infrastructure for it is simply pathetic. Do what physicists do whenever they lack the mathematical theory for a promising new line of attack: invent the mathematics!

I like the parallel with Searle's Chinese Room (really one of the most thoughtful thought experiments I know), but I'm not sure your argument applies only to ABM. The Chinese Room thought experiment shows that we cannot reduce semantics to a computation (a syntactic system). Now, by the Church-Turing thesis, any computation is a deduction, and of course a classic analytical, equation-based model is a deduction. I think that there is no qualitative difference: the model itself, its syntax, has no meaning. We ascribe a meaning to a mathematical model 1) because the logical rules are transparent and 2) because we are able to define relations of similarity between the model and the target system. The same is true for an ABM, with the difference that things are more complicated: there are more variables, and the logical operations are done by the machine. My point is that the difference is epistemic, but is not related to the "ontology" of an ABM.

C.H.: You are the first commenter to pick up on the Chinese Room. Finally. I am so pleased!

There seems to be a lot of confusion on these posts about simulations vs. math models. Reiss’ paper ‘A Plea for (Good) Simulations: Nudging Economics Towards a More Experimental Science’ [link pdf NR] provides a useful access point and general discussion. Some of the earlier commenters have put forward the position that if you cannot predict future changes, the agent-based model is somehow not useful, or that the agents are building their own simulations, which invalidates the simulation. (Old science-fiction/Matrix-like plot here... when we run our simulations, are we really running a simulation, or are we ourselves just modeled agents running simulations in someone else’s simulation, trapped in a form of Chinese Room, incapable of knowing the reality of where or what we really are?) These objections may be somewhat true of a math, or ‘pen and paper’, model, as Reiss puts it, which is why he makes the point that a simulation need be far less constrained to produce useable results, and that many economic models are overconstrained just to make the math work. Reiss’ paper does an excellent job of defining the advantages of simulations as well as some of the limitations.

Analog simulations are a special case, and here I would refer to Einstein and Infeld’s 1938 classic,’The Evolution of Physics: From Early Concept to Relativity and Quanta’. It is firmly grounded in observable real world examples and analogy, as is wonderfully demonstrated by the chapter on quanta, which takes us from flopping rubber tubes to violin strings to probability waves. “It is easy to find a superficial analogy which really expresses nothing. But to discover some essential common features, hidden beneath a surface of external differences, to form, on this basis, a new successful theory, is important creative work. The development of the so-called wave mechanics, begun by de Broglie and Schrodinger, less than fifteen years ago, is a typical example of the achievement of a successful theory by means of deep and fortunate analogy”.

Which brings me back to Phillips and his Moniac. Here is a paper on the use of Phillips’ hydromechanical simulator to teach system dynamics. [link pdf NR] (These folks are always using bathtubs and hot and cold taps to explain things to people, as mentioned in the abstract.) The description of the integrative function of the machine and its ability to literally ‘turn off’ portions of the economy (including many omitted in DSGE models), and literally see the result, is well described and illustrated in the paper. The paper also contains links to a demonstration of the Phillips machine by Dr. Allan McRobie, which I have referred to previously. The use of fluid dynamics analogs merely adds more capabilities to Phillips’ basic concepts, such as flow rates, density and shear forces, and the ability to continuously exchange potential and kinetic energy, all useful analogs in an economic model.

If, as appears to be the case with our gracious blog host, you are not one of those enamored with digital computers and mathematics for their own sakes, analog simulators such as Phillips’ can be a wonderfully visible relief. I’d be very cautious in placing limits on our rapidly increasing ability to simulate, as opposed to model, very complex structures. When the CFD people first started to get really serious, only a few decades ago, there were lots of jokes about how many hours on a Cray it took to model the formation of a single vortex, let alone an airfoil; yet now, with lattice methods, we can model an entire airliner directly from Boltzmann’s gas laws. [link pdf NR] The developmental methodology is interesting here. Aerodynamicists have constructed an actual (analog) wind tunnel model based on a very typical jetliner of well-understood real-world configuration, and then made available a computer model of the aircraft which is used as the basis for evaluating CFD models and making modifications. These changes can then be validated in the wind tunnel before application to actual aircraft. (the fairing to control separation example in the referenced paper)

A pure math model encompassing even the range of the Phillips machine would be constrained beyond the agent/market limitations of DSGE models, so I’m still betting on Farmer’s approach leading to useful new insights, as it closely follows the aerodynamic model: agent behavior (molecules/housebuyers) interacting with a structure (aircraft or transaction), all continually refined and checked against, hopefully, an agreed-upon ‘wind tunnel’ model of the economy, then real-world cases.

Not everyone agrees that the Chinese Room experiment actually shows that you cannot reduce semantics to syntax...

I mean, that may or may not be true. It's just that it's not at all clear that Searle's experiment so conclusively demonstrates what some people think it does.

Greg:

You can quibble about the specific fusion power example if you want, but it's easily observable that the world today looks different from the worlds of 10, 20, 30, etc. years ago in ways that couldn't have been reasonably anticipated, and we should expect this to continue to be true. And you don't even know for sure what you said about fusion power; maybe it's possible to make something that could power a home on tapwater for $100, if only we knew how. We don't know all the rules of the game, and it's been a multi-century project to discover the ones we do know.

Regarding point four, the entire point of all this is to end up informing public policy. If we came up with a model like this that actually worked it would cause radical changes in public policy. A successful model cannot exist in a world like the one we live in because the very fact of the model's existence changes the world dramatically.

And point 4 holds very well in more limited cases too - look at how the stock market reacts dramatically to Fed announcements, for instance. If the Fed came up with a model like this and made it public, that would cause asset prices to behave differently from before (I believe I'm basically restating the Lucas critique here).

I rather like that Mehrling paper on Fischer Black. Economics looks at value as certain past costs, and finance as uncertain future flows; however, the only basis for estimating those flows is current and past experience, so while ex post they may be future flows, ex ante they are projections from past ones. It brings to mind that we do not understand expectations, or really anything about the future; they are just terms to hide our ignorance. We are in our own Chinese Room.

Mandos:

Well, that's what the Chinese Room shows according to Searle. Of course, not everyone agrees, and there is a huge literature about it (see for instance the entry in the Stanford Encyclopedia of Philosophy:

[link here NR]

Still, I must say that despite all the counterexamples and attacks made on Searle's argument, I think that it holds up well. Anyway, my point is independent of the validity of the Chinese Room argument. In my opinion, there isn't a fundamental difference between ABM and mathematical models. There is an interesting paper by Julian Reiss and Roman Frigg that provides a convincing defense of this claim:

[link here pdf NR]

Hey Nick, I'm kind of showing my hand, but what I've been pursuing is process algebras; there is a form of equivalence known as bisimulation, and there is the ability to prove stuff about systems. They've transferred it over into other parts of math too; you need to change one of the axioms of standard set theory. I don't have the papers on me because I'm still in the bush. It is going by different names as well: coinduction as a proof technique. [link here NR]

Edeast: I found your comment totally incomprehensible. Which probably says more about my math than it says about your comment ;-)

Ya, the godfathers are Robin Milner and Tony Hoare. However, one of Milner's students, Luca Cardelli, has created some calculi and done the semantics for object-oriented programming. Bisimulation translated into set theory is known as non-well-founded set theory. When translated to category theory it's known as a coalgebra. Milner's pi calculus has encoded the lambda calculus, which is the traditional calculus for describing computational problems. The process algebras were created to deal with parallelism and many-core computing, where new, not totally deterministic problems arise. However, since the field was established, these formalisms have been used in other disciplines: Luca Cardelli has been working on using them in biology (he uses process algebras to encode ODEs), and cryptography uses the spi calculus. http://www.lucacardelli.name

I'll try to explain. They are used to model non-deterministic sequential processes. From the book I'm reading: if two infinite objects, or the black boxes in your example, exhibit the same behavior, they are bisimilar, and proving that they are equal is known as a proof by coinduction.

http://en.wikipedia.org/wiki/Non-well-founded_set_theory

Here is a video of the ambient calculus, one of Luca Cardelli's: http://m.youtube.com/watch?v=j6bZCSw-rVA
I think the software used in the video has a default example modelling taxis in NY. I thought it would be good for trying to model the shadow bank runs, but I have spent the last couple of years just trying to figure out the theoretical basis. Don't ask me, I don't have it figured out yet.
I was also hoping to use this stuff to prove your theories that the medium of exchange is unique and different. Going to argue that the list of commodities forms a poset, ranked by how many markets they trade in, i.e. liquidity. So money would be the least upper bound, or the supremum, of the poset. Argue that it forms a chain, and indeed a domain. Anyway, that's where my comment on non-tradeable zoblooms being semantic bottom, or nonsense, came from a couple of months ago. So the lub of each domain has an undo effect on the aggregate, but then I got stuck; that is as far as I've gotten. Coming from a comp sci approach.
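[A toy rendering of that ranking idea (entirely hypothetical names and data, just to make the poset concrete): order commodities by the set of markets they trade in, and the asset that trades everywhere sits at the top. NR]

```python
# Commodities partially ordered by the markets they trade in; data invented.
markets_traded_in = {
    "money":    {"m1", "m2", "m3", "m4"},   # trades in every market
    "bonds":    {"m1", "m2"},
    "houses":   {"m1"},
    "zoblooms": set(),                      # non-tradeable: the 'bottom'
}

def at_least_as_liquid(x, y):
    """x dominates y if x trades in every market y trades in."""
    return markets_traded_in[y] <= markets_traded_in[x]

top = [x for x in markets_traded_in
       if all(at_least_as_liquid(x, y) for y in markets_traded_in)]
print(top)   # ['money'], the upper bound of this little poset
```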

Anyway my writing is unclear, partly to obfuscate my own confusion. I'll let you know if I figure anything out.

Edeast: well, I don't understand it (I thought "poset" was a typo, until I saw you repeat it), but you might be onto something. Sounds vaguely like Perry Mehrling on money?? Good luck with it!

You should be able to use partially ordered sets with the math you know. It's not necessary to use these other weird process calculi, in case you want to take a crack at it. I was following it into domain theory, which might not be the best place, and where I was getting hung up.

At first glance, I found the title of this post surprising. Isn't economics mainly about agent based models? (Not that they are all that easy to understand. ;))

In these models a typical agent is a rational utility maximizer. That description is ambiguous and in many economic situations is not enough to predict the agent's behavior. Furthermore, such an agent is complex. True, humans are complex. However, in science we prefer parsimonious explanations. In the service of both parsimony and precision, we may assume agents whose behavior is specified by certain simple rules. While we may think of such rules as rules of thumb that actual humans follow, their main scientific value lies in precision and parsimony. (To say that such assumptions do not reflect how humans actually behave, well, the same may be said of homo economicus. :)) Now, if agent based models use assumptions that are different from standard assumptions, that may make them unfamiliar and less easy to understand.

Computer simulations with agent based models are thought experiments. We use computers because they are thought experiments that humans cannot carry out, or cannot carry out easily. As thought experiments, they are the domain of the theoretician. They tell us what the consequences of our assumptions are. Some of these consequences may be surprising, as they do not obviously follow from those assumptions. In fact, the ability to generate such surprises is one of the values of such simulations. Simple assumptions can lead to complex, interesting behavior, which humans alone could not deduce beforehand. The fact that computer thought experiments may produce surprising results means that humans may have to treat them like regular experiments and come up with human-understandable explanations for the results.

Let me illustrate from my own experience. A while back I set myself a small project to evolve computer programs to play a game. I started with an established software package. The starting programs knew nothing, not even the rules of the game. I started the project running and went to the movies.

When I got back, the programs had evolved an unexpected strategy, which I called the Good Trick. To give you the flavor of the Good Trick, suppose that you have a game in which competing players must move about to try to reach a goal. The players, however, have poor eyesight. Here is a good trick: dig holes for the other players to fall into. True, some players fall into their own holes, but they are familiar with where they dug the holes, so they are less likely to do so than other players. What may make this strategy surprising is that it has nothing to do with goal seeking, which is the object of the game.

In a way, I had an explanation for the Good Trick in my actual project. It set traps for ignorant, unperceptive players (which is the population that I started out with). But the players did not have a plan to trap other players; they just evolved to set the traps. As it turns out, the Good Trick is fairly robust. I have seen it emerge with other software that I have written, with software that others have written, even when the players have been fairly sophisticated. Really good programs do not use the Good Trick, because their comparably good opponents will not fall for it. The Good Trick is emergent behavior which is a function not only of the game and the players who use it, but of their opponents.

Now, the human explanation for the Good Trick is not difficult to find. But it did not come through understanding the workings of the player programs or the evolution software. Hundreds, even thousands of programs use the Good Trick. The Good Trick serves a purpose, but one that no program has. It is as though an Invisible Hand -- sorry. ;) Computer simulations may produce explanations, since the results follow logically from the assumptions. But they may not produce human understandable explanations, nor are they meant to do so. What they do is to show us the consequences of our assumptions, and they require us to specify those assumptions unambiguously.

----

Secondary question: Is the Good Trick rational?

Well, the players are hardly what we would call rational. They are dumber than amoebas. Furthermore, the players do not have a plan for the Good Trick. In a way, they are lucky that other players fall for the trap. But the social environment in which they find themselves is one in which the other players will fall for it. In short, in the environment in which they find themselves, the Good Trick helps them to win the game. It is therefore hard to call it irrational. "Rational" is an ambiguous term. :)

BTW, Keynes argued that probabilities are only partially ordered. That means that they are not numbers. Even if they all lie between 0 and 1.

IMO, human preferences are only partially ordered, as well. Among other things, that means that not every pair of options is comparable, and observed choices need not be transitive. Which is what research indicates. :)
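[A tiny illustration (with invented choices) of preferences that no single utility number can represent: pairwise preferences that cycle. NR]

```python
# Cyclic pairwise preferences fail the transitivity check.
prefers = {("tea", "coffee"), ("coffee", "beer"), ("beer", "tea")}

def is_transitive(rel):
    return all((a, c) in rel
               for (a, b) in rel
               for (b2, c) in rel if b2 == b)

print(is_transitive(prefers))   # False: tea > coffee > beer > tea
```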

Small comment on Searle's Chinese Room argument:

Consider the brain of a human who understands Chinese. It contains areas that are central to language understanding and production. Do any of the neurons in that brain understand Chinese? Don't be ridiculous. Yet a system within that brain, or possibly the brain as a whole, understands Chinese. That understanding is distributed throughout the system.

In Searle's Chinese Room no component of the system understands Chinese. Nor is it capable of doing what the brain of a person who understands Chinese does. Still, the mistake is to say that, because no component of the system understands Chinese, the system as a whole does not understand Chinese. (It's the homunculus problem. Searle gives it a twist by positing a homunculus who does not understand Chinese instead of positing one who does. ;))

Min, Scott Aaronson's Waterloo lectures describe quantum mechanics as probability with the complex numbers. Closest I've ever come to understanding it.

Thanks, edeast. :)

I expect that you are familiar with Feynman's book, "QED". Physical probability is a different beast from Bayesian probability.

No, I wasn't familiar, thanks.

Ok, I don't think economists are missing anything. They've translated it over to game theory, and game semantics solved an outstanding problem in domain theory, so you guys are fine.

This might fit in with your recent post on the very short run, but I think Glynn Winskel's ( http://www.cl.cam.ac.uk/~gw104/ ) recent research program has the most potential. He's trying to generalize over games. His lecture notes for the 2012 course on concurrent games are good; at least the intro is good, and gives the motivation. They describe his work on games as event structures, but in a recent talk he mentions games as factorization systems, and because there are pictures, I like it. http://events.inf.ed.ac.uk/Milner2012/slides/Winskel/RobinMilner.pdf slide 12 and 22 till the end.

