
Comments


I think that it isn't the Simon Wren-Lewis version of rational expectations that bothers people, but rather many of the stronger forms and claims. He says, for instance:

"New Keynesian theories based on rational expectations are more compelling, and can include the fact that information is both costly and incomplete"

You could even stretch this to include Kahneman if you put a utility cost of time and effort on hard thinking.

This discussion often gets mixed up with criticisms of single representative agent models, though, of course it shouldn't.

It surprised me how many different statements of rational expectations are in common use by economists. Since they are nonequivalent, they can't all be right.

Hmm, if I read this correctly, then in the end maybe we really need some Chuck Norris who will have to punch people in several concrete steps in order for them to start taking signaling seriously.

Peter N: "This discussion often gets mixed up with criticisms of single representative agent models, though, of course it shouldn't."

Agreed. But you can see how the two get confused. The single agent knows what he is planning to do, and knows what he expects to happen. In a representative agent model, it is as if he knows what everyone is planning to do, and knows what everyone expects to happen!

On this topic, have you seen

Macroeconomic Analysis without the Rational Expectations Hypothesis

http://www.columbia.edu/~mw2230/AREcon.pdf

Not exactly light reading I'm afraid.

I don't understand why we want to be in a rational expectations world. That might be preferable during a great moderation but doesn't rational expectations lead to a higher sacrifice ratio? For example, isn't wage indexation the height of rational expectations?

JV: Yep. That's something I can't make up my mind about. Sometimes beliefs change quickly, and sometimes they don't. Obviously it will depend a lot on the central bank's communication strategy. If it says nothing, it will take a long time for beliefs to change.

Peter N: I hadn't read that paper. Yep, looks tough going. None of those expectations mechanisms look very "naive" though, and it's in the context of a model where the equilibrium is highly dependent on expectations.

Sam: because (normally) we don't want people to be making mistakes. Wage indexation is more of a substitute for rational expectations. If you can't figure out what will happen, you index.

As much as I'd like to agree, I think it's pretty clear that in steady state (even one bombarded by random shocks) there isn't much difference between rational and naive expectations. But that skirts the substance of the complaint against rational expectations. It's when the economy switches (has to switch?) between steady states (the Gold Standard world, the 2% inflation target world, the random walk world) that the difference between rational and naive expectations comes into play.

Also, reading the typical complaints, it often appears that the context of the criticism is different from the context of the mainstream discussions. When discussing rational expectations, mainstream macroeconomists usually talk about expectations of inflation. When complaining about rational expectations, the critics are often talking about expectations of... investment (of other firms, or aggregate), or maybe future output (or some kind of "ability to compute the equilibrium"). Animal spirits (not just the effect of inflation on interest rates). In other words, usually, implicitly or explicitly, it's some kind of "give us our accelerator model back!"

notsneaky:

Your first paragraph: But we DO agree! That's what I'M saying! "It's when the world changes, so the naive expectations that were rational in the old world aren't rational in the new world, that things get tricky."

Second paragraph: I didn't say anything about that, but we agree there also. I was thinking about adding a bit like this:

We also want to KISS, because normal people aren't super-smart. Should we make it simple for them to form rational expectations of inflation? (Inflation targeting does this.) Or simple to form rational expectations of the price level? (Price level targeting does this.) Or simple to form rational expectations of nominal income? (NGDP targeting does this.) Or something else? Which one matters more?

And when I say the IS curve is upward-sloping, I'm re-claiming the accelerator model!

Yes. I read the Wren-Lewis post, the Dillow post and then this post, and they sort of meshed in my mind and I sort of lost track of who I was responding to (my apologies). I do think that this - the tricky part - is worth emphasizing more.

A few issues-

1) Does rational expectations imply the efficient market hypothesis, and vice-versa? This is only arguably possible if you are using compatible definitions, and only some of them. This can muddy the waters considerably.

2) If you use a definition of rational expectations which involves a mean 0 error term, what is the distribution of the error term? If it is long tailed things can get interesting, particularly if the distribution is fractal and has approximately infinite variance (which is a reasonable assumption for actual market price time series).

3) In a multiagent model, is it necessary that an agent's expectations be always rational across both agents and time steps?

4) What happens if you use a definition involving information costs? Then, with multiple agents, you have to consider the distribution of these costs across agents and time steps. Can you end up with Posner's rational irrationality argument, for instance?

5) In a cost model can you have agents with both different information costs and different economic resources and can the two be correlated?

6) What does (and ought) your saying you believe in rational expectations say about your beliefs about the behavior of the economy, and in what circumstances?

7) What does rational expectations imply concerning risk? It would seem rational to minimize risk dis-utility. Consider, for instance diversification and varying risk tolerance.


notsneaky: Yep. But it's hard to know what to say about that tricky part. I think that the people who look at learning are doing useful work. But then sometimes, simple communication of the central bank's target can matter more than learning. For example, the Bank of Canada switched to inflation targeting partly because businesses said they couldn't figure out how to negotiate wage contracts because they couldn't figure out what inflation rate the BoC was aiming for, so could the Bank please tell them what it was trying to do?

And when the Bank did announce its new inflation targeting policy, inflation did come down very fast (faster even than the Bank was aiming for). But, on the other hand, it did seem to take a long time for nominal interest rates to come down and reflect the Bank's new target. So, in that case, the empirical evidence of the transition to rational expectations seems to be a bit mixed. It seemed to be partly very quick, and partly very slow.

Peter N: My guesses:

1. EMH implies RE, but not vice versa.

2. Depends on the probability distribution of the variable in question. Think about the rational expectation of the size of earthquakes tomorrow, for example. Mean near zero, but with a very long tail out one end.

3. Usually yes. It *might* not matter much in a near linear macro model.

4. Not familiar with Posner's rational irrationality argument. But if it's what I guess it is (the game-theoretic case for pre-commitment) it all depends on whether other agents know you are irrational. (It can be rational to let other agents see that you are not looking at the information.)

5. Yes. But the correlation could (in principle) go either way. See 4 above.

6. Not a lot. Because it depends on so many other things too.

7. Nothing. One is about the curvature of the utility function; the other is about beliefs. (But it is irrational to spend resources to get information which you can't act on, even if it does reduce risk.)

Nick, believe it or not I consider myself a conservative - which is the reason I hate the modern Republican party. You seem to be saying a number of things, but one of them is that adaptive expectations aren't so unreasonable, at least in many of the economic decisions we have to make about the future. Wren-Lewis made it sound like living by adaptive expectations means you're 'stupid.' Yet why should this be? While we can't predict the future, the idea that it will be quite similar to the present is not so unreasonable.

RE is just one of those ideas that intuitively don't make any sense to laymen like myself. Yet it's obvious that academic economists swear by it and can't imagine giving it up. I'd be very interested to know the reason for this. WL's reason seems to be that it's the best thing they have for economic models and that anything else would not work so well.

Nick, I just wrote a post for which your post was a major catalyst.

http://diaryofarepublicanhater.blogspot.com/w2013/11/simon-wren-lewis-makes-case-for.html

I think you put the problem well: it's hard figuring out whether new ideas are true or not.

Mike Sax,

"believe it or not I consider myself a conservative"

Clicking your name and looking at your blog these days, I'm not surprised. ;)

Mike: my guess is that Simon Wren-Lewis would not disagree with what I have said here.

This is the formula for Adaptive Expectations:

Xe(t)-Xe(t-1) = B[X(t-1)-Xe(t-1)]

Intuitively, if X came in higher than you expected last period, you adjust your expectation up by B times the difference.

There are many worlds in which adaptive expectations would be rational. BUT, the key insight of Rational Expectations was that, even in those worlds, the parameter B will not be a constant. B will depend on how X(t) actually varies in the world you live in. For example, if X(t) is a random walk, a rational person would have B=1. And if X(t) is white noise, a rational person will have B=0. And so on. That was an important insight.
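The point about B depending on the process can be checked with a quick simulation. This is a sketch, not anything canonical: the grid of candidate B values, the series lengths, and the random seed are all arbitrary choices of mine.

```python
import random

def adaptive_mse(series, b):
    """Mean squared one-step forecast error of adaptive expectations:
    Xe(t) = Xe(t-1) + b * (X(t-1) - Xe(t-1))."""
    expectation = series[0]
    total = 0.0
    for x in series[1:]:
        total += (x - expectation) ** 2
        expectation += b * (x - expectation)
    return total / (len(series) - 1)

def best_b(series):
    """Grid search over b in {0, 0.05, ..., 1} for the lowest forecast error."""
    return min((i / 20 for i in range(21)), key=lambda b: adaptive_mse(series, b))

random.seed(0)
shocks = [random.gauss(0, 1) for _ in range(20000)]

# Random walk: X(t) = X(t-1) + shock. The best b comes out near 1.
walk = [0.0]
for e in shocks:
    walk.append(walk[-1] + e)

# White noise around a constant: X(t) = 10 + shock. The best b comes out near 0.
noise = [10 + e for e in shocks]

print("best b for a random walk:", best_b(walk))
print("best b for white noise:  ", best_b(noise))
```

The same adjustment rule is rational in both worlds; only the parameter changes, which is exactly the insight.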

I'm (just) old enough to remember Adaptive Expectations in the days before Rational Expectations. And we really did assume people were totally stupid; we assumed (without really realising this) that people could be surprised on the upside again and again and again, while never adjusting their rule of thumb to stop making those obvious repeated mistakes. Some economists do go a bit overboard with RE, in ignoring the problems people face in figuring out the world has changed, especially if the world is complicated. But we were way worse in the olden days.

"Red sky at night, shepherd's delight" is a really naive way of forecasting the weather. But it will work quite well, and be very rational, if that's all the information you've got. IF the prevailing winds are from the West. But it would be really stupid for an economist to assume people use the same way of forecasting the weather if they live in a world where the prevailing winds are from the East.

There is an aggregation fallacy going on here. Say we don't know which algorithms people use, because the people themselves don't know. There are a variety of different algorithms used by different people.

Nevertheless, those who are right in one period get more wealth and therefore have a bigger say in the market expectation (which is an average of individual expectations, weighted by wealth). That will tend to create a bias so that whatever algorithms worked in the past will be applied with greater force in the future, as those who apply those algorithms will have more wealth and therefore a greater weight on the aggregate expectations.

This can be well modeled by adaptive expectations in a way that it cannot by rational expectations, which crucially relies on everyone having the same algorithm, and the correct one at that. Rational expectations is not robust to heterogeneity in the way that adaptive expectations is -- call it stupidity of the crowds.

rsj: I was with you up until the word "create" in this sentence:

"That will tend to create a bias so that whatever algorithms worked in the past will be applied with greater force in the future, as those who apply those algorithms will have more wealth and therefore a greater weight on the aggregate expectations."

Shouldn't you replace "create" with "reduce"?

You seem to be talking about Evolutionary Stable Strategies, where those whose algorithms work a little better become wealthier, and a bigger part of the wealth-weighted average (and are also copied by those with less successful algorithms).

Nick,

No, assume that there is a 50% chance of rain each period (which no one knows for certain). Half the population thinks that there is a 40% chance of rain, and half thinks that there is a 60% chance of rain, so in aggregate we have rational expectations. Here, we are weighting opinions by the market wealth of the person holding the opinion. Each population makes their investments appropriately. Next period, it rains, so those who are biased towards rain become wealthier. Now, more than half the population, weighted by wealth, is biased towards rain. Therefore even though the original aggregate population had a rational expectation, it now has a biased expectation. And this is a general phenomenon -- i.e. aggregate rational expectations are not stable under heterogeneity of beliefs.

In fact, the evolution of such a system can be modeled as the aggregate holding adaptive expectations.

On the other hand, adaptive expectations *are* stable under heterogeneity of beliefs. If half the population is likely to increase their probability of rain by 1% after it rains last period, and half the population is likely to increase their probability of rain by 2% next period should it rain, then the aggregate of these can be modeled as another adaptive expectation.

Therefore the economic modeler should use adaptive expectations if they are intending to model a group of people and are not relying on everyone having identical beliefs.

rsj: and if a fair coin just happens to come up heads several times in a row, people who don't know it's fair may think it's a bent coin that has a higher than 50% probability of heads.

Nick, I don't see the relevance.

I was making the point that ratex at the individual level requires everyone to have identical beliefs. If you relax that, then can you get differing beliefs (all of which are wrong, but in different ways) that somehow aggregate to rational expectations at the aggregate level? And the answer is yes, but this isn't stable -- you won't remain in such a state. Adaptive expectations, however, are stable. You can have every individual employ an adaptive strategy and the aggregate of different such strategies will be an adaptive strategy.

If you are going to model many people as just one person using a prediction strategy based on its plausibility for use by individuals, then that strategy should be stable under aggregation.

rsj: in your example, you assumed it rained (in a world with a 50% probability of rain), and that this caused the aggregate to shift more towards those who believed the probability of rain was 60%. Assume two periods, where it rains in one period and doesn't rain in the other (so the actual frequencies match the assumed probabilities), and you don't get your result.

In my example, the actual frequencies also don't match the assumed probabilities. I'm saying your example depends on that, rather than on heterogeneity of beliefs. And that this is a real problem with rational expectations, because it might take a very long run of data to get the observed frequencies to match the assumed probabilities. So learning rational expectations might take a long time. (The "Peso Problem", where there is a small probability of a big change, is an example of this.)

OK, let's run this for 2 periods. Whoever guesses right has their wealth increased by 50%, and whoever guesses wrong has their wealth decreased by 50% -- you can assume that they spend 50% of their income on planting a crop that doesn't yield a harvest if it rains, for example.

Two people start with $1. A believes it will always rain and B believes it will always be dry (to make the math simpler).

There are four possible outcomes:

R - R
A now has $2.25, B has $0.25, probability of rain next period = 90%

D - R and R - D
A now has $0.75, B has $0.75, probability of rain next period = 50%

D - D
B has $0.25 and A has $2.25, probability of rain next period = 10%

So the proportion of states after 2 runs in which the aggregate expectation is rational is 50%, and in 50% we are in an irrational expectation state.

But after 4 runs, we are in a rational expectation state only 6/16 = 37.5% of the time, and after 6 runs only 20/64 = 31% of the time; after 2n runs we are in a rational expectation state C(2n, n)/2^(2n) of the time (which tends to zero).

Outside of those times: if after 2n runs it has rained at least one more time than it was dry, then we will be biased towards rain; otherwise we will be biased towards it being dry.

So we will almost always be biased one way or the other, and will almost never be in a rational expectation state, even though throughout all runs ½ the people believe it will rain and ½ the people believe it will be dry. And this bias has memory, in the sense that if we are currently biased towards dry, then odds are that next period we will remain so biased.

Notice that *no one* in our economy is using either Bayesian reasoning or adaptive expectations, but the aggregate outcome sure looks like these, right? So the strategy at the individual level is irrelevant to the consideration of the one modeling aggregate behavior. For aggregate behavior, you might as well use adaptive expectations, irrespective of whether you think this assumes stupidity at the individual level. Either that, or assume everyone has identical beliefs.
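Here is a quick check of the arithmetic above, a sketch that hard-codes the example's numbers: a right guess multiplies wealth by 1.5, a wrong one by 0.5, with one all-rain believer (A) and one all-dry believer (B):

```python
from math import comb

def aggregate_rain_belief(sequence):
    """A (believes rain, p=100%) and B (believes dry, p=0%) each start
    with $1; right guesses multiply wealth by 1.5, wrong ones by 0.5.
    Returns the wealth-weighted aggregate probability of rain."""
    w_a = w_b = 1.0
    for outcome in sequence:
        if outcome == "R":
            w_a *= 1.5
            w_b *= 0.5
        else:
            w_a *= 0.5
            w_b *= 1.5
    return w_a / (w_a + w_b)

print(aggregate_rain_belief("RR"))  # 0.9  (the R - R case)
print(aggregate_rain_belief("RD"))  # 0.5  (mixed outcomes cancel)
print(aggregate_rain_belief("DD"))  # 0.1  (the D - D case)

def rational_fraction(n):
    """Probability that after 2n fair draws rain and dry have come up
    equally often, i.e. the aggregate is back at the rational 50%."""
    return comb(2 * n, n) / 2 ** (2 * n)

print([rational_fraction(n) for n in (1, 2, 3)])  # [0.5, 0.375, 0.3125]
```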

rsj: if people always bet half their wealth on the flip of a fair coin, you will always end up with one person owning all the wealth, in the limit (unless two people always make identical bets). That is true regardless of whether their beliefs are rational or irrational.

Information only has positive value if you change your actions conditional on that information and increase your expected utility/wealth/evolutionary success as a result. In your world, what would an agent who knew the true probability of rain do differently? Does information have positive value in your world?
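The coin-flip point is easy to illustrate numerically. A sketch only: the ×1.5/×0.5 payoff on a half-wealth bet, the 10,000 flips, and the 100 trials are illustrative choices of mine, and I track the log wealth ratio to avoid floating-point underflow.

```python
import math
import random

def final_wealth_share(n_flips, seed):
    """Two agents bet half their wealth on opposite sides of a fair coin
    each flip (win: x1.5, lose: x0.5). The log of the wealth ratio moves
    by +/- log(3) per flip, since 1.5/0.5 = 3; tracking it in logs avoids
    under/overflow. Returns the richer agent's share of total wealth."""
    rng = random.Random(seed)
    log_ratio = 0.0  # log(w_a / w_b)
    for _ in range(n_flips):
        log_ratio += math.log(3) if rng.random() < 0.5 else -math.log(3)
    # richer agent's share is r/(1+r) = 1/(1+exp(-|log ratio|))
    return 1.0 / (1.0 + math.exp(-abs(log_ratio)))

# The log wealth ratio is a driftless random walk with a fixed step size,
# so it wanders off towards +/- infinity: one agent ends up with
# (almost) everything, regardless of whose beliefs were "rational".
shares = [final_wealth_share(10000, seed) for seed in range(100)]
print(sum(s > 0.99 for s in shares), "of 100 runs end with one agent holding >99%")
```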

Nick, the same principle holds whether people incorporate information or not. I just made a simple example.

What matters is that you have a number of different strategies (e.g. beliefs). When one works out, those employing that strategy become overweight in the market, so that the aggregate expectation is biased by that particular strategy. But if the "true" strategy is some weighted combination of the strategies employed by each individual, then the market as a whole will not reflect the true strategy; it will (almost) always be overweight or underweight the true strategy.

I provided a numerical example of this over-weighting, not a numerical example of a "good" strategy, as the strategy itself is irrelevant to the aggregation bias.

rsj: your initial population has two strategies, both based on equally and symmetrically false beliefs, so which one eventually wins depends on the sheer luck of the draw. Maybe those who have slightly less false beliefs might tend to do better? And if we allow the full spectrum of beliefs in the initial population, might those whose beliefs just happen to be correct tend to do best of all?

Nick, no, the assumption here is that there are heterogeneous beliefs. This means that each individual cannot have rational expectations, and therefore everyone is wrong -- but the hope is that the truth somehow comes out of the aggregate.

Yes, if one set of beliefs strictly dominates another, so that regardless of what happens, one side is more right than another, then the latter belief will disappear. That's obvious. We ignore the situation of one person having the right belief coming to dominate everyone else, since that degenerates into the situation of uniform beliefs (as market weighting of everyone else will be zero).

Therefore after culling those strategies dominated by others, since everyone is still wrong, you are left with inconsistent beliefs in which one does not dominate over the other. That is the example I gave.

Then you can ask whether the process of market-weighting causes the truth of both beliefs to come out, and the answer is no, it merely biases one belief set over the other and vice-versa.

So it is very unlikely that the aggregate population displays anything like rational expectations unless there is perfect uniformity of beliefs.

What is also interesting is that even if everyone in the economy is only forward looking, the process of market weighting will make the aggregate expectation appear to be backward looking or Bayesian, even if there is no basis for this at the individual level. We don't need to ask whether people are backward looking, since the economy as a whole will be backward looking merely by shifting wealth from one set of strategies to another.

Nick is more correct here. Some technologies turn growth sharply negative, as sharply negative as hyperinflation does. There isn't any time for an evolutionary algorithm to work; we need deduction, not induction. In the future, we can use a set of risky technologies to cap the risks of other technologies. In a few decades, the odds of intractable hyperinflation (really WMDs or post-WWIII tyranny) go way up, maybe from 0.2%/yr now towards, say, 2%/yr by 2175. In a few decades, we have the power to cap the WMDs, albeit at the cost of at best temporarily increasing the tyranny/WWIII odds: Great Depressions every few years, or Soviet Communism. Transitioning is a single event; no time for learning after the fact. Where economics comes into play is that these technologies are standard industries right now: materials science, semiconductors, software expert systems, biotechnology... the CIA/FBI will not move to classify them, and will act inductively. What economics needs to do is quantify the information for the Agencies to act deductively. The scenario is transparent enough to me to be able to *game* inductively.
