
Comments


"As we reduce f below 1, so there are some rational agents, who adjust prices instantly, they will anticipate falling inflation, so the average expected real interest rate will rise even more, and inflation will fall even more quickly. In the limit, as f approaches 1, a 1% rise in the nominal interest rate will cause instant explosive deflation.

How that model behaves in the limit, as f approaches 0, is diametrically opposed to how the model behaves at the limit, when f=0."

First of all, there's a typo in the first paragraph I've reproduced here: you apparently meant to say "In the limit, as f approaches [0], a 1% rise in the nominal..."

Now, let's take an example motivated by the US. There are a finite, but large, number of agents, say 300 million. At f=0, we have full money neutrality, and changes in the nominal rate don't change the real rate. All good so far.

Now, make f = 1/300,000,000. We've made a single agent have a sluggish, adaptive response. You state that this means that "a 1% rise in the nominal interest rate will cause instant explosive deflation."

Do you really believe that is a property of the model? I can tell you that it isn't.

Perhaps the discreteness is the problem: if we allowed f < 1/300,000,000, could we find such an f? I can tell you that you can't.

Thus, on the face of it, your assertion is flat out false.

Now, you might say that there is some f > 0 (say f=1/300) at which it would be true that "a 1% rise in the nominal interest rate will cause instant explosive deflation." I'd be curious if you can somehow characterize such a (set of) critical point(s)?

However, if that critical point is strictly greater than zero, then what you said in the post is still flat out false.

Thanks Adam. Typo fixed.

I'm not following the rest of your argument. Maybe because it's *very* late here. So I think I had better go to bed, and try again in the morning!

To be clear, my claim above is that the response of inflation to a 1% rise in the nominal rate is uniformly bounded on the closed interval [0, f].

That is what makes your claim false (it is also different from what I said above, my bad).

correction:

To be clear, my claim above is that the response of inflation to a 1% rise in the nominal rate is uniformly bounded on the closed interval [0, 1/300,000,000].

That is what makes your claim false (it is also different from what I said above, my bad).

Nick,

My point is simply this: the response of inflation to a 1% rise in the nominal interest rate is uniformly bounded for f in the closed interval [0, 1/300,000,000]. This means that the function that maps the value of f to the response of inflation, keeping the rest of the model unchanged, is uniformly bounded on that interval.

This implies that the limit as f goes to zero (from above) of that function is also bounded, thus it is *not* true that "In the limit, as f approaches 0, a 1% rise in the nominal interest rate will cause instant explosive deflation."

I'm taking the quoted statement to mean the following:

For every M > 0 there exists some d>0 such that if [f < d] then [the response of inflation to a 1% rise in the nominal rate will be less than -M].

Thus, no matter how big M is, if you choose f small enough (but still strictly positive) then you get a deflation rate greater than M in response to the 1% rise in the nominal rate.
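
In symbols (my notation, writing R(f) for the response of inflation to a 1% rise in the nominal rate), the two competing claims are:

```latex
% The quoted claim, as I read it: the response diverges as f -> 0+
\forall M > 0 \;\; \exists \delta > 0 : \quad 0 < f < \delta \implies R(f) < -M
% My counter-claim: the response is uniformly bounded near f = 0
\exists M > 0 : \quad |R(f)| \le M \quad \text{for all } f \in [0,\ 1/300{,}000{,}000]
```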

Actually, if you work through the math you will find that the truth is exactly the opposite of your assertion.

The limit as f tends to 0 of the response of inflation to a 1% rise in the nominal rate is zero, thus the model is robust in your sense of the word and your assertion is, as I've stated, flat out false.

Sorry, yet another correction.

The limit as f tends to 0 of the response of the real rate to a 1% rise in the nominal rate is zero; the limit of the inflation response is 1%.

Anyway...

Actually your model isn't too far from the truth. One small shift in any system can quickly cause meltdown if all the subsequent decisions are poor. Case in point - the USA and even world economy today.

Adam's sequel supports my position. But I think Nick has some Howitt-reasoning in mind: agents cannot learn REE predictions of models, if the central bank is assumed to fix interest rates. Thus, Kocherlakota's model is globally unstable. I will answer that as soon as I can. However, note that Howitt's reasoning does not have much to do with your notion of "fragile in the limit".

Typo-warning: I wrote 'globally unstable'; I meant 'dynamically unstable'

Further, let me use the chance to improve upon myself: instead of "agents cannot learn REE predictions of models" read "agents cannot learn REE if the central bank is assumed to fix interest rates".

Sorry.

Hmm, but this cuts both ways.

We know that central banks cannot control the money supply because their job is, first and foremost, to ensure that banks remain liquid. This mission pre-dates and is much more crucial than any macro-economic management duty. It is why central banks exist.

I think the Fed grants about $50 billion of intraday credit every day to banks that are short of cash. If it were to refuse to do this and let the banks declare bankruptcy, then the entire banking system would collapse, and the CB would not live up to its core operational objective, which is to support a flexible money supply.

So in this sense, we can say that the monetarist model of having the CB control the quantity of money, rather than the price of money, is not robust at the limit -- because even a single refusal to create more money in support of the payment system would completely disrupt the economy. If the government controls the quantity of money, rather than setting costs for borrowing money, then there can't be a payment system to begin with.


Rather than look at the reasoning of the f>0 example, I want to go back to the first "perfect" model:

"The model predicts that if the central bank increases the nominal interest rate by 1%, the rate of inflation will instantly increase by 1% too"

Without knowing the workings of the model intimately, my sense is that this can only happen because of an instantaneous fall in prices. Looking at it another way, if the central bank increases nominal interest rates, the money supply effectively falls (indeed this may be its mechanism for making interest rates increase). If the money supply falls, and money is neutral, prices must instantly fall.

So the inflation happens, but from a lower price level.

If this is correct, then the "limit" behaviour in your imperfect model is exactly the same - the "explosive deflation" of an instant reduction in the price level. So maybe the model isn't so fragile after all.

The assumption being threatened and ultimately broken is that the Fisher relation and monetary super-neutrality in the long run apply to the same currency. There is no argument that this must be the case in the derivation of either. In the limit of inflation spiraling into the heavens, people use a different currency, or barter. And the Fisher relation, and monetary super-neutrality, then work just fine in both regimes. The real interest rate for the old currency is correctly predicted to fall without limit, because nobody wants it.

That aside, though, the point about model robustness makes sense. One might rigorously ask whether the equilibrium is stable under small perturbations, I guess.

I agree with those arguing that the way perfectly flexible prices interact with an increase in the target for the nominal rate above the natural rate is that the price level instantly falls to a sufficiently low level that its future rate of increase (back to some expected future level), when added to the natural rate, equals the target rate.
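
In Fisher-equation arithmetic (a sketch of the mechanism, assuming the future price level is anchored):

```latex
P_t \;\text{jumps down until}\quad
E_t\!\left[\frac{P^{e}_{\text{future}} - P_t}{P_t}\right]
\;=\; i^{\text{target}} - r^{\text{natural}}
```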

It seems to me that the problem with this approach is determining the expected future price level. If the expected future price level changes with the current price level, then we have the explosive (or implosive?) deflation.

If we assume the modern macro long-run government budget constraint and monetarist (not interest rate targeting) price level determination, then this episode might impact the future price level, but it should still be determinate, and so we can get the current one to fall relative to it, so that its expected rate of increase equals the difference between the target rate and the natural rate.

Of course, I might be wrong. I sure didn't understand AdamP's remarks about the math. Is my analysis consistent with this math? Or should we take AdamP's remark to mean that the increase in the target rate does immediately cause people to start raising prices faster so that the real market rate stays the same?

And if the analysis of the instant drop in the price level is right, why didn't AdamP say so, rather than tell us about mapping functions?

Could anyone think it is irrelevant whether prices start rising from their current value or else fall immediately and then start rising?

amv writes, "This mission [support for illiquid banks] pre-dates and is much more crucial than any macro-economic management duty. It is why central banks exist."

Yes to the first statement; emphatic "no!" to the second. Central banks exist mainly because they serve the fiscal interests of governments or those of certain private-banking industry insiders or both. Bagehot, for one, understood this. His argument for having established central banks like the Bank of England serve as lenders of last resort was not a positive argument for central banking. It was an argument about how to minimize the adverse effects of what Bagehot considered to be an inherently flawed arrangement compared to the "natural" system of free banking. Although it is true that the establishment of some later central banks was rationalized by appeals to the need for a LOLR, this rationalization was a perversion of Bagehot's argument.

George: that was rsj who wrote that, not amv. On this blog, the name is below the comment!

My apologies to both amv and rsj for the error.

Bill, I didn't bring up the mapping function, Nick did.

Thanks for the comments everyone. I'm going to be a bit slow responding. I was feeling very rough this morning. *Not* hungover, but it felt like it. (Gas fumes from working on the MX6, or eye strain from new glasses and sitting in front of the computer too long?) Anyway. I may take a bit of a break. Plus, I want to *try* to work on some math.

But, there were really 2 points to this post.

1. Introduce the concept of "fragile in the limit", and assert that models which have that property are bad models.

2. Assert that a particular model has that property.

Everybody (I think) has commented on 2 rather than 1.

What do you think of 1? Is it new? Useful? Right?

(I published a model once that was fragile in the limit. As a particular parameter varied, it went: classical, classical, classical,... then when that parameter took on a very particular value, it went extreme Keynesian, then classical, classical, etc. After thinking about it for a long time, I decided I didn't like my model after all. Which was a real shame. Because otherwise, I really liked the model.)

OK, I'll comment on point 1 - the existence of "fragility in the limit".

This is the same as the mathematical concept of continuity. A function a(x) is continuous at a point c if the limit of a(x) as x -> c exists, and is the same as a(c).

In this case, your condition of "fragility" means that the limit of inflation as f->0 is not the same as inflation AT f=0.
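
In symbols (my notation), the two conditions side by side:

```latex
\lim_{x \to c} a(x) = a(c) \qquad \text{(continuity at } c\text{)}
\lim_{f \to 0^{+}} \pi(f) \neq \pi(0) \qquad \text{(``fragile in the limit'')}
```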

Technically you are not talking about a function being continuous, but the behaviour of a model being "continuous" as one of its parameters tends towards zero. However, it's easier to write the definition in terms of the behaviour of the dependent variables in the model (e.g. inflation, prices).

We're getting a bit meta, because we are not talking about whether the individual functions in the model are continuous (e.g. is inflation a continuous function of interest rates); we're talking about the behaviour of model variables as you vary the basic parameters of the model. But these can still be represented as functions - e.g. holding nominal interest rates constant at 1%, how does inflation vary as a function of f, the fraction of prices that are sticky?

I think that this particular function _is_ continuous in that variable, but that doesn't invalidate the concept of fragility - it is still a meaningful question to ask.

To expand slightly on this: most classical economic equilibrium models are continuous in most variables - this makes them much easier to work with for the modeller, and for rational agents it is usually a natural assumption.

Non-rational or behavioural models (I'd include sticky-price or sticky-wage models in this) may well have discontinuous features. For instance, the very fact that wages are sticky means that a change in wages of 0 is special, and would not be expected to have similar effects to a change of 0.01%.

What you're talking about here is a shift from a rational model to an only-slightly-irrational model by a change of parameters; my intuition is that this change should be continuous and not fragile, but I could be wrong.

Nick, I will comment on 1. I have two comments:
1) It is not a new concept but you defined it backwards (well you went back and forth on this). You say:

Some models are "robust", and others are "fragile". If a large change in the model's assumptions causes only a small change in the model's predictions, a model is "robust". If a small change in the model's assumptions causes a large change in the model's predictions, a model is "fragile".

The usual definition of robustness in equilibrium analysis is the opposite: a model is robust if a SMALL change in parameters leads to a SMALL change in the outcome (in the sense of an arbitrarily small open set around the original outcome). I think the rest of your discussion fits this definition.

2) The idea of fragility in the limit as you state it can be confusing. You may confound the question of robustness with the question: does one model nest the other in the limit? You might think it naturally should, and thus if it does not, there is a robustness problem. But then again, maybe there is no way one model could nest the other, and this is not something you should expect.

Further to Youcef M's comment, you may find a discontinuity to be an inconvenient property in some particular model, but it is over-reaching to claim that it is universally bad. Some quite famous models have this property. For instance, the limiting value for the speed of a particle is c, but things that actually have speed c are qualitatively different from those that are only "close": they have the same speed in all inertial reference frames.

With regards to finding a pithier name for your "fragile in the limit" phrase, why not just go further and call it a 'frail model'? Fragile suggests "breakable but useful with care", while frail suggests "breaks with no warning during regular use", which sounds pretty close to what you're claiming about the Fisher equation. I have nothing to contribute to the actual debate though.

Youcef: Yes. Good! "Nesting" is related to "fragility in the limit". Obvious... now you mention it! I could have explained what I meant more easily if I had used that concept. Like this:

Any model can be seen as nested within another wider model. As the parameters in that wider model approach the parameter values assumed in the narrow model, do the predictions of that wider model approach the narrow model's predictions in the limit? If "no", then the narrow model is "fragile in the limit". (God that's an ugly term!)

Leigh: you understand me correctly. Good. I must be comprehensible.

Michael: "frail". Could be. "Eggshell"?

Nick, I'll comment on 1 as well. This property has played a rather important role in growth models, particularly in the 90s. Charles Jones (currently at Stanford) made his name in the field in no small part due to criticizing the endogenous growth models of Paul Romer (among others) for essentially this reason. Several key parameters of the model were assumed to take specific values; however, changing these parameters by an arbitrarily small amount led to qualitatively different implications (although still quite interesting). Jones referred to the breakdown of the model if parameters don't take very precise values as "knife-edge conditions," which I take to be essentially the problem of a model which is "frail in the limit."

As an example, one of the most critical of these conditions in growth modeling is population growth. If population growth is *exactly* zero, many mathematical tricks helpful in reproducing stable long-run growth rates become possible. The tricks don't work if population is allowed to change.
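
A sketch of that knife edge, in (as I recall) Jones's notation for the idea-production function; take this as an illustration rather than a faithful reproduction of the model:

```latex
\dot{A} = \delta\, L_A^{\lambda} A^{\phi}
\quad\Rightarrow\quad
g_A = \frac{\lambda\, n}{1 - \phi} \quad \text{for } \phi < 1
```

At phi = 1 exactly (the Romer case), constant long-run growth requires n = 0 exactly; nudge either parameter and the predictions change qualitatively.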

Norman: yes. "Knife edge" captures the idea I have in mind. I didn't know that the same issue had come up in endogenous growth theory. But presumably they came to the same judgement I did: any model that is so "knife edge" that a tiny change in one parameter/assumption leads to massively different predictions is not a good model.

Adam: In my post I don't have any math. But I do have a verbal argument to support my assertion. What is wrong with my verbal argument? Let me try to spell it out more clearly:

Assume initially f=1. We are in something like a New Keynesian model, except without RE. There's an IS equation, and a Calvo-style Phillips Curve.

1. If the bank raises i by 1%, everybody thinks that r has gone up by 1%, so the inflation rate starts slowly falling too.

Now change 1 firm so that it adjusts its price continuously and has RE. What will that firm do?

2. Since it knows that 1 is true, it knows that r has increased by more than 1%. And it will be reducing its inflation rate more quickly than the other firms. So inflation falls just slightly faster than in 1.

3. Now change a second firm. Since it knows 2 is true, it will cut inflation slightly more quickly, as will the first firm, so average inflation will fall slightly faster than in 2.

4. And so on.

As more and more firms are allowed to have rational expectations and to adjust prices continuously, the average expected real interest rate increases more and more in response to the 1% rise in the nominal rate, and inflation falls more and more quickly.

It's a sort of inductive "proof".

If you think it's wrong, there must be some magic number for f at which switching one more firm from adaptive to rational expectations and Calvo to continuous price adjustment causes inflation to fall less quickly.

I just don't see it.

amv: I wish I could remember Peter Howitt's papers more clearly. I thought I was arguing along vaguely similar lines, if not exactly the same.

Phil: if we knew for sure that f=0, then this wouldn't be a problem. But we know for sure that f won't be exactly 0. The only question is whether f=0 is a good enough approximation that the predictions will be roughly right.

I don't know about physics, but all economic models are false. But the good ones make predictions that are only a little bit false.

A sketch of a math model:

1. Phillips curve + IS: inflation = -B*(real interest rate) + average expected inflation.

Where the parameter B is decreasing in f, approaches infinity as f approaches 0, and approaches some finite number as f approaches 1. Maybe B=1/f would do fine?

"Average" expected inflation means the weighted average of the rational and the non-rational expectations:

2. Average expected inflation = f*(last period's inflation) + (1-f)*(rational expectation of inflation)

3. Fisher Identity. Real interest rate = nominal interest rate minus average expected inflation.
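
Gathering 1-3 in symbols (my notation for the same three equations, with pi^RE the rational expectation):

```latex
\pi_t = -B(f)\, r_t + \bar{\pi}^{e}_t \qquad (1)
% where B'(f) < 0, \; B \to \infty \text{ as } f \to 0, \; B(1) \text{ finite}
\bar{\pi}^{e}_t = f\, \pi_{t-1} + (1-f)\, \pi^{RE}_t \qquad (2)
r_t = i_t - \bar{\pi}^{e}_t \qquad (3)
```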

Let's see:

If f=1, the model says

4. inflation = -(nominal interest rate) + 2*(last period's inflation)

which is what we want. With Calvo prices and adaptive expectations, an increase in the nominal interest rate causes inflation to fall, and keep on falling.

If f=0, B=infinity, so equation 1 bolts down the real interest rate (to 0). And average expected inflation equals actual inflation, so the Fisher equation gives us

5. inflation = nominal interest rate

which is also what we want.

So, we just need to solve for inflation in the general case, where f is between 0 and 1, and take the limit for that solution as f approaches 0. I don't think it will approach equation 5.

Any halfway competent person ought to be able to solve it.

My guess is that the solution will look *something* like:

6(?) inflation = - (nominal interest rate) + (2/f)(last period's inflation)

And 6. does not approach 5. in the limit as f approaches 0.

Actually, my new guess is *something* like:

6'(?) inflation = - (1/f)(nominal interest rate) + (2/f)(last period's inflation)

Any vaguely competent mathematicians out there who can get it right?

Nick,

here is the link to Howitt's paper: http://www1.fee.uva.nl/cendef/upload/64/Howitt1992.pdf

I will post something on it. For the moment note that I chose a unit-of-account model to circumvent the Howitt-results.

Thanks amv. Yes, he is doing learning over time. So that is quite a bit different from what I'm doing here. But suppose, just suppose, that agents "learn" by suddenly seeing the true light of rational expectations. If they "learned" one at a time, that would be my model, as f approaches 0. Peter's agents learn by observing the past experience. Mine "learn" by divine inspiration.

Look forward to seeing your post. (But we can't comment on your blog? Or did I miss seeing how to comment?)

Nick,

to have an active comment section turned out to be extremely time-consuming; we are Ph.D. students, all busy finishing our theses (one has already finished), so we decided to keep the comment section deactivated ... at least until we are done. But if you send me an e-mail, I'd love to post it.

Actually, the more I think about Kocherlakota's statement, the more interesting it becomes from a purely theoretical point of view ... and the less relevant for actual monetary policy. So I'm willing to concede this. I'm still thinking about how to make sense of the dynamics of the unit-of-account version; after all, if there is an authority that determines inflation by re-fixing the relative price of the unit of account, central banking can be thought of as a sequence of currency reforms (monetary policy makes no sense anyway in that framework). I just like to make up my mind on outlandish theory.

Here is my try on Equation 6:

inflation = 1/(f^2+f-1)*{-(nominal interest rate)+(f^2+f)*(last period's inflation)}

If f=0, this becomes: inflation=nominal interest rate (Equation 5).
If f=1, this becomes: inflation=-nominal interest rate + 2*(last period's inflation) (Equation 4).

Unfortunately, if f=(sqrt(5)-1)/2 (i.e., about 0.618), it explodes.
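
A quick numerical check of that closed form (a sketch; like the derivation, it assumes the rational-expectations term equals actual current inflation, and B = 1/f):

```python
import math

# himaginary's closed form for Nick's Equations 1-3 with B = 1/f:
#   inflation = (-i + (f**2 + f) * pi_lag) / (f**2 + f - 1)
def inflation(i, pi_lag, f):
    return (-i + (f**2 + f) * pi_lag) / (f**2 + f - 1)

i, pi_lag = 0.01, 0.0                 # a 1% rise in the nominal rate
print(inflation(i, pi_lag, f=1.0))    # -0.01: Equation 4 with pi_lag = 0
print(inflation(i, pi_lag, f=1e-9))   # ~ +0.01: Equation 5 in the limit
print((math.sqrt(5) - 1) / 2)         # ~ 0.618, where the denominator is zero
```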

Nick, does this paragraph from Wicksell's Interest and Prices capture what you are saying?

"It is possible in this way to picture a steady, and more or less uniform, rise in all wages, rents, and prices (as expressed in money). But once the entrepreneurs begin to rely upon this process continuing—as soon, that is to say, as they start reckoning on a future rise in prices—the actual rise will become more and more rapid. In the extreme case in which the expected rise in prices is each time fully discounted, the annual rise in prices will be indefinitely great."

Laidler points out that in this paragraph Wicksell is flirting with rational expectations.

Perhaps I am missing the point here, but I don't see how you get infinite _prices_ in finite time. I do see how you can (in the ideal model) get infinite inflation.

Here is what my horse sense says:

Suppose we are in a pure credit economy -- a la Wicksell. The only asset is land, which is in fixed supply. Take your favorite production function -- e.g. farming -- so that

p*F(Land, Labor) = w*Labor + y*q*Land

q = price of land
p = price of good
y = yield, or return on land.

Even though the quantity of land is fixed, it is held by many small farms, so each individual farm can increase their land holdings by purchasing land from some other farm. The price of land will then be the opportunity cost of parting with it, which in nominal terms is set by arbitrage with the price of the corresponding income stream (y) as set by the central bank.
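
A toy version of the arbitrage step (my numbers, not rsj's): land paying a fixed nominal income stream is priced like a perpetuity, so its yield matches the CB rate.

```python
# Toy arbitrage pricing of land as a perpetuity: q = nominal income / y.
def land_price(nominal_income, cb_rate):
    return nominal_income / cb_rate

print(land_price(100.0, 0.05))   # 2000.0 at a 5% CB rate
print(land_price(100.0, 0.04))   # 2500.0: a rate cut raises q, so real wealth qL/p rises
```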

Therefore q goes up and y falls when the borrowing rate is cut.

When q goes up, everyone in the economy is "wealthier", in the sense that they have more real savings (qL/p) than they wish to hold, and simultaneously, as y/p is falling, they will want to consume more in the present.

So they sell off some of their land to buy consumption. As people can buy land on credit, the nominal price of land is pegged to the bond rate and this process will result in an increase in the price of food rather than a decrease in the price of land. The price of land is fixed by arbitrage.

The price of food rises until the real value of savings, qL/p_new, is where it is demanded.

Now if we make the adjustment process sluggish, this will create shortages of food, and actual inflation will be *lower* than the ideal (infinite) inflation (as prices are sluggish). You can think of land here as money, but its quantity remains fixed when the nominal rate is too low. In the limit as prices become less sluggish, inflation goes to infinity, but that is just because our (one time) price adjustment is approximating a step function. Prices do not go to infinity.

In a pure credit economy, when everyone is allowed to borrow from the central bank, the net supply of money is always zero. Even as the bank creates more deposits in response to borrowers' demands, it also creates more debt obligations, so unless you start to bring in distributional issues, the increase in deposits doesn't affect net savings and doesn't cause people to want to consume more. What causes them to want to consume more is their increased real wealth -- qL/p -- and this leads to a finite price adjustment.

If we stretch that adjustment out over time, so it takes, say 5 years, then we can use whatever heuristic algorithm we want. In some sense, this is better than assuming a step function for prices. Nevertheless, the step function of prices can be thought of as a limit of smooth functions, so the model is robust.

Does this make sense?

Actually, solution of Equation 6 can be generalized for B=B(f):

inflation = 1/{1-(1+B)(1-f)}*{-B*(nominal interest rate)+(1+B)*f*(last period's inflation)}

If B=1/f, this results in the solution of my previous comment.
If f=0, this results in Equation 5 (i.e., inflation=nominal interest rate) regardless of B.
If f=1, inflation is the weighted average of nominal interest rate and last period's inflation, where weights are -B(1):1+B(1).
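
A symbolic check of that generalized solution (a sketch in sympy; it closes the model, as above, by setting the rational-expectations term equal to actual current inflation):

```python
import sympy as sp

pi, i, pi_lag, f, B = sp.symbols('pi i pi_lag f B')
avg_exp = f * pi_lag + (1 - f) * pi              # Equation 2
eq = sp.Eq(pi, -B * (i - avg_exp) + avg_exp)     # Equations 1 and 3 combined
sol = sp.solve(eq, pi)[0]
claimed = (-B * i + (1 + B) * f * pi_lag) / (1 - (1 + B) * (1 - f))
print(sp.simplify(sol - claimed))                # 0: the closed form checks out
```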


But I think if you change Equation 1 to:
inflation = -B*(nominal interest rate) + average expected inflation,
things become much simpler.

In this case,
inflation = -(B/f)*(nominal interest rate) + (last period's inflation)

If f=0, nominal interest rate should be zero, regardless of B. (Friedman Rule!)
If f=1, nominal interest rate = (1/B)*{(last period's inflation) - (inflation)} (a Taylor Rule!... or something similar to it)
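
And the same check for this nominal-rate variant (again a sketch, with the same closure):

```python
import sympy as sp

pi, i, pi_lag, f, B = sp.symbols('pi i pi_lag f B')
# Modified Equation 1: inflation responds to the *nominal* rate.
eq = sp.Eq(pi, -B * i + f * pi_lag + (1 - f) * pi)
print(sp.solve(eq, pi)[0])   # (-B*i + f*pi_lag)/f, i.e. -(B/f)*i + pi_lag
```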

JP: that paragraph from Wicksell is perfect! That's exactly what I'm saying. "In the extreme case in which the expected rise in prices is each time fully discounted...." is so close to "In the limit, as we approach rational expectations..."

(Oh, God, a century later and we are still going over the same ground! Wicksell knew this.)

himaginary: Thanks for doing this!

I still can't quite get my head around reconciling the math with my intuition.

1. Your simpler assumption, where B is a fixed parameter independent of f, gives a result which exactly lines up with my intuition.

If we assume "inflation = -B.Nominal interest Rate + average expected inflation"

Then inflation is determined by "inflation = -B/f*(nominal interest rate)+(last period's inflation)"

Yes! That's what my intuition (and Wicksell's intuition) says should happen. In the limit, as f approaches 0, so that we approach everybody having RE, a 1% rise in the nominal interest rate will cause infinite, and accelerating, deflation. So the full RE model is "fragile in the limit". It's a knife-edge result. That's exactly what I wanted to prove.

2. But in the more general case, where B is a decreasing function of f (like B=1/f) your math is giving different answers to my intuition. A 1% rise in nominal interest rates causes explosive deflation when f approaches 0.618??? That's weird. I thought we would get the same *general* results in that case too. Is it an artefact of discrete time modelling??? Dunno.

Clearly, I need to think about this some more.

Thanks again for your help!

rsj: "Even though the quantity of land is fixed, it is held by many small farms, so each individual farm can increase their land holdings by purchasing land from some other farm. The price of land will then be the opportunity cost of parting with it, which in nominal terms is set by arbitrage with the price of the corresponding income stream (y) as set by the central bank."

That's where you are losing me. For a given total stock of land, and given preferences for saving, and technology F(), and labour supply, the yield on land (which is like a real interest rate in your model) should be determined at some equilibrium value y*. Wicksell would call that the "natural rate of interest". The equilibrium ratios (q/p)* and (w/p)* (natural levels for real land value and real wages) should also be determined. If the central bank sets a y different from y*, the system is not in equilibrium, and cannot get to equilibrium with all markets clearing.

rsj: put it another way. If the bank sets y below y*, the desired stock of land will always exceed the actual stock.

Nick, y is not a real rate, it's a nominal rate which must equal the CB rate by arbitrage.

If the bank lowers y, then real wealth increases -- q moves up enough so that y falls to the new level. This has nothing to do with preferences, it is just arbitrage.

As q goes up, Households do not have too little wealth, they have too much real wealth ( = qL/p), so they bid up the price of consumption goods, p.

In a flex-price model this happens instantly -- the moment the rate cut is announced -- but in a sticky price model this happens over time.

The flex price model is the limit of the sticky price model.

And in terms of the desired stock of land -- households don't care (and I would argue, they don't even *know*) how much land they hold. They only care about their real wealth, qL/p. But q is fixed by arbitrage, so households are really only free to adjust p. Of course, the mechanism for adjusting p is to increase their bids for consumption, based on the wealth consumption trade-off. I.e. think, in terms of value functions,

u'(C*) = V'(w) = V'(qL/p*).

As L is fixed, and q is fixed, you can think of this as a demand curve for C as a function of p. If C is too low, so that u'(C) > V'(qL/p), then households will want to consume more and they will increase their bids for consumption goods, causing p to increase until u'(C) = V'(qL/p). The flex-price versus sticky-price distinction is more about the speed of convergence than the direction of price changes -- in the flex price, the right p just appears, and in the sticky price, there will be excess demands for C as p slowly climbs up to the right value.
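
A toy demand curve from that condition, assuming particular functional forms (my choice, not rsj's: u(C) = ln C and V(w) = beta*ln w, so u'(C) = V'(qL/p) gives C = qL/(beta*p)):

```python
# Desired consumption as a function of the goods price p, with q pinned
# down by arbitrage and L in fixed supply.
def consumption_demand(p, q=2000.0, L=1.0, beta=0.95):
    return q * L / (beta * p)

for p in (1.0, 2.0, 4.0):
    print(p, consumption_demand(p))   # a higher p chokes off desired C
```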

On the other hand, it is _not true_ in general that the deterministic value function is a limit of the stochastic value functions. In the stochastic income case, households act as if they were credit constrained, whereas in the perfect foresight case they do not. The resulting value and consumption functions are bounded away from each other, and only approach each other in the limit as wealth -> infinity. The stochastic V does not converge to the perfect foresight V as variance -> 0.

"Your simpler assumptions, where B is a fixed parameter independent of f"

No, I didn't assume B is a fixed parameter independent of f. Instead, I replaced the *real* interest rate in your Equation 1 with the *nominal* interest rate. (I should have been more articulate about that.)

So, if you accept that replacement, the equation "inflation = -(B/f)*(nominal interest rate) + (last period's inflation)" holds even if B is dependent on f.

himaginary: Aha! That explains it. Because I was getting different results after I made my above comments. Thanks. I'm still working on (thinking about) the maths.

Nick: I like this "fragility" thing and it brings to mind something Diamond says about his original search paper in his Nobel lecture. He has bunches of sellers with identical constant cost of producing an identical good. With bunches of consumers and no search costs, we have price equal to cost. But add just a tiny search cost and the only equilibrium has each charging the monopoly price. (If all are charging anything less than that, each could gain by raising its price by just less than the search cost.) So does this mean that the no-search-cost model is fragile in your sense, since as we lower search costs towards zero, we don't approach P=MC more closely?
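
A toy iteration of the deviation logic (my numbers, and a deliberately crude dynamic: each round, every firm raises its price by just under the search cost until the monopoly price binds):

```python
# Diamond's unraveling: with search cost s > 0, a firm can charge up to
# (rivals' price + s) without losing customers, capped at the monopoly price.
def best_deviation(p_rivals, s, p_monopoly):
    return min(p_rivals + s, p_monopoly)

p = 1.00                                 # start at marginal cost
for _ in range(5):
    p = best_deviation(p, s=0.01, p_monopoly=1.50)
    print(round(p, 2))                   # 1.01, 1.02, ...: climbing toward 1.50
```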

kevin: neat. Yes, is the short answer.

Here's a longer answer: but Diamond's result is itself fragile, I think, because if we had a distribution of search costs across consumers, I think his result wouldn't hold. Hmmmm. This leads me to an awkward conclusion that a model might be fragile in the limit, but only in a very particular direction in parameter-space. It might be robust in all other dimensions.

A pin pointing up is fragile in the limit in all directions in parameter space. (Change *any* parameter, northerly or easterly, and you fall off). A knife edge pointing up is fragile except in one direction. But you might have a smooth rounded hill, with a tiny crack leading away from the summit in one direction. It is robust in all directions except one.

Kevin: Let me change my short answer. I misread your question.

What Diamond's model tells us is that the P=MC result of perfect competition is fragile in the limit! Just add a tiny search cost, and the prediction flips from P=MC to P a lot bigger than MC.

