My head is not clear on this, so this won't be a clear post.
1. Econometricians know of Ronald Fisher as a statistician, but he was also a biologist. I have not read Fisher, but this is how Jeremy Fox explained one of Fisher's ideas to me, in comments on one of my old posts, "Why the distribution of reality is skewed and so newspapers are biased towards bad news":
"Re: most genetic mutations being deleterious (i.e. fitness-reducing), yes, that is the case, both empirically and theoretically, although the precise probability that a mutation is beneficial depends on its effect size. Mutations of small effect are more likely to be beneficial. The theoretical argument, due to R. A. Fisher (The Genetical Theory of Natural Selection, 1930), is a geometric argument, and is very clever and elegant. Imagine that the phenotype z of an individual can be described as a point in a 2-D space (the argument generalizes to any number of dimensions). Now imagine that there's some optimal (fitness-maximizing) phenotype theta, which is some distance d away from the individual's phenotype. The smaller d is, the closer your phenotype is to the optimum, and so the higher your fitness. A mutation changes the phenotype, moving it some distance r in the 2-D phenotype space, in a random direction (a random direction because whether or not a mutation occurs has nothing to do with its fitness effect). You can think of r as the effect size of the mutation; a mutation of large effect is one producing a big phenotypic change (large r). A beneficial mutation is one that moves you from z to some point closer to the optimum phenotype, a deleterious mutation is one that moves you further away. It can easily be shown that, the smaller r is (i.e. the smaller the mutation's effect on phenotype, and thus on fitness), the more likely it is to be beneficial. In the limit of very small r, the probability that a mutation is beneficial approaches a maximum of 50% for a 2-D phenotype. Conversely, mutations of large effect are very unlikely to be beneficial, because even if they move the phenotype in the direction of the optimum, they can overshoot the optimum and so can leave the phenotype even further from the optimum than it was before. Small mutations eliminate the possibility of overshoots, which is why they're more likely to be beneficial. So yes, the majority of mutations will be deleterious, but that's especially so for mutations of large effect (and especially if your phenotype is already near-optimal). Rees Kassen at Ottawa is among those who've done some very clever experiments to confirm this and other more detailed predictions that can be derived from Fisher's model. It really works!
Here's a picture to go with the words..."
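To make the geometric argument concrete: in 2-D, a mutation of size r taken a distance d from the optimum is beneficial if and only if its random direction theta satisfies cos(theta) > r/2d, so the probability of a beneficial mutation is arccos(r/2d)/pi. Here's a minimal Monte Carlo sketch checking that (my own illustration; the distances and effect sizes are arbitrary):

```python
import numpy as np

def p_beneficial_mc(r, d, n=200_000, seed=0):
    """Monte Carlo: probability that a mutation of effect size r is
    beneficial, starting a distance d from the optimum in 2-D."""
    theta = np.random.default_rng(seed).uniform(0.0, 2.0 * np.pi, n)
    # squared distance to the optimum after moving r in direction theta
    new_d2 = d**2 - 2.0 * d * r * np.cos(theta) + r**2
    return np.mean(new_d2 < d**2)

d = 1.0  # arbitrary starting distance from the optimum
for r in [0.01, 0.1, 0.5, 1.0, 2.0]:
    exact = np.arccos(min(r / (2 * d), 1.0)) / np.pi  # Fisher's 2-D result
    print(f"r={r:4.2f}  simulated={p_beneficial_mc(r, d):.3f}  exact={exact:.3f}")
# Small r: probability near 0.5.  r >= 2d: certain overshoot, probability 0.
```

At r near zero the probability approaches 50%, and at r = 2d and beyond it is zero: even a big mutation aimed straight at the optimum overshoots to a point at least as far away as where it started.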
Or as Frances Woolley said: if we get big news about our car's performance, it's probably bad news.
An economy is a mixture of evolved systems (like Fisher's Darwinian process) and designed systems (like Frances' car). You get the same thing either way. The distribution of news is skewed, and most big news is bad news. Like a financial crisis.
2. Most macroeconomic models with microfoundations use the Calvo Phillips Curve, because it's easy to solve. What makes it easy to solve is that the firms that change prices in any period are a purely random, and hence representative, sample of all firms. Because the adjusting firms are representative, their average price change reflects only aggregate conditions: if inflation rises above target, the aggregate output gap must be positive; and if inflation falls below target, the aggregate output gap must be negative. And that means that "Divine Coincidence" is true in these models: a central bank that minimises deviations of inflation from target also minimises the output gap at the same time. Inflation targeting is the best policy.
Ball and Mankiw (pdf) have a more complicated model, where the firms that change prices are not a representative sample of all firms. The distribution of changes to equilibrium relative prices following a real shock is mean-zero by definition, but that distribution may be skewed across firms. And if there are small menu costs, only the firms that want the biggest changes in relative prices will actually change their prices. So Divine Coincidence fails when there are real shocks. But it can fail in either direction: there can be bad shocks, where inflation will rise above target even if the output gap is zero; and there can be good shocks, where inflation will fall below target even if the output gap is zero.
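Here's a minimal simulation of that contrast (my own sketch, not Ball and Mankiw's model: a demeaned lognormal stands in for the skewed, mean-zero distribution of desired relative-price changes, and the menu cost and Calvo adjustment fraction are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000  # firms

# Desired changes in relative prices after a real shock: mean-zero by
# construction, but skewed across firms -- a few firms want big price
# increases, many firms want small price cuts (a "bad" shock).
desired = rng.lognormal(0.0, 1.0, n)
desired = 0.05 * (desired - desired.mean())

# Calvo: a random 25% of firms adjust, so adjusters are representative
# and the average price change at a zero output gap is (about) zero.
calvo = np.where(rng.random(n) < 0.25, desired, 0.0).mean()

# Ball-Mankiw: only firms whose desired change beats the menu cost adjust,
# and with a skewed distribution the adjusters are NOT representative.
menu_cost = 0.03
bm = np.where(np.abs(desired) > menu_cost, desired, 0.0).mean()

print(f"Calvo inflation at zero gap:     {calvo:+.5f}")  # ~ 0
print(f"Menu-cost inflation at zero gap: {bm:+.5f}")     # > 0
```

The Calvo average is roughly zero because the adjusters are representative; the menu-cost average is positive because the big desired increases in the long right tail all clear the threshold, while a disproportionate share of the unadjusted, sub-threshold changes are small cuts.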
3. I'm trying to put points 1 and 2 together to make this third point. But it's not quite that simple, because "skewness" in my first point refers to the time-series distribution of shocks, and "skewness" in my second point refers to the cross-section distribution of shocks. But if the two sorts of skewness go together, we would get an economy where most big shocks were bad shocks, shocks that would also cause inflation to rise above target even if the output gap were zero. "Price shocks" in the Phillips Curve would be upward-skewed. We would get something like Milton Friedman's "Plucking Model": most of the time the economy goes along normally, but sometimes bad things happen, and either inflation rises above target while the output gap stays at zero, or the output gap goes negative while inflation stays on target (recent Canada), or a mix of both (recent UK).
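A rough sketch of that synthesis, reusing the menu-cost mechanism above (the pluck probability and shock sizes are invented for illustration): most periods the cross-section of desired relative-price changes is only mildly skewed and the measured price shock is near zero; occasionally a big, heavily skewed shock arrives and the price shock spikes upward, even though the output gap is zero throughout.

```python
import numpy as np

rng = np.random.default_rng(2)

def price_shock(sigma, n=200_000, menu_cost=0.03):
    """One period's cross-section: mean-zero desired relative-price
    changes (skewness rising with sigma), with menu-cost selection."""
    x = rng.lognormal(0.0, sigma, n)
    desired = 0.05 * (x - x.mean())
    return np.where(np.abs(desired) > menu_cost, desired, 0.0).mean()

# Time series: most periods get mild, near-symmetric real shocks;
# roughly one period in ten gets a big, heavily skewed one -- a pluck.
T = 40
plucked = rng.random(T) < 0.1
for t in range(T):
    s = price_shock(1.5 if plucked[t] else 0.3)
    print(f"t={t:2d}  price shock={s:+.4f}" + ("  <-- pluck" if plucked[t] else ""))
```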
Simon Wren-Lewis notes the recent failure of Divine Coincidence, and shakes his head at the thought of how future economists will look back at our fixation with inflation targeting, like we now look back on the Gold Standard. Partly I think it's because of our human tendency to get obsessed by means even when they stop leading to the desired ends. And partly I think it's because we interpret the world through a microfoundational lens, as we must if we want to talk about what policies would lead to outcomes that people prefer. But if the microfoundational lens is wrong, we won't see things clearly.
So we should target NGDP, not inflation. It won't be perfect, but it should be better.
Hi Nick,
On the issue of selection effects among price changers, Midrigan (https://files.nyu.edu/vm50/public/Virgiliu_Midrigan_files/multiproduct_menucosts.pdf) wrote a nice response paper to Golosov and Lucas (http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.498.5570&rep=rep1&type=pdf).

The idea behind GL was that, subject to a monetary base increase (and a Gaussian idiosyncratic shock), the sample of firms that adjust prices consists of the firms that want to adjust their prices the most. Therefore monetary policy has very short-lived effects. The idea behind Midrigan, aside from other issues not directly related, is that (using the same sample as GL) the price changes we observe are not consistent with GL. GL generates a bimodal distribution with most of the mass around the two modes, because the firms that update prices only want to either increase them by "a lot" or decrease them by "a lot". Midrigan points out that the observed changes in prices are more or less bell-shaped. To generate this distribution he adopts a leptokurtic idiosyncratic shock, and the selection effect among price changers is then very small.

To put it briefly: to replicate the data we have on price changes, the selection effect due to menu costs must be small, since in the data we do observe lots of small price changes. In these slides he compares the two models: http://virgiliu.weebly.com/uploads/1/3/9/8/13982648/lecture_selection.pdf.
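A minimal sketch of the selection-effect point (toy numbers of my own, not GL's or Midrigan's calibration: a Gaussian versus a Gaussian-mixture idiosyncratic shock, a common menu-cost threshold, and a small money shock m):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 2_000_000
m, k = 0.01, 0.07   # small money shock; menu-cost threshold (toy numbers)

def dP(eps):
    """Price-level response to the money shock m when a firm adjusts
    (fully) iff its desired change |m + eps| exceeds the threshold k."""
    level = lambda mm: np.where(np.abs(mm + eps) > k, mm + eps, 0.0).mean()
    return level(m) - level(0.0)

# Gaussian idiosyncratic shocks (GL flavour)
eps_gl = rng.normal(0.0, 0.06, n)
# Leptokurtic shocks (Midrigan flavour): mostly tiny, occasionally huge
eps_mid = np.where(rng.random(n) < 0.9,
                   rng.normal(0.0, 0.01, n), rng.normal(0.0, 0.18, n))

for name, eps in [("Gaussian (GL)", eps_gl), ("leptokurtic (M)", eps_mid)]:
    freq = np.mean(np.abs(eps) > k)   # adjustment frequency at m = 0
    ratio = dP(eps) / (freq * m)      # Calvo benchmark would give 1
    print(f"{name:16s} freq={freq:.3f}  dP/(freq*m)={ratio:.2f}")
# Ratio well above 1 = strong selection (money quickly neutral, GL);
# ratio closer to 1 = weak selection (closer to Calvo, Midrigan).
```

With the leptokurtic mixture, almost every adjusting firm wants a change far beyond the threshold, so shifting the distribution by m reshuffles few firms across it: the selection effect is small and the price response is much closer to the Calvo benchmark.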
Posted by: Roger Gomis | February 10, 2015 at 02:20 AM
Roger: thanks for your comment, and for sending me those links.
On a very quick skim, the GL model looks similar to Caplin and Spulber, with real shocks added. And M is saying that most price changes are temporary sales, where the price returns to its previous level (which is a puzzle, of course, for all standard models).
I will reflect on these.
Posted by: Nick Rowe | February 10, 2015 at 09:00 AM
Hi Nick,
Yes, I didn't remember whether it was in the article, but I looked it up now. M motivates the temporary versus "permanent" price increases at the end of page nine, and supports it with the following quotation on page ten (from a field study in another literature):
“... I was a territory manager so I had no pricing authority. The only authority I had was to go to my boss and I would say, ‘OK., here is the problem I’ve got.’ He would say ‘Fill out a request and we will lower the price for that account.’ So this is how the pricing negotiations went. At that time I went up the chain to make any kind of adjustments I had to make... My five guys have a certain level [of discount] they can go to without calling me. When they get to the certain point they have to get my approval....”
Posted by: Roger Gomis | February 10, 2015 at 10:14 AM
Nick,
I've been out of the biology game for 6 years now, so I hope I'm not too out of the loop, but to expand on Jeremy Fox's explanation: the majority of mutations are actually neutral/benign (occurring in non-coding sections, or not altering functionality or phenotype), and big mutations become more valuable (relatively) when there is a change in equilibrium, for the same reasons as listed above: a change in equilibrium is much more likely to move the optimal phenotype farther from the current phenotype than closer. This is why big mutations still occur, but at a much lower rate, and why they haven't been all but eliminated from the pool. The optimal mutation strategy for something that lays a million eggs will (pulling the percentages out of me bum) be something like 80% no mutation or neutral mutations, 19% small mutations, and 1% big mutations.
One lesson you could draw from this is that if you eliminate or limit the number of big mutations, the organism/system will become more fragile, not more robust, as its ability to react to equilibrium changes is greatly diminished.
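To put rough numbers on that, using Fisher's 2-D formula from the sketch in the post (the distances and effect sizes here are made up):

```python
import numpy as np

def p_beneficial(r, d):
    """Fisher's 2-D result: chance that a mutation of effect size r
    is beneficial when the phenotype is a distance d from the optimum."""
    return np.arccos(min(r / (2 * d), 1.0)) / np.pi

for d, regime in [(0.1, "near-optimal phenotype"),
                  (1.0, "optimum has shifted far away")]:
    for r in [0.05, 0.5]:
        print(f"{regime:30s} r={r:4.2f}  P(beneficial)={p_beneficial(r, d):.2f}")
# Near the optimum, big mutations are never beneficial (certain overshoot);
# after a big equilibrium shift they become nearly as likely to help as
# small ones, while moving the phenotype much farther per step.
```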
Posted by: baconbacon | February 10, 2015 at 12:40 PM
Fisher's geometric model and the Calvo Phillips curve... Never change, Nick. ;-)
Posted by: Jeremy Fox | February 10, 2015 at 02:25 PM
Jeremy: you found it! Your comment had stuck in my mind. The Fisher thing is so brilliant.
bacon: interesting. Another idea for me to reflect on.
Posted by: Nick Rowe | February 10, 2015 at 02:38 PM