I'm hoping some game theorists will chime in here; it doesn't matter if you don't get macro. I need your help, and want your thoughts on my intuitions:
Not all Nash equilibria are created equal.
Game A. There are n identical players who move simultaneously. Player i chooses Si to minimise a loss function Li = (Si-Sbar)^2 + (Si-Sbar)(Sbar-S*), where Sbar is defined as the mean Si over all players, and S* is a parameter that is common knowledge to all players.
This game has a unique Nash equilibrium Si = Sbar = S*.
Game B is exactly the same as game A, except the loss function is now Li = (Si-Sbar)^2 + (Si-Sbar)(S*-Sbar). (I flipped the sign of the last bracketed term.)
This game also has a unique Nash equilibrium Si = Sbar = S*.
I think the Nash equilibrium in game A is plausible, but the Nash equilibrium in game B is implausible.
To help you understand why I think that, let's make an apparently trivial change to both games. Let Si be bounded from above and below, so Sl <= Si <= Su, where Sl < S* < Su.
Game A still has the unique Nash equilibrium Si = Sbar = S*.
Game B now has three Nash equilibria: the original Si = Sbar = S*; the lower bound Si = Sbar = Sl; and the upper bound Si = Sbar = Su. And I find the second and third equilibria equally plausible, and the first (interior) equilibrium very implausible.
If n is large, so that dSbar/dSi = 1/n approaches zero (individuals ignore the effect of their own choice on the average choice), the reaction functions for the two games are:
Game A: Si = S* + 0.5(Sbar-S*)
Game B: Si = S* + 1.5(Sbar-S*)
[Did I get the math right?]
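For what it's worth, here is a quick symbolic check (a minimal Python/sympy sketch, writing Sstar for S* and treating Sbar as a constant, i.e. the large-n case where dSbar/dSi = 0); it seems to confirm both reaction functions:

```python
# Check the large-n reaction functions by solving the first-order conditions,
# treating Sbar as a constant (dSbar/dSi = 0). Symbol names are illustrative.
import sympy as sp

Si, Sbar, Sstar = sp.symbols('Si Sbar Sstar')

L_A = (Si - Sbar)**2 + (Si - Sbar)*(Sbar - Sstar)   # Game A loss
L_B = (Si - Sbar)**2 + (Si - Sbar)*(Sstar - Sbar)   # Game B loss

claimed = {'A': Sstar + sp.Rational(1, 2)*(Sbar - Sstar),
           'B': Sstar + sp.Rational(3, 2)*(Sbar - Sstar)}

for name, L in [('A', L_A), ('B', L_B)]:
    best = sp.solve(sp.diff(L, Si), Si)[0]           # first-order condition in Si
    print(f"Game {name}: Si = {sp.simplify(best)}; matches the post:",
          sp.simplify(best - claimed[name]) == 0)
```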
The reaction functions for the two games look like this:
For game A, the green reaction function crosses the black 45 degree line (for a symmetric Nash equilibrium) only once, at S*.
For game B, the red reaction function crosses the black 45 degree line three times: at S*; at the lower bound Sl; and at the upper bound Su. That's why we get three Nash Equilibria.
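If it helps to see those crossings numerically, here is a minimal sketch (the numbers S* = 2, Sl = 0, Su = 4 are purely illustrative) that scans for fixed points of the clipped reaction functions:

```python
# Scan a grid for symmetric equilibria, i.e. fixed points of the clipped
# large-n reaction function Sbar = clip(S* + slope*(Sbar - S*), Sl, Su).
import numpy as np

S_star, S_lo, S_hi = 2.0, 0.0, 4.0   # illustrative values for S*, Sl, Su

def reaction(Sbar, slope):
    return np.clip(S_star + slope * (Sbar - S_star), S_lo, S_hi)

grid = np.linspace(S_lo, S_hi, 401)
for name, slope in [("A", 0.5), ("B", 1.5)]:
    fixed = grid[np.isclose(reaction(grid, slope), grid)]
    print(f"Game {name}: symmetric equilibria at Sbar = {np.round(fixed, 3)}")
```

Game A should report the single fixed point at S*, and game B the three fixed points at Sl, S*, and Su.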
I think the interior equilibrium in game B is much less plausible than the two degenerate equilibria at the upper and lower bounds.
[Update: I would not board a ferry if I knew that, if the ferry leaned starboard/port, each passenger on the ferry would want to be further to starboard/port than the average passenger.]
If this were a repeated game, I could talk about learning. It would be hard for players to learn the interior equilibrium in game B, because any mistakes they make in predicting what other players do will tend to be self-reinforcing. For example, if all players expect Sbar to be S* + epsilon, the actual Sbar will be S* + 1.5epsilon.
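Here is a rough numerical illustration of that self-reinforcing mistake. The adjustment process is my own assumption (everyone expects some Sbar, best-responds to it, and next period expects the Sbar that actually resulted), and the numbers are illustrative:

```python
# Naive expectations dynamics: expected Sbar -> best responses -> actual Sbar,
# starting from a small mistake epsilon above S*. Values are illustrative.
S_star, S_lo, S_hi, eps = 2.0, 0.0, 4.0, 0.01

for name, slope in [("A", 0.5), ("B", 1.5)]:
    Sbar = S_star + eps
    path = [Sbar]
    for _ in range(15):
        Sbar = min(max(S_star + slope * (Sbar - S_star), S_lo), S_hi)
        path.append(Sbar)
    print(f"Game {name}: first rounds {[round(x, 4) for x in path[:5]]},"
          f" after {len(path) - 1} rounds {round(path[-1], 4)}")
```

With the 0.5 slope the initial mistake shrinks by half each round; with the 1.5 slope it grows each round until Sbar hits a bound.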
But even in a one-shot game I do not think the interior equilibrium in game B is plausible.
I was trying to figure out what would happen if the n players moved sequentially (observing previous players' moves before making their own move), and each player had a small probability of making a totally random move anywhere between Sl and Su (trembling hand). The math was too hard for me, but I think it would make the interior equilibrium in game B very improbable, if n was large. Am I right?
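Since I couldn't do the math, here is a very crude Monte Carlo sketch instead. It is not the exact sequential equilibrium: I simply assume each non-trembling player forecasts Sbar using the mean of the moves observed so far (S* if there are none yet), best-responds to that forecast, and then I count how often the final average ends up near S* versus near a bound. All parameters are illustrative.

```python
# Crude Monte Carlo sketch of sequential play with trembles, under an ad hoc
# behavioural rule (forecast Sbar = mean of observed moves). NOT the exact
# sequential equilibrium; parameters are illustrative.
import random

S_star, S_lo, S_hi = 2.0, 0.0, 4.0
n, tremble_prob, trials = 500, 0.05, 1000

def final_mean(slope):
    total = 0.0
    for k in range(n):
        if random.random() < tremble_prob:
            s = random.uniform(S_lo, S_hi)                 # tremble: random move
        else:
            forecast = total / k if k else S_star
            s = min(max(S_star + slope * (forecast - S_star), S_lo), S_hi)
        total += s
    return total / n

for name, slope in [("A", 0.5), ("B", 1.5)]:
    means = [final_mean(slope) for _ in range(trials)]
    near_star = sum(abs(m - S_star) < 0.25 for m in means) / trials
    near_bound = sum(m < S_lo + 0.25 or m > S_hi - 0.25 for m in means) / trials
    print(f"Game {name}: share near S* = {near_star:.2f},"
          f" share near a bound = {near_bound:.2f}")
```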
(The two games here are highly stylised versions of a New Keynesian macroeconomic model. The players are price setting firms, Si is the amount by which firm i chooses to raise its price, so Sbar is the economy-wide inflation rate. Assume for simplicity the natural real rate of interest is 0%.
In game B, S* is the nominal interest rate set by the central bank.
In game A, S* is the central bank's inflation target, and there is a prior stage in the game where the central bank announces that it will set the nominal interest rate in accordance with the Howitt/Taylor principle, after observing Sbar.
The upper and lower bounds on inflation represent the idea that if inflation gets too high or too low the central bank will change the game, by adopting QE, for example.
Strictly speaking, I should add a third term like (S*-Sbar-m)^2 to the loss function, where m is the degree of monopoly power, to represent the losses from monopoly power in a New Keynesian model. But I ignored it for simplicity, since it doesn't affect my point here.)
Playing 2 is the weakly dominant strategy for both, no?
Posted by: notsneaky | September 20, 2015 at 04:42 PM
I hesitate to try a table in the comments, but here goes. The payoff for each player. The opponent plays (0, 1, 2).
Player's play | Payoffs when the opponent plays (0, 1, 2)
0 | ( 0, 0, -2)
1 | (-1, 0, -1)
2 | (-2, 0, 0)
Note that this game is an example of Game B, with the payoffs doubled. :)
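A brute-force check of the pure-strategy equilibria in that table (a quick Python sketch, with the payoffs transcribed as above) should print the three symmetric profiles (0, 0), (1, 1) and (2, 2):

```python
# Brute-force search for pure-strategy Nash equilibria in the symmetric
# 3x3 game above. payoff[i][j] = row player's payoff playing i against j.
payoff = [[ 0,  0, -2],
          [-1,  0, -1],
          [-2,  0,  0]]

def is_best_response(i, j):
    # i is a best response to j if no other play does strictly better against j
    return payoff[i][j] >= max(payoff[k][j] for k in range(3))

equilibria = [(i, j) for i in range(3) for j in range(3)
              if is_best_response(i, j) and is_best_response(j, i)]
print(equilibria)
```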
Posted by: Min | September 20, 2015 at 07:49 PM
Nick,
Usually, we use equilibrium to indicate what the model predicts will happen, but it's important to note that this is not the definition of equilibrium. In some cases, the distinction is not that important but in others it is. The equilibrium in the prisoner's dilemma is highly predictive. In Min's game above, the various equilibria tell us little about what will actually happen. There are other hairs to split here too. In a long-run growth model, technically the model doesn't predict that the steady-state will ever "happen" exactly (depending on the details of the model) unless you happen to start there. Yet we still basically treat it as what the model predicts because it predicts that we would *approach* it.
Regarding your question about preventing jumps from one equilibrium to another, I don't think there is any general answer. There is nothing inherent in the definition of NE that deals with jumping from one to another. Like I said, the definition just amounts to a state in which nothing is expected to push it away. If you are talking about what "pushes" something to one equilibrium or another, you are going beyond the concept of equilibrium. This is why it's important to keep the proper definition of equilibrium in mind. BTW, there is nothing wrong with talking about what pushes something to one equilibrium or another. The competitive market equilibrium is just a NE to a simultaneous-move game but it makes for a compelling *predictive* model in part because we can tell a story about what would happen if the price were higher or lower than the equilibrium price and that story "pushes" the price toward the equilibrium. If, on the other hand, our story seemed to push the price away from that equilibrium, it would be a much less convincing prediction but it would still be an equilibrium.
Posted by: Mike Freimuth | September 21, 2015 at 04:38 PM
Mike: " In a long-run growth model, technically the model doesn't predict that the steady-state will ever "happen" exactly (depending on the details of the model) unless you happen to start there."
Agreed. In a long run growth model, for example, there is a ("short run") equilibrium path predicted by the model, and there may (or may not) be a "long run" steady state equilibrium, independent of initial conditions, that may (or may not) be stable.
Posted by: Nick Rowe | September 21, 2015 at 04:47 PM