I'm hoping some game theorists will chime in here; it doesn't matter if you don't get macro. I need your help, and want your thoughts on my intuitions:
Not all Nash equilibria are created equal.
Game A. There are n identical players who move simultaneously. Player i chooses Si to minimise a loss function Li = (Si-Sbar)^2 + (Si-Sbar)(Sbar-S*), where Sbar is defined as the mean Si over all players, and S* is a parameter that is common knowledge to all players.
This game has a unique Nash equilibrium Si = Sbar = S*.
Game B is exactly the same as game A, except the loss function is now Li = (Si-Sbar)^2 + (Si-Sbar)(S*-Sbar). (I flipped the sign of the last bracketed term.)
This game also has a unique Nash equilibrium Si = Sbar = S*.
I think the Nash equilibrium in game A is plausible, but the Nash equilibrium in game B is implausible.
To help you understand why I think that, let's make an apparently trivial change to both games. Let Si be bounded from above and below, so Sl <= Si <= Su, where Sl < S* < Su.
Game A still has the unique Nash equilibrium Si = Sbar = S*.
Game B now has three Nash equilibria: the original Si = Sbar = S*; the lower bound Si = Sbar = Sl; and the upper bound Si = Sbar = Su. And I find the second and third equilibria equally plausible, and the first (interior) equilibrium very implausible.
If n is large, so that dSbar/dSi = 1/n approaches zero (individuals ignore the effect of their own choice on the average choice), the reaction functions for the two games are:
Game A: Si = S* + 0.5(Sbar-S*)
Game B: Si = S* + 1.5(Sbar-S*)
[Did I get the math right?]
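A quick check, treating Sbar as given (the large-n case). A minimal sympy sketch, with Ss standing in for S*:

    import sympy as sp

    Si, Sbar, Ss = sp.symbols('Si Sbar Ss')   # Ss stands in for S*

    LA = (Si - Sbar)**2 + (Si - Sbar)*(Sbar - Ss)   # game A loss
    LB = (Si - Sbar)**2 + (Si - Sbar)*(Ss - Sbar)   # game B loss

    brA = sp.solve(sp.diff(LA, Si), Si)[0]   # best response, Sbar taken as given
    brB = sp.solve(sp.diff(LB, Si), Si)[0]

    print(sp.simplify(brA - (Ss + sp.Rational(1, 2)*(Sbar - Ss))))   # prints 0
    print(sp.simplify(brB - (Ss + sp.Rational(3, 2)*(Sbar - Ss))))   # prints 0

Both differences print 0, so the slopes 0.5 and 1.5 check out.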
The reaction functions for the two games look like this:
For game A, the green reaction function crosses the black 45 degree line (for a symmetric Nash equilibrium) only once, at S*.
For game B, the red reaction function crosses the black 45 degree line three times: at S*; at the lower bound Sl; and at the upper bound Su. That's why we get three Nash Equilibria.
I think the interior equilibrium in game B is much less plausible than the two degenerate equilibria at the upper and lower bounds.
[Update: I would not board a ferry if I knew that, if the ferry leaned starboard/port, each passenger on the ferry would want to be further to starboard/port than the average passenger.]
If this were a repeated game, I could talk about learning. It would be hard for players to learn the interior equilibrium in game B, because any mistakes they make in predicting what other players do will tend to be self-reinforcing. For example, if all players expect Sbar to be S* + epsilon, the actual Sbar will be S* + 1.5epsilon.
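Here is a minimal sketch of that learning story (illustrative numbers: S* = 2, Sl = 0, Su = 4): iterate the best response, clipped to the bounds, and watch what a small initial mistake does.

    # Best-response learning: Sbar(t+1) = S* + b*(Sbar(t) - S*), clipped
    # to [Sl, Su]; b = 0.5 is game A, b = 1.5 is game B.
    def iterate(b, sbar0, s_star=2.0, lo=0.0, hi=4.0, steps=30):
        sbar = sbar0
        for _ in range(steps):
            sbar = min(max(s_star + b*(sbar - s_star), lo), hi)
        return sbar

    print(iterate(0.5, 2.1))   # game A: the small mistake dies out -> 2.0
    print(iterate(1.5, 2.1))   # game B: the small mistake snowballs -> 4.0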
But even in a one-shot game I do not think the interior equilibrium in game B is plausible.
I was trying to figure out what would happen if the n players moved sequentially (observing previous players' moves before making their own move), and each player had a small probability of making a totally random move anywhere between Sl and Su (trembling hand). The math was too hard for me, but I think it would make the interior equilibrium in game B very improbable, if n was large. Am I right?
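For what it's worth, here is a crude Monte Carlo sketch of that sequential game, with purely backward-looking firms (each best-responds to the average of the moves observed so far, which ignores the forward-looking part of Sbar raised in the comments below) and illustrative numbers:

    import random

    def final_sbar(b, n=1000, s_star=2.0, lo=0.0, hi=4.0, p=0.01, seed=0):
        rng = random.Random(seed)
        moves = [s_star]                   # seed the first mover at S*
        for _ in range(n - 1):
            m = sum(moves) / len(moves)    # average of moves observed so far
            if rng.random() < p:
                s = rng.uniform(lo, hi)    # trembling hand: uniform on [Sl, Su]
            else:
                s = min(max(s_star + b*(m - s_star), lo), hi)
            moves.append(s)
        return sum(moves) / n

    for seed in range(5):
        gap_A = abs(final_sbar(0.5, seed=seed) - 2.0)
        gap_B = abs(final_sbar(1.5, seed=seed) - 2.0)
        print(round(gap_A, 3), round(gap_B, 3))
    # Game A: the trembles wash out, and Sbar stays close to S*.
    # Game B: early trembles get amplified by everyone who follows, so
    # Sbar typically ends up much further from S*.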
(The two games here are highly stylised versions of a New Keynesian macroeconomic model. The players are price setting firms, Si is the amount by which firm i chooses to raise its price, so Sbar is the economy-wide inflation rate. Assume for simplicity the natural real rate of interest is 0%.
In game B, S* is the nominal interest rate set by the central bank.
In game A, S* is the central bank's inflation target, and there is a prior stage in the game where the central bank announces that it will set the nominal interest rate in accordance with the Howitt/Taylor principle, after observing Sbar.
The upper and lower bounds on inflation represent the idea that if inflation gets too high or too low the central bank will change the game, by adopting QE, for example.
Strictly speaking, I should add a third term like (S*-Sbar-m)^2 to the loss function, where m is the degree of monopoly power, to represent the losses from monopoly power in a New Keynesian model. But I ignored it for simplicity, since it doesn't affect my point here.)
Think of the equilibrium-plus-one extension of the game, where one single player gets to "peek" at everyone else's move before deciding theirs. In this case, Sbar and S* are both observed quantities, so the loss function becomes:
(Si-Sbar)^2 + (Si-Sbar)*k
where k is Sbar-S*.
The loss function is most negative (representing a gain) when Si-Sbar = -k/2, or Si = -k/2+Sbar = 0.5*(Sbar+S*)
That is, the last-mover wants to pick Si as the average of the target and everyone else's choice. This is analogous to the "guess 2/3 of the average" game, where observed results reflect two to three "iterations" of rationality.
Now, if k were S*-Sbar, the optimum point for our last-mover would instead be Si = 1.5*Sbar - 0.5*S* = 0.5*(Sbar+S*) + (Sbar - S*) -- the last-mover wants to be further from the target than the average, and it wants to be (Sbar-S*) away from the good-loss-function case.
Taking these results, we can extend them to a fully iterative game. The (Sbar-S*) case results in a stable convergence to the Nash equilibrium, whereas (S*-Sbar) results in a divergence away from it if Si is not bounded.
Posted by: Majromax | September 03, 2015 at 10:43 AM
Majro: thanks for your comment. I don't fully understand it yet (maybe just me), but it seems to be along the right lines.
(I need to set interest rates in 30 minutes, and prepare for a canoe trip, so I may not respond to comments immediately.)
Posted by: Nick Rowe | September 03, 2015 at 10:54 AM
I think the concept you're looking for is "Trembling hand perfect equilibrium", and related concepts. Basically, this is a Nash Equilibrium that is robust to vanishingly small random deviations.
An analogy I like is a boulder perched at the top of a hill vs. in a valley. Both are equilibria, but the former is not robust to small perturbations, because then the boulder rolls down the hill.
Posted by: jonathan | September 03, 2015 at 11:14 AM
jonathan: I think you are right. I have a very rough understanding of trembling hand perfection, but can't figure out if the interior equilibrium in game B is trembling hand perfect (with sequential moves?). I think it isn't, but can't prove it.
Posted by: Nick Rowe | September 03, 2015 at 11:22 AM
Nick: Actually, I looked at the payoff functions you wrote down, and I think you meant to write something else.
For example, for Game A, Li = (Si-Sbar)^2 + (Si-Sbar)(Sbar-S*), and so *any* choice of S is a Nash equilibrium as long as everyone coordinates on it, because both terms will be 0 as long as Si = Sbar, which will always be true when Si = Sj for all i,j.
Posted by: jonathan | September 03, 2015 at 11:49 AM
Wait, never mind. I was thinking that the loss function took a minimum value of 0, but in fact it can go negative (I was thrown off by its being a quadratic-looking loss function).
Posted by: jonathan | September 03, 2015 at 11:53 AM
jonathan: phew! You are right to check my math; I so often get it wrong.
Posted by: Nick Rowe | September 03, 2015 at 12:08 PM
@Nick Rowe:
> Majro: thanks for your comment. I don't fully understand it yet (maybe just me), but it seems to be along the right lines.
Think of your game with a Calvo Fairy. The firm that gets the fairy's touch sets its price this round, while all others can't.
For the single price-setting firm, both S* and Sbar are observables. Version #1 would have the firm set its price such that Sbar moves (epsilon) towards S*, Version #2 would have the firm set its price such that Sbar moves away from S*.
Posted by: Majromax | September 03, 2015 at 12:32 PM
Majro: " Version #1 [A] would have the firm set its price such that Sbar moves (epsilon) towards S*, Version #2 [B] would have the firm set its price such that Sbar moves away from S*."
But Sbar is the average not just of past prices, but of future prices too, so it's not quite that simple, unless we assume purely backward-looking expectations in the sequential-moves version of the game.
Posted by: Nick Rowe | September 03, 2015 at 01:45 PM
In the A game the equilibrium is stable: Suppose sbar > s*. Best response si is below the 45 degree line. Everybody plays it. So sbar moves closer to s*. New best response is still below 45 degree line. So sbar moves closer to s*. Etc., and same for sbar < s*
In the B game the middle equilibrium is unstable. Suppose sbar > s*. Best response si is above the 45 degree line. Everybody plays it. So sbar moves further away from s*, towards that high value of si. Etc. If sbar < s*, sbar moves further down, towards the low value. Unstable.
What matters is not that there's a lower and upper bound but the fact that in game A the reaction function is flatter than 45 degree line and in game B it's steeper.
Unstable equilibria are implausible.
(I only looked at the graph, didn't do the math)
Posted by: notsneaky | September 04, 2015 at 11:26 AM
@Nick Rowe:
> But Sbar is the average not just of past prices, but of future prices too, so it's not quite that simple, unless we assume purely backward-looking expectations in the sequential-moves version of the game.
It still works with forward-looking expectations.
With backwards-looking expectations, you gain (negative loss) if you play a move that is between the target and the average. Adding in forward-looking expectations, if you expect other players to be rational that increases your weighting of the target, making you move the average towards the target even more strongly.
As another illustration, consider a "hyper conformist" version of this game where everyone's price is initially Pbar, but they may only move by plus or minus ε, not freely over the entire range. With the conventional loss function and Pbar > P*, the sole Nash equilibrium is for everyone to price at Pbar-ε. With the unconventional loss function, the Nash equilibrium is for everyone to price at Pbar+ε.
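A brute-force check of that claim, with illustrative numbers Pbar = 2, P* = 1, ε = 0.1, n large (so a single deviation leaves the average at the common play), and staying put at Pbar also allowed:

    # Symmetric-equilibrium check over the three feasible prices.
    def is_sym_ne(loss, x, grid):
        return all(loss(y, x) >= loss(x, x) for y in grid)

    Pbar, Pstar, eps = 2.0, 1.0, 0.1
    grid = [Pbar - eps, Pbar, Pbar + eps]
    loss_A = lambda y, sbar: (y - sbar)**2 + (y - sbar)*(sbar - Pstar)
    loss_B = lambda y, sbar: (y - sbar)**2 + (y - sbar)*(Pstar - sbar)
    print([x for x in grid if is_sym_ne(loss_A, x, grid)])   # [1.9]
    print([x for x in grid if is_sym_ne(loss_B, x, grid)])   # [2.1]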
Posted by: Majromax | September 04, 2015 at 02:54 PM
"Game A. There are n identical players who move simultaneously. Player i chooses Si to minimise a loss function Li = (Si-Sbar)2 + (Si-Sbar)(Sbar-S*), where Sbar is defined as the mean Si over all players, and S* is a parameter that is common knowledge to all players."
OK. Li = (Si-Sbar)2 + (Si-Sbar)(Sbar-S*)
= (Si-Sbar)(Si-S*) ≧ 0
OC, if Si = S* then Li = 0.
If Si > S* then Si ≥ Sbar, and if Si < S* then Si ≤ Sbar. That can only be true for each i if Si = Sbar or Sbar = S*. In the first case, Li = 0, and in the second case, Li = (Si-S*)^2. Neither case can be assumed, I think.
"Game B is exactly the same as game A, except the loss function is now Li = (Si-Sbar)2 + (Si-Sbar)(S*-Sbar)."
OK. Li = (Si-Sbar)2 + (Si-Sbar)(S*-Sbar) ≥ 0
∂Li/∂Si = 2Si - 3Sbar + S* = 0, approximately, with n large.
Then Si = (3Sbar-S*)/2 and
Si-Sbar = (Sbar-S*)/2 and
S*-Sbar = -2(Si-Sbar) and so
Li = (Si-Sbar)^2 - 2(Si-Sbar)(Si-Sbar) = -(Si-Sbar)^2 ≥ 0
which is true only if Si = Sbar. That is, the only way Player i can set Si = (3Sbar-S*)/2 is when Si = Sbar and Si = S*. Again, I doubt if we can assume those conditions.
OC, none of this holds if we are talking about payoffs instead of loss functions.
Posted by: Min | September 04, 2015 at 04:05 PM
if n -> large and the game is symmetric, then
A: the cost function is (at the limit) a parabola with a min at S*,
B: the cost function is (at the limit) a hyperbola with a MAX at S*
if [Sl,Su] is not symmetric around S* then the cost function has a min at that end of the interval further away from S*
if the interval is symmetric then the cost function has a min (the same) at both ends
It seems that in the non-symmetric *interval* case what matters is the degree of asymmetry. If the short end is on the Su side then I think there are values that Su can take that will make the other end either less desirable or more desirable. In the symmetric *interval* case, on the other hand, I can't seem to find a reason for either end to be more desirable.
making the game sequential shouldn't change anything, since everybody is insignificant and does the same thing
now if n is not large enough and/or everybody's different I don't know, it seems too messy
Posted by: john | September 05, 2015 at 02:59 AM
"B: the cost function is (at the limit) a hyperbola with a MAX at S*"
should read
"B: the cost function is (at the limit) a hyperbola with a SADDLE POINT at S*"
sorry
Posted by: john | September 05, 2015 at 03:09 AM
notsneaky: I edited your comment to put a space either side of the < . Otherwise, Typepad freaks out, because it thinks there's a link coming.
"Unstable equilibria are implausible."
I tend to agree. But some economists would reject that use of the word "unstable". I am trying to restate that objection in another way.
Majro: "It still works with forward-looking expectations."
Aha. Interesting, if we can prove it. (My brain is still out canoeing, so I can't wrap my head around the rest of your comment.)
Min: you lost me, sorry.
john: If the n players colluded (minimised their joint losses), they would choose S* in game A, and either Sl or Su in game B. But we are assuming there is no collusion. Each player chooses Si to minimise his own losses taking other players' Sj's as given. (I'm not sure whether or not you are assuming that.)
Posted by: Nick Rowe | September 07, 2015 at 08:40 AM
Nick - yeah, but that's on them. Do you have a specific example in mind?
I think the point is also that the existence of the lower and upper bound doesn't really matter; it's not what makes the B interior equilibrium implausible. It's the slope of the reaction function that makes it implausible (if you had the bounds in the A economy nothing would change).
One thing I'm a little hung up on though is that you want these guys to move sequentially and you want n large, which means something like "n goes to infinity". But wouldn't that mean that effectively every player just gets to move once (have to wait until infinity to make another move)?
I guess you could assume a continuum of agents and the moves would be sort of like a seconds-hand on a clock "sweeping" the circle. If that was in "model time" (within each period) then it'd be the same as the static game. If that was actual time... that'd be hard to analyze (I think).
Posted by: notsneaky | September 07, 2015 at 01:34 PM
Nick,
Should you not be using an absolute value function?
Game A:
Li = ABS ((Si-Sbar)^2 + (Si-Sbar)(Sbar-S*)) = ABS ((Si - Sbar)(Si - S*))
Game B:
Li = ABS ((Si-Sbar)^2 + (Si-Sbar)(S* - Sbar)) = ABS ((Si - Sbar)(Si + S* - 2 Sbar))
Under game A, the minimum loss occurs when player i sets Si = Sbar or S*.
Under game B, the minimum loss occurs when player i sets Si = Sbar or 2 Sbar - S*.
Under game A, a player knowing S* and without knowing Sbar would choose to set his Si = S*. Under game B, a player knowing S* and without knowing Sbar might set his Si = some multiple of S*:
Si = k x S*
Rewriting the loss function for game B:
Li = ABS ((k x S* - Sbar)(k x S* + S* - 2 Sbar))
Setting Li = 0
0 = ABS ((k x S* - Sbar)(k x S* + S* - 2 Sbar))
Since a player does not know what Sbar is, he would pick a k that minimizes both ABS(k x S* - Sbar) and ABS((k + 1) x S* - 2 Sbar).
0 = ABS (k x S* - Sbar) = ABS(k x S* + S* - 2 Sbar)
Sbar = k x S* = ( k x S* + S* ) / 2
k = 1
And so, even under game B
Si = 1 * S* = S*
If we are not using the absolute value function, then we need to use negative infinity as our minimum value for Li.
Posted by: Frank Restly | September 07, 2015 at 07:18 PM
notsneaky:
Well, I do have specific neo-fisherian examples in mind, but, this point goes beyond neo-fisherians. Many economists will say: "Well, if the model says S* is the equilibrium, then the model says S* will happen. Period. And "unstable" simply means that S*(t) moves over time, and does not return to its original position following a shock."
"I think the point is also that the existence of the lower and upper bound doesn't really matter; it's not what makes the B interior equilibrium implausible. It's the slope of the reaction function that makes it implausible (if you had the bounds in A economy nothing would change)"
I tend to agree. Introducing the bounds was partly a rhetorical device. But partly to answer the question: "OK, so you don't think S* is what will happen in game B. So, what *will* happen??"
"I guess you could assume a continuum of agents and the moves would be sort of like a seconds-hand on a clock "sweeping" the circle. If that was in "model time" (within each period) then it'd be the same as the static game."
Not necessarily the same as the static game. Sometimes, who moves first matters, because you can observe the others' moves before making your own, so you can make your move contingent on theirs.
I want to write a blog post with the title: "Nash equilibria that are 'unstable' (in the old-fashioned sense) are not trembling hand perfect". If I could prove it. That would resurrect and validate that old-fashioned sense of "unstable".
Posted by: Nick Rowe | September 07, 2015 at 09:55 PM
You could do something like this with this game:
Suppose there's a (countably) infinite number of players where each player is indexed by the time period in which they make their move. This is sort of a "small n" for initial players but "large n" for later players set-up. My understanding of what you're trying to do is that a player's choice of s(t) (since we're doing time I'm switching subscripts from i to t) affects sbar contemporaneously. So sbar=(s(t)+Q(t-1))/(t+1), where Q(t-1) is the sum of all previous choices up to time t-1. There's no "taking the average as given" fudging here, so players understand that by choosing s(t) they're also affecting sbar. We start at t=0 and assume that s(0)=s(1)=s* (we have to give it two time periods because the n=2 case is weird).
Let r(t) denote the actual outcome of choice s(t), where r(t)=s(t)+e(t) for t>1 and e(t) is white noise. Since each player moves only once they do the best they can in that period, so we don't have to worry about future forecasts and all that.
*algebra*
In Game A the optimal choice of s(t) is s(t)=((t+1)*Q(t-1)+t*(t+1)*s*)/(2*t^2+2*t)
In Game B the optimal choice of s(t) is s(t)=((3*t-1)*Q(t-1)-t*(t+1)*s*)/(2*t^2-2*t)
*more algebra*
In Game B we have
s(t)-s*=((3t-1)/(2t-2))*(Q(t)-s*)
That coefficient is greater than 1. So s(t)-s* diverges from Q(t)-s* and s(t) explodes unless Q(t)=s* for all t. (Note: Q(t) here is the average so far, not the sum as above.)
Alternatively we could compute r(t)-s* explicitly. It's basically r(t)-s*=e(t)+a(t-1)*e(t-1)+a(t-2)*e(t-2)+... . The a(t)'s are time dependent coefficients, which are really tedious to compute (ratios of time polynomials). It's a moving average. We can use that to compute variance and autocovariances and check if it's stationary. Strangely enough the variance is actually bounded (I think). But the process is non-stationary so any mistake gets built into subsequent choices and never dies out. s(t) goes to +/- infinity.
(You can also just simulate this in a spreadsheet with little trouble)
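In Python, for instance (a sketch of the two decision rules above, with the realized choices r(t) = s(t) + e(t) accumulating into the running sum Q):

    import random

    def simulate(game, T=5000, s_star=1.0, sd=0.05, seed=0):
        rng = random.Random(seed)
        Q = 2 * s_star                                   # s(0) = s(1) = s*
        for t in range(2, T):
            if game == 'A':
                s = ((t + 1)*Q + t*(t + 1)*s_star) / (2*t*t + 2*t)
            else:
                s = ((3*t - 1)*Q - t*(t + 1)*s_star) / (2*t*t - 2*t)
            Q += s + rng.gauss(0, sd)                    # mistakes accumulate
        return s

    print(simulate('A'))   # stays glued to s* = 1.0
    print(simulate('B'))   # wanders away from s*: the mistakes compound

For what it's worth, in these runs game A settles at s* rather than s*/2; the 1/2 in the long-run limit below may come from substituting Q(t)=sbar where Q is a sum of t terms, so Q ≈ t*sbar, and the fixed point of s = Q/(2t) + s*/2 is sbar = s*.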
Here's the somewhat weird part and it actually occurs in the Game A version. That process is stationary and the effects of mistakes die out but ... s(t) does not converge to s* but rather to 1/2 s*. Maybe my algebra was wrong but you can see it above:
s(t)=((t+1)*Q(t-1)+t*(t+1)*s*)/(2*t^2+2*t)
In "long run" Q(t)=sbar, so
s(t)=((t+1)^2)/(2*t^2+2*t)s* --> (1/2)s*
The mistakes don't die out fast enough, stay in the process long enough so that you never get to the optimum.
This'd be clearer if one could write math in a blog comment.
Posted by: notsneaky | September 07, 2015 at 11:21 PM
I'm also pretty sure that at the last ASSA I saw somebody doing comparative statics on a model with an unstable equilibrium and obtaining "counter intuitive results"
Posted by: notsneaky | September 07, 2015 at 11:58 PM
Notsneaky,
I think the reason you are seeing the Si = (1/2) x S* term is that choosing that value can generate a negative Li value (negative loss = positive gain?). If Si = 1/2 x S* then:
Li = -1/4 x S*^2 + 1/2 x S* x Sbar
If S* >> Sbar, then the -1/4 x S*^2 becomes the dominant term. Suppose S* is 1000 and Sbar is currently 100. i starts at 1.
L1 = -1/4 x 1 million + 1/2 x 1000 x 100 = -200,000
That is the maximum negative loss (positive gain) that can be obtained. If instead you use the absolute value:
Li = ABS ((Si - Sbar)(Si - S*))
Then it becomes clear that a player should set Si = S* to minimize Li.
Posted by: Frank Restly | September 08, 2015 at 12:16 AM
I don't think that's it though I'd have to think through it. Rather it's that si=s* is only optimal if sbar=s*. This is always true in the static version. But in the dynamic version because people make mistakes (the errors) these mistakes, which die out but not fast enough, accumulate and keep sbar away from s*.
Posted by: notsneaky | September 08, 2015 at 12:27 AM
Notsneaky,
I don't think people need to make mistakes to keep Sbar away from S*. If negative losses (positive gains) are allowed in the loss function, then people are sometimes encouraged to keep Sbar away from S*. Notice that if the opposite is true (S* << Sbar) then people should choose an Si = 2 x S*.
Li = ((2S* - Sbar)(2S* - S*)) = 2 x S*^2 - S* x Sbar
This time suppose S* is 100 and Sbar is currently 1000. i starts at 1.
L1 = 2 x 10,000 - 100 x 1000 = -80,000
Posted by: Frank Restly | September 08, 2015 at 12:59 AM
Frank, here there's nothing special about negative losses. All that matters is that for some si, loss is less than for some other si. You can add an arbitrarily large constant to that loss function if you want. It's the ordering, not the sign that matters.
If there are no errors and initial choices are s* then everyone will keep playing s*. When you say above "Suppose s* is 1000 and sbar is currently 100" you are implicitly assuming that some mistakes have been made.
Posted by: notsneaky | September 08, 2015 at 01:58 AM
Ah, I think I see what you're saying. I'm tripping up over the same thing that got jonathan momentarily confused above. s* is not actually the optimum since the function can go negative. Got it.
Posted by: notsneaky | September 08, 2015 at 02:09 AM
Nick Rowe: "Min: you lost me, sorry."
Let me analyze Game A more carefully. :) We have this loss function for player, i:
Li = (Si-Sbar)^2 + (Si-Sbar)(Sbar-S*) = (Si-Sbar)(Si-S*) ≥ 0
The equation is an algebraic simplification and the inequality is because a loss function is non-negative. As it turns out, it is easy for each player to minimize the loss function by choosing Si = S*, since the minimum is 0. However, let us look at the loss function itself. In fact, let us look at all of the loss functions.
If Si = S* then Li = 0, and the inequality is satisfied. If Si > S*, then Si > Sbar, and if Si < S*, then Si < Sbar. A priori, we do not know how close any S* will be to Sbar, but if S* − Sbar = ε > 0, then Si − Sbar ≥ ε for every Si > Sbar, and similarly for Si < Sbar. I said that to be sure that each Li was, in fact, ≥ 0, regardless of player i's choice of Si, Si had to equal S* or Sbar, or S* had to equal Sbar. Now, it is possible, as indicated above, for each Li to be non-negative when none of those conditions hold, but it is not possible to guarantee it if the choice of Si is unconstrained. And you have to guarantee it for each Li to be a loss function.
In real life, I don't think that you can guarantee it, and so the Li's are not really loss functions.
As others have also pointed out, in game B it is easy for Li to be negative, so it is not a loss function, either.
Posted by: Min | September 08, 2015 at 03:18 AM
I think this may be clearer about Game A. If S* != Sbar, then it is possible that for some i, Si falls in between S* and Sbar, and then Li < 0, and Li is not a loss function. :)
Posted by: Min | September 08, 2015 at 03:36 AM
notsneaky: unfortunately, it's harder than that. Let n=100. Consider the 7th player in the sequential game. Sbar is the weighted average of three things:
1. The S of the 6 previous players;
2. his own S(7);
3. his expectation of the S of the 93 remaining players.
If n is large, we can ignore 2, and we can maybe (though I'm not sure of this) ignore the effect of 2 on 3, but we can't ignore 3.
Min: put a minus sign in front of the loss function, and call it a utility function, if you like.
Posted by: Nick Rowe | September 08, 2015 at 08:47 AM
I see. I was trying to set it up so as to not have to deal with 3.
So player 7's payoff is the loss they experience in period 7 (when they make a choice) plus the loss they experience in periods 8+. In those 8+ periods they're done making choices but still are affected by the choices of others. I'm assuming there's some discounting going on in there to make that sum finite. Is that how it's supposed to work?
Posted by: notsneaky | September 08, 2015 at 09:16 AM
notsneaky: yep, if my games are to represent the NK model, we have to include 3. Since what matters is the individual firm's P(i) relative to the general price level P.
We don't need discounting to make the sum finite, since we divide the sum by n.
Posted by: Nick Rowe | September 08, 2015 at 10:03 AM
But it's the sum of Li's, not the sum of si's.
So it'd be something like this: V_t(s(t)) is the value function for player moving at time t. The Bellman is deceptively simple
V_t=L(s(t))+b*EV_(t+1)
(maybe the constraint that r(t)=s(t)+e(t) needs to be made explicit)
The first order condition is then
Sum_{j=0,inf} (b^j)*(dL(t+j)/ds(t)) = 0
s(t) appears in L(t+j), j>0 because of its effect on the average.
To be able to even begin figuring this out I think you'll probably need to assume that the errors, the e(t)'s, are bounded (so they can't be white noise - this is an assumption made in all the NK models which I always thought weird and I've never seen anyone really dig into that)
Second, I think at that point you pretty much have to assume that the optimal sequence s(t) is bounded otherwise this isn't defined, even with discounting. That pretty much rules out Game B, unless you put those lower and upper bounds in there.
(there's also another issue and that's whether or not decision makers anticipate that they'll make an error. I.e. "if I choose s(t), the actual variable will depend on s(t)+e(t), so I should account for that. If I was risk neutral this wouldn't matter but I've got a quadratic loss function so...")
Posted by: notsneaky | September 08, 2015 at 10:49 AM
Nick Rowe: "Min: put a minus sign in front of the loss function, and call it a utility function, if you like."
OK, but the (Si-Sbar)^2 is a dead giveaway for a loss function, since squaring the difference makes the term non-negative. In addition, the (Si-S*) factor makes it easy for the players to set Si = S* to obtain the minimum loss of 0, which is the desired theoretical result, right? So I suspect that the Li's for game A were originally intended in New Keynesian theory to be loss functions, as advertised.
Posted by: Min | September 08, 2015 at 01:01 PM
Let's go back to the game-as-specified, without dividing it into rounds. We will, however, add a degree of Bayesian expectations.
We'll start the game with a prior distribution for Sbar, such that we believe Sbar to be a random variable with mean S0 plus some small random ε. That S0 is a prior fixation point -- perhaps a previous Sbar, or perhaps a number whispered to everyone upon entering the room.
Now, expand the standard loss function in terms of ε to get Li = (Si-S0)^2 + (Si-S0)*(S0-S*) - 2*(Si-S0)*ε - (S0-S*)*ε + (Si-S0)*ε + ε^2 - ε^2 = (Si-S0)^2 + (Si-S0)*(S0-S*) - (Si-S*)ε.
This loss function has a couple of mathematically nice features: there is no ε^2 term so our expected loss is not influenced by the uncertainty in our belief about Sbar, and the loss-from-mean-error term depends only on Si and S* and not S0.
Now, our expected loss is given by taking the ensemble mean of the above, which means the ε term drops out with the assumption that it is zero-mean. That gives <Li> = (Si-S0)^2 + (Si-S0)*(S0-S*), which in turn is minimized for Si = 1/2*(S0+S*).
Now, this is our "irrational expectations" solution, or what we get if we assume everyone else is an idiot. We can extend this result to partially rational expectations by modifying our prior. Rather than assume Sbar = S0 + ε, we'll assume that Sbar = (1-γ)*S0 + γ*Si + ε, where 0 ≤ γ ≤ 1 and Si is the value we get by running through the above. We can do so easily by replacing S0 with S' = (1-γ)S0 + γSi, which in turn gives Si = 1/2*((1-γ)*S0 + γ*Si + S*), or after a bit of algebra Si = ((1-γ)*S0 + S*)/(2 - γ). As γ → 1, Si → S* and our expectation about Sbar also → S*.
Better yet, this is a "comfortable" equilibrium. Rationally, if we think that our guess is very far away from S0, our expectation about ε would also increase -- but provided it is still zero-mean it does not affect our expected loss.
This changes for the abnormal loss function. Expanding in terms of ε again gives Li = (Si-S0)^2 + (Si-S0)*(S*-S0) - 2*(Si-S0)*ε - (S*-S0)*ε - (Si-S0)*ε + 2*ε^2 = (Si-S0)^2 + (Si-S0)*(S*-S0) - (3*Si + S* - 4*S0)*ε + 2*ε^2.
Now, under the ensemble mean we get <Li> = (Si-S0)^2 + (Si-S0)*(S*-S0) + 2*<ε^2>, which is an uncomfortable loss function: we expect to lose in proportion to the uncertainty of our expectation. Still, the optimum choice of Si is only given by the deterministic part, giving us 2*(Si-S0) = (S0-S*) or Si = 1/2*(3*S0-S*) for irrational expectations.
Blending our prior with our choice of Si again gives Sbar = (1-γ)*S0 + γ*Si + ε and S' = (1-γ)S0 + γSi, but this time the algebra gives us Si = (3*(1-γ)*S0 - S*)/(2 - 3*γ).
This is terrible news. We still recover Si = S* for γ = 1, but getting there is a problem. If we don't assume that the game is more than 2/3 rational, our optimum play has the opposite sign of the near-fully-rational optimum. Even worse, near γ=2/3 we don't even have a convergent result, so any Si (and by extension Sbar) is a possibility, strongly suggesting that we'd also update our ε to one with higher variance.
In turn, that is extremely uncomfortable, because with this loss function uncertainty about our expectation of Sbar translates directly into a greater expected loss.
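A quick sympy check of the two fixed-point formulas above (a sketch; Ss stands in for S* and g for γ):

    import sympy as sp

    Si, S0, Ss, g = sp.symbols('Si S0 Ss g')
    Sp = (1 - g)*S0 + g*Si                    # the blended prior S'

    solA = sp.solve(sp.Eq(Si, (Sp + Ss)/2), Si)[0]      # normal loss
    solB = sp.solve(sp.Eq(Si, (3*Sp - Ss)/2), Si)[0]    # abnormal loss

    print(sp.simplify(solA - ((1 - g)*S0 + Ss)/(2 - g)))       # prints 0
    print(sp.simplify(solB - (3*(1 - g)*S0 - Ss)/(2 - 3*g)))   # prints 0
    print(solA.subs(g, 1), solB.subs(g, 1))   # Ss Ss: both -> S* at g = 1
    # solB's denominator 2 - 3*g vanishes at g = 2/3, the non-convergent
    # point described above.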
Posted by: Majromax | September 08, 2015 at 01:12 PM
notsneaky: "But it's the sum of Li's, not the sum of si's."
Ah! That's what you meant. I had in mind a simpler model. Firms take turns to announce their prices in advance, then when all firms have announced their prices, there is one period in which customers buy goods, and the Li are revealed, then the game ends. So only one Li for each firm, and no discounting is needed.
"To be able to even begin figuring this out I think you'll probably need to assume that the errors, e(t)'s, are bounded (so they can't be white noise - this is an assumption made in all the NK model which I always thought weird and I've never seen anyone really dig into that)"
Simplest(?) assumption: each firm has a probability p of choosing the loss-minimising Si, and a probability 1-p of a trembling hand, where Si has a uniform distribution between the upper and lower bounds. We solve for the equilibrium, then take the limit as p approaches one, and see if it approaches S*.
Posted by: Nick Rowe | September 08, 2015 at 05:43 PM
Nick, wouldn't that be easier to solve since you could just do it by backward induction (for finite #, then take limit)?
There's N players. Nth player observes all the previous values. Chooses sN based on current average plus expectation of error. N-1 th player anticipates Nth player's choice (in expectation), chooses sN-1. And so on. Going all the way back you're going to get
s1=s*+f(N)sigma where f(N) is some function of N (I'm guessing a ratio of polynomials of Nth degree in N) and sigma is the variance of the error term (assuming iid. It'd be a real mess if errors were autocorrelated, but more interesting also). Then you take the limit f(N) as N goes to infinity. Or compute s(t,N)-s* and see if this gets bigger or smaller as t goes to N.
With a finite number of players we don't have to worry about the errors being bounded either since it's a finite number of equations in the same number of unknowns.
Tedious, very tedious, but doable.
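Here's a sympy sketch of the first two steps of that backward induction for N=3 in game B, under the assumptions above (additive mean-zero errors with variance sig2, each player minimizing expected end-of-game loss); larger N would be ground out the same way:

    import sympy as sp

    s_star, sig2 = sp.symbols('s_star sig2')
    e1, e2, e3 = sp.symbols('e1 e2 e3')
    s1, s2, s3 = sp.symbols('s1 s2 s3')

    def expect(expr, err):
        # E[expr] for expr quadratic in err, with E[err] = 0, E[err^2] = sig2
        return expr.subs(err, 0) + sp.Rational(1, 2)*sp.diff(expr, err, 2)*sig2

    r1, r2, r3 = s1 + e1, s2 + e2, s3 + e3
    sbar = (r1 + r2 + r3) / 3
    loss = lambda r: (r - sbar)**2 + (r - sbar)*(s_star - sbar)   # game B

    # Last mover: observes r1, r2; expectation taken over own error e3.
    s3_opt = sp.solve(sp.diff(expect(loss(r3), e3), s3), s3)[0]
    print(sp.simplify(s3_opt))   # (5*(r1 + r2) - 6*s_star)/4, with r = s + e

    # Second mover: anticipates s3_opt, observes r1; expectation over e2, e3.
    EL2 = expect(expect(loss(r2).subs(s3, s3_opt), e3), e2)
    s2_opt = sp.solve(sp.diff(EL2, s2), s2)[0]
    print(sp.simplify(s2_opt))   # linear in r1 = s1 + e1; one more step back
                                 # would give the first mover's choice

Note the 5/4 > 1 coefficient in the last mover's rule: past deviations get amplified, which is the game B instability showing up in the induction.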
Posted by: notsneaky | September 09, 2015 at 12:40 AM
notsneaky: "Nick, wouldn't that be easier to solve since you could just do it by backward induction (for finite #, then take limit)?"
Yes. But as you say, tedious.
Or, maybe some proof by contradiction?
Or, eyeballing that reaction function graph for game B, start with p=1 (where p is now the probability of a trembling hand, with S uniformly distributed between Sl and Su when it trembles), then slowly reduce p, and watch the distribution start to mass up at Sl and Su, because the reaction function is non-linear, so if E(Sbar) is anywhere near Sl (or Su) you play Sl (or Su) if your hand does not tremble. So in the limit, as p approaches zero, the distribution is bimodal at Sl and Su. (I'm not sure if that was clear.)
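A quick numerical sketch of that eyeball argument (an adaptive best-response dynamic with trembling firms, not the one-shot equilibrium itself; numbers illustrative: S* = 2, Sl = 0, Su = 4):

    import random

    # Each round, every firm best-responds to last round's realized Sbar,
    # except a fraction q who tremble and play uniform on [Sl, Su].
    def run(b, q, rounds=200, n=500, s_star=2.0, lo=0.0, hi=4.0, rng=None):
        rng = rng or random.Random(0)
        sbar = s_star
        for _ in range(rounds):
            plays = [rng.uniform(lo, hi) if rng.random() < q
                     else min(max(s_star + b*(sbar - s_star), lo), hi)
                     for _ in range(n)]
            sbar = sum(plays) / n
        return sbar

    rng = random.Random(42)
    print(sorted(round(run(1.5, 0.05, rng=rng), 2) for _ in range(10)))
    # b = 1.5 (game B): every run ends up near one of the bounds, not at S* = 2
    print(sorted(round(run(0.5, 0.05, rng=rng), 2) for _ in range(10)))
    # b = 0.5 (game A): every run stays pinned near S* = 2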
Posted by: Nick Rowe | September 09, 2015 at 05:34 AM
It's gonna be some ratio of polynomials. So you could do the case N=3 (I think N=2 might not work), N=4, maybe N=5 and see what the sequence of the coefficients is.
Posted by: notsneaky | September 09, 2015 at 09:41 AM
Nick, I’m not sure what you meant in the OP by an equilibrium being more or less plausible. Plausibility is not a feature of a mathematical result. And I haven't understood all the math that followed.
But if what you meant was that the games are supposed to represent stylized models of the real economy in equilibrium, and that game A was a more plausible model of the real economy than game B, then I think the argument is pretty clear. It is not plausible to represent the economy as being in a state which it would not be in if it experienced small shocks, because the real economy is always experiencing small shocks.
Posted by: Heath White | September 10, 2015 at 11:50 AM
Heath: solving for the Nash equilibrium is simply math. But in some cases, game theorists find the Nash Equilibrium to be...implausible. For example, where someone makes a conditional threat "if you do A I will do B" that it would not be rational to follow through with. It all has to do with counterfactual conditionals -- what one player would do if another player made a non-equilibrium move. The idea behind trembling hand perfect equilibria is to deal with those counterfactual conditionals, by supposing there is a vanishingly small probability that every possible move (whether rational or not) does get made.
Posted by: Nick Rowe | September 10, 2015 at 06:05 PM
Nick,
I believe that both ARE in fact trembling hand perfect. The issue with trembling hand perfection is whether or not you can shrink the chance of error arbitrarily and have an equilibrium approach the equilibrium you have in mind. In both cases I believe you CAN do this and that means that both interior equilibria are THP. Essentially, this is because if there is some small chance of each other player choosing a value of s not equal to s*, then
1. If the probability distribution of their errors is symmetric around s*, I think the best response will still be s* (didn't do the math but this seems right).
2. Even if it is NOT symmetric and you know there is a greater chance of a higher or lower sbar, your best response will still be "in the neighborhood" of s* and it will get closer to s* as you shrink the probability of error.
The way to go about the analysis (I believe) is to do something like this: Let each player select a target st(i). Then let sbar=average[st(i)+e(i)] where e(i) is distributed somehow. Then solve for an interior NE. Then collapse the distribution of e(i) to zero and see if you have an equilibrium which approaches s*. I suspect you will.
In order to eliminate an equilibrium as not THP, you basically need a tiny possibility of error to make everyone's best response jump to one of the extremes. You have it moving away slightly. In one game the best response will be between s* and sbar and in the other it will be outside of that range but in either case, it will approach s* as the expected error approaches zero (again, I believe). I think what you really want is to invent another equilibrium concept. This could be a big deal if you could. I will think about it some more. If I come up with anything, I'll let you coauthor :)
Posted by: Mike Freimuth | September 15, 2015 at 03:03 AM
For what it's worth, here is a variation where I think you can use THP to eliminate the middle equilibrium.
Assume the number of players is so large that any one player cannot affect sbar (or else assume that their payoffs are dependent on sbar-i, which is the average of all other players' s). Then let players try to maximize (it's easier for me to think in terms of maximizing for some reason, but obviously you can take the negative and minimize it) U=(sbar-s*)^2(si-sbar)^2. Then if everyone plays s*, they will all be indifferent between all s and so it will be a NE. However, if there is any small chance that others will make a mistake, then everyone will want to choose sl or sh. It won't converge to s* for any convergence of the error (that I can see).
I doubt this is of any use to your point about macro (off the top of my head, I can't think of an analogous slight variation on the payoff function that would be THP) but it might help to illustrate the concept of THP. An interesting, though probably off topic aspect of this is that you could also have a NE where half the people chose sl and half chose sh (or one in which everyone mixed with probability 1/2) (this is assuming that s* is half way between sl and sh but you get the idea). This equilibrium, I suspect (and the degree of speculation here is increasing dramatically) WOULD be THP. And yet, you would still have a case where if one person defected to the other side, everyone else would want to follow.
For the record, I am doing this very cavalierly so mistakes are likely.
Posted by: Mike Freimuth | September 15, 2015 at 03:38 AM
Mike: thanks for this.
I will come back to it later today.
Posted by: Nick Rowe | September 15, 2015 at 06:31 AM
Mike: thanks again.
Some thoughts:
1. I understand your variation, in your second comment, but it doesn't really work to make my point about macro. Because in my game B, each individual has an interior optimum for any given Sbar, as long as Sbar does not get too close to Sl or Su.
2. "Even if it is NOT symmetric and you know there is a greater chance of a higher or lower sbar, your best response will still be "in the neighborhood" of s* and it will get closer to s* as you shrink the probability of error."
I think I get what you are saying there, but we must be careful. It is important that the trembling hand error could be large, even if the probability of that error is small. Anything is possible, even if improbable. That's why I want to think in terms of a uniform distribution of errors, when an individual's hand trembles.
3. Let me sketch my first thoughts towards a proof that S* is not THP:
Let the slope of the reaction function be "b". (So b=1.5 in my Game B)
Assume the players move sequentially (each observes previous moves) in a one-shot game.
Assume only one player has a hand that trembles, but it could be any player. Call that player t, and let that player be t in line to move. So 1 < = t < = n. And player t makes an error e.
Suppose that S* is a THP equilibrium (I'm trying to derive a contradiction).
If t=n (the trembling hand player is the last player) then all other players choose S*, and so Sbar=S* + e/n.
If t = n-1, then the last player will also deviate from S*, so Sbar = S* + e/n + (be/n)
If t = n-2, then the last two players will also deviate from S*, and the second from last player will know the last player will deviate, so Sbar = S* + e/n + 2(be/n) + 2(b^2e^2/n^2) + 2(b^3e^3/n^3) + etc. [I think I got that right.]
Now for general 1 < t < n (with large n), we know that if 0 < b < 1 that infinite sequence converges to Sbar = S* + [(n-t)/n][1/(1-b)]e, (just like the Old Keynesian multiplier, where a fraction [(n-t)/n] of the population respond to the shock).
But if b > 1 (like in my Game B), that infinite sequence may not converge to a finite number. Which means that Sbar does not approach S* in the limit as e approaches zero, provided the player whose hand trembles comes early enough in the sequence. In fact, if t=1 (the first player's hand trembles), then b > 1 is sufficient for non-convergence.
Which contradicts my initial assumption that S* is THP.
I *think* I've got that (roughly) right.
Posted by: Nick Rowe | September 15, 2015 at 09:26 AM
I think what you are saying here makes sense, it's just not the definition of THP; it's something else altogether. Here is from Wikipedia.
"First we define a perturbed game. A perturbed game is a copy of a base game, with the restriction that only totally mixed strategies are allowed to be played. A totally mixed strategy is a mixed strategy where every pure strategy is played with non-zero probability. This is the "trembling hands" of the players; they sometimes play a different strategy than the one they intended to play. Then we define a strategy set S (in a base game) as being trembling hand perfect if there is a sequence of perturbed games that converge to the base game in which there is a series of Nash equilibria that converge to S."
You are making a variation which is a dynamic game, and you are talking about what it converges to (or doesn't converge to) over time. That is a different convergence from the convergence of the equilibria of the perturbed game (the one with errors) to those of the base game as the trembles shrink. This means that you can't assume a uniform distribution of errors, because you have to make it converge to the base game. (Although, I'm not sure that matters for my point anyway. If it's a simultaneous move game then a small probability of a large error will still cause what I would call a "local" shift of the equilibrium, unlike my game where a small chance of error causes everyone to jump to the extreme values. That's what you need to eliminate something with THP. So notice that the reaction function in my model is not smooth: it is vertical at s* and horizontal everywhere else.)
I realize that the convergence of s over time is what you are actually interested in, so you are probably approaching it along the right lines, but it's just not a case of THP. Once you do it as a dynamic game with sbar (potentially) changing over time, I think you may actually be able to use simpler terms. I'm not really sure what those terms are, but it seems like that becomes something a lot like what we did in first year macro (also in diff eq/linear algebra) where you draw those graphs with time on the horizontal and s (in this case) on the vertical, and you fill in arrows everywhere showing all the paths over time depending on where you start, and you get some horizontal lines that are "equilibria" but sometimes the arrows funnel into them and sometimes they funnel away from them. Then you call them "stable" or "unstable" or something like that. It seems like what you need to do is get the setup of the game right so that you can describe it in those terms and not need THP.
As an aside, you don't really run across THP that often in the literature. I suspect it's because it's kind of a thorny concept due to the "if there is A sequence of games that converges" part of the definition. It can be kind of annoying to prove that no such sequence of games exists when a game gets even moderately complicated. Plus it isn't really that strong in the end (it basically only rules out a pretty specific type of equilibrium). A lot of the stuff that you kind of feel like should be ruled out it doesn't eliminate (as in your case).
Posted by: Mike Freimuth | September 15, 2015 at 03:04 PM
Thanks again Mike.
"You are making a variation which is a dynamic game and you are talking about what it converges to (or doesn't converge to) over time."
I am not sure if we understand each other on that point. The NK macro model is indeed a dynamic game, that continues forever, with each player making multiple moves. But the game I am considering here is a very simplified stylised version of the NK game. My game has only one "period"; each player moves only once, but they take turns to move. So I am not really considering convergence over time. Firm 1 sets its price, firm 2 sets its price,...,firm n sets its price, then the customers observe all the prices, and decide how much to buy from each firm, and then time ends.
Posted by: Nick Rowe | September 15, 2015 at 07:59 PM
OK, I see what you mean, but in that case it's not even clear off the top of my head what is an equilibrium in the base game. (Note that this is a significant change from the game in the OP.) Once you know that, to determine whether it is THP or not you would still have to define a perturbed version of that base game and solve the perturbed game for a NE and see if it converges as the rate of error gets small. This might be difficult.
It seems like you might actually want to do something like what I was thinking. For instance: Each period one (or some number of) firm(s) set their price and receives a payoff that period which is a function of their price relative to the average price of all other firms who acted previously. Then you will have the case where, if their optimal action is to price in between s* and sbar, it will converge over time to s*. If their optimal action is to price outside of that range, it will diverge. You could make their payoff depend on actions taken after they move also (as you mentioned in a previous comment) but then it starts getting complicated unless you can come up with some kind of recursive relationship that will reduce it to something convenient. It seems like this might be kind of close to what already occurs in a NK model but I'm not fully clear on that or on exactly how your fundamental point fits into that analysis.
When I look at this, I kind of feel like there may be a policy application as well. Every month central banks set policy sequentially after observing the policy choices of each previous central bank, etc. If the FED sets policy "too tight" or "too loose" what effect does that have on Canada's optimal policy? Does that effect cause some kind of spiral toward the zero bound on one side or hyperinflation on the other? (For the record I think not, but that's because I already think I know what causes a downward spiral toward the zero bound and it's something different.)
Posted by: Mike Freimuth | September 15, 2015 at 08:23 PM
Mike, in the game I have in mind, Sbar is the (expected) average S of *all* the firms, both those that have already set S and those that have not yet set S.
Suppose n=101. The 51st firm will set its S = S* + b[0.5(average S of the previous 50 firms) + 0.5 E{average S of the remaining 50 firms} - S*].
In other words, Sbar is partly backward-looking and partly forward-looking.
Posted by: Nick Rowe | September 15, 2015 at 09:51 PM
Yeah, I think I get what you have in mind now. But do you know what the equilibrium(a) is (are)?
Posted by: Mike Freimuth | September 15, 2015 at 10:18 PM
Mike: the Nash Equilibria are S*, Sl, and Su. But I think the only THP equilibria are Sl and Su (the lower and upper bounds on S).
Translated into macro-speak, the lower and upper bounds are when the central bank changes the game (adopts QE, for example) because inflation has got too low or too high.
Posted by: Nick Rowe | September 16, 2015 at 03:54 AM
Nick Rowe: "in the game I have in mind, Sbar is the (expected) average S of *all* the firms, both those that have already set S and those that have not yet set S."
If you mean "expected" in the mathematical sense, the expected average may not exist.
Posted by: Min | September 16, 2015 at 11:35 AM
Nick, I've played around with this and the closest I can get to it is something like this:
There are N players from 1,...,N. Each player makes an optimal choice based on previous choices and anticipating the choices of those who come after them. e(i) is the error made by player i, s(i) is their choice. The optimal strategy of player N-t is then
s(N-t) = a(N-t) Sum_{i=1,N-t-1} (s(i)+e(i)) + b(N-t) s*
where a(N-t) and b(N-t) are ratios of polynomials in N. a(N-t) is a t+1 degree polynomial divided by a t degree polynomial - but there's also a sum of N-t-2 e's and s's there so effectively you have something that's t+1 degree over t+1 degree. b(N-t) is t degree over t degree. So if you take limits there with respect to N, they'd go to a constant.
So it's like an ARMA process except the coefficients depend on N and t. If you solve that backwards you get something like
s(N-t) = Product_{i=1,N-t-1} (1+c(i)) e(i) + b(N-t) s*
where c(i) is a polynomial ratio of the same degrees as the a's. Note the 1 in the parentheses. This means that all the errors get amplified by subsequent choices. It's an explosive process.
The one weird thing, which I mentioned above, is that I keep getting that lim b(N-t) as N goes to inf is 1/2, not 1. But maybe I'm dropping something somewhere.
Posted by: notsneaky | September 16, 2015 at 02:00 PM
Nick, we may be talking past each other a bit here so let me back up and run down what we have here as I see it.
1. In the OP you have a simultaneous move game which is fairly simple in which the NE are the three that you mentioned in the post as well as your last comment.
2. What you describe in the comments is a different game in which firms move sequentially. This is a significant difference from the original game described in the post. In this game you need to find a sub-game perfect NE in which players predict the actions of future players. Note also that you can't really use a traditional game tree since the strategy space is continuous. However, this game is probably manageable and the equilibria you describe probably are equilibria *when there are no errors.*
3. Whether they are THP or not requires you to solve a perturbed game in which they make mistakes with some probability. This is the game that will be difficult to solve. It might not be prohibitively difficult but off the top of my head, it's not clear that it isn't. Then when you shrink the probability of error to zero, if there is an equilibrium in the perturbed game that approaches the equilibrium in the base game, that equilibrium is THP. (Note that it is THP if there is ANY way to collapse the distribution of errors to zero which causes the equilibrium to approach the proposed equilibrium.)
Note also that the equilibrium to the sequential move game with errors will specify a contingent plan of action for each player dependent on sbar when it is time for them to act. This plan may be different for each firm depending on their position. This means that the equilibrium path of sbar may not be deterministic (it may be a function of the errors). You need to distinguish between a tendency of sbar to drift toward s* or away from s* as the game progresses from the tendency for the equilibrium *strategies* to converge to the proposed equilibrium strategies in the base game as the probabilities for error approach zero. I suspect (though I may be wrong) that the former is more along the lines of what you have in mind. The latter is what determines THP. I suspect (and this is very speculative) that if you can solve the perturbed game, all three equilibria in the base game will end up being THP. It will probably be somewhat difficult to solve though.
Posted by: Mike Freimuth | September 16, 2015 at 08:20 PM
P.S. This may seem nitpicky but it's usually worth the trouble of being specific about things like this in these game theory conversations. The equilibria in the sequential move game (with no errors) wouldn't be s*, sl, and su. They would be something like: play s* if sbar' = s*, play sl if sbar' < s*, and play su if sbar' > s*, for all firms after the first, where sbar' is the average s up to that point, and then play either s*, sl, or su for the first firm. (I'm doing this very casually, so I may not be getting the equilibria right, but the idea is that you have to specify a complete plan of action for each firm contingent on the information they have at the time.) This likely means that the first firm will determine the actual path (probably a constant level) of s, and so if one level is preferable to the first firm, they will be able to select that level. This will actually probably rule out s* as a solution to this game altogether.
Posted by: Mike Freimuth | September 16, 2015 at 08:29 PM
Mike: "1. In the OP you have a simultaneous move game which is fairly simple in which the NE are the three that you mentioned in the post as well as your last comment.
2. What you describe in the comments is a different game in which firms move sequentially. This is a significant difference from the original game described in the post. In this game you need to find a sub-game perfect NE in which players predict the actions of future players. Note also that you can't really use a traditional game tree since the strategy space is continuous. However, this game is probably manageable and the equilibria you describe probably are equilibria *when there are no errors.*"
Agreed. Except, in the OP, I did also consider the sequential variant on the simultaneous game (see the bolded paragraph in the OP). And on thinking it over, in the comments, I came to the conclusion that the sequential variant is probably the more promising game from which to explore the question of THP. (Plus, the sequential variant is closer to the NK macro model, so it's more interesting for that reason too.)
"You need to distinguish between a tendency of sbar to drift toward s* or away from s* as the game progresses from the tendency for the equilibrium *strategies* to converge to the proposed equilibrium strategies in the base game as the probabilities for error approach zero. I suspect (though I may be wrong) that the former is more along the lines of what you have in mind. The latter is what determines THP."
Good point. Agreed. I think I was trying to do the latter, though I may not have been clear on this.
Posted by: Nick Rowe | September 16, 2015 at 09:36 PM
Mike: "This likely means that the first firm will determine the actual path (probably a constant level) of s and so if one level is preferable to the first firm, they will be able to select that level."
Hmmm. Good point.
Posted by: Nick Rowe | September 16, 2015 at 09:42 PM
notsneaky: The way you are setting up the problem sounds right to me. I'm not following the math, but that's my fault.
Could you take a look at my September 15 09.26 am comment please, where I make my own attempt at the math. Mine looks simpler, (if it's right).
Posted by: Nick Rowe | September 16, 2015 at 09:45 PM
Hold on, some stuff coming up...
Posted by: notsneaky | September 16, 2015 at 10:16 PM
Sept 15 09.26 looks right (for big N), although I think it's important to separate out the choice made by a person, s(i), from the outcome, say r(i), which is the sum of the choice and the error r(i)=s(i)+e(i). So that should be almost right, except that person i+1 will take into account the past errors that have been made (all the e(j) for j < i) but not the future errors that will be made (assuming those are mean zero). So their choice will be s* + weighted average of past s and future s + weighted past errors. Hence a person further down in the "queue" will have more past realized errors to deal with.
Posted by: notsneaky | September 16, 2015 at 10:22 PM
Ugh, inequality cut off comment again.
...all the e(j) for j less than i. So the actual optimal choice is something like s=(a1)s*+(a2)(past s + past e). a1 and a2 are coefficients. The reason why "future s" do not appear in there is because they can be solved out (i.e. they're there, we just iterate until they're gone).
Posted by: notsneaky | September 16, 2015 at 10:25 PM
Nick, I get what you are saying, I'm just trying to be clear about the various versions we are kicking around here, I'm not trying to play gotcha regarding who said what when. My main point is just that you need to be pretty careful about extending the equilibrium from the simultaneous move version to the sequential move version. Things can change a lot and when you try to add errors and do THP, it gets complicated pretty quickly. Ultimately I don't think THP is the concept you need to make your point.
Posted by: Mike Freimuth | September 16, 2015 at 10:27 PM
Then actually since we know that if nobody ever makes any errors, the optimal choice is in fact s(i)=s* for all i, we got to have a2=(1/(i-1))*(1-a1). Those a1's are functions (rational functions) of N and i. We can also figure out that the very first person to move will choose s(1)=s*. This is because no errors have been made yet and all future errors are zero in expectation, hence s* is the best they can do in expectation. But after that the realized errors start accumulating...
Posted by: notsneaky | September 16, 2015 at 10:30 PM
Ok. Let me give it another stab. Correct some things and be less sloppy.
First, as Mike says, this has nothing to do with a Trembling Hand Equilibrium, nor does it have anything to do with some kind of special new equilibrium. It's just a regular sequential game with uncertainty in it, with a boring Subgame Perfect Nash equilibrium (in expectation). If you draw the game tree out then you have Player 1 make a move, then a player called "Nature" makes a move, then Player 2 makes a move, then Nature makes a move, then Player 3 makes a move, and so on. But because we want to consider the case where the number of players, N, is large, computationally it's a big pain. Actually, since it involves ratios of polynomials, and since each player anticipates future moves by others, the order of these polynomials increases quite quickly, so even in a low-N case, like 4, it's messy.
Second, the question of whether s(i)=s* is an equilibrium is ill posed. A strategy here is a contingent plan: "If nature chooses e(1), e(2), ..., e(i-1), and previous players choose s(1), s(2), ..., s(i-1), then I will choose s(i); if nature chooses different e's and players choose different s's, I will choose a different s(i); etc." So is s(i)=s* an equilibrium? Of course not, unless all the errors that have already happened are zero.
So we have to rephrase the question in a way that makes sense. For example, "before the game begins is the expected choice of person i, Es(i)=s*"? (Actually yes if errors are mean zero). Or even "after person 1 has made a move and nature has chosen the first error, is the expected choice of person i, for i>1, Es(i)=s*? If not, given e(1), how much does it deviate from s*? Does this deviation increase with i and N? Does the variance of s(i) increase? Will the errors cancel out in s(i) for big enough i and N? Given that people make errors can we expect that the choices of s(i) will stay within some neighborhood of s*? Or will they - even if the errors themselves are bounded - diverge further and further out? Etc.
Ok, math next. But I'll post that later so as not to flood the comment thread.
Posted by: notsneaky | September 16, 2015 at 10:37 PM
notsneaky, you're getting closer to the right approach here but this part is a little off:
"before the game begins is the expected choice of person i, Es(i)=s*"? (Actually yes if errors are mean zero). Or even "after person 1 has made a move and nature has chosen the first error, is the expected choice of person i, for i>1, Es(i)=s*? If not, given e(1), how much does it deviate from s*? Does this deviation increase with i and N? Does the variance of s(i) increase? Will the errors cancel out in s(i) for big enough i and N? Given that people make errors can we expect that the choices of s(i) will stay within some neighborhood of s*? Or will they - even if the errors themselves are bounded - diverge further and further out? Etc."
You need to start at the end and ask "what is the last firm's optimal behavior, contingent on sbar at that point?" Then you have to go one person *forward*. This will be kind of annoying because they need to anticipate the action of the last person being what you found as a result of the first question. However, this will be probabilistic if there are errors. So you have to figure out the *expected* behavior by the last person given any choice by the second to last player, and then find the optimal strategy by the second to last player *for every value of sbar up to that point*. Now go forward one more person and do this again. But, of course, every time you go one person forward, figuring out the probabilities of everything that could happen in the future becomes more complicated. It won't take long for this approach to become unwieldy. If you really want to solve it, I would do it with two people. Then try doing it with three people. At some point you may notice a pattern that you can extend to an n-person game. If you don't figure out such a pattern, a brute-force approach with a large n will probably be pretty difficult.
Posted by: Mike Freimuth | September 17, 2015 at 12:33 AM
I skipped this part because I forgot that it's not necessarily obvious, but when you get through all of the last n-1 actors, you figure out what value of s maximizes the first guy's expected payoff (or minimizes their loss function or whatever) and then everything follows from that. This is why you can't start with "before the game begins, what is the expected choice of player 1..." The behavior of player 1 is not random, it's determinate (the error is random, but that's something else); you just have to figure out what everyone else will do in every situation in order to actually determine it.
Posted by: Mike Freimuth | September 17, 2015 at 12:38 AM
First some notation since it's hard to write math in a blog comment. N players. s_{N-t} is the choice made by player who moves t periods before the last person. So s_{N} is the last person to move, s_{N-1} is the next to last person ... up to s_{1}. Likewise e_{N-t} is the error made by the person who moves t periods before the last one. Then define A_{N-t} as the sum of all s_{i} for i up to N-t, and likewise, V_{N-t} the sum of all e_{i} for i up to N-t. I'm going to use capital S instead of s*, so that I can use * as a multiplication sign. a_{N-t}, b_{N-t} and c_{N-t} (lower case) are going to be coefficients which depend on N and t.
We solve it by backward induction. First we minimize the loss function of the Nth person who gets to observe all the previous choices and errors. The optimal choice is
s_{N}=((3N-4)*(A_{N-1}+V_{N-1})-N*(N-1)*S)/(2*(N-1)*(N-2))
Person N does anticipate that they will make an error, e_{N}, but if the error is mean zero and uncorrelated with previous errors, then this will not affect their choice.
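(As a sanity check, here is a short sympy sketch of that first-order condition under the Game B loss, writing T for A_{N-1}+V_{N-1}, the sum of the N-1 realized outcomes; an illustration only:)

    import sympy as sp

    s, T, S, N, e = sp.symbols('s T S N e')
    sbar = (T + s + e) / N                        # average outcome, last mover included
    loss = (s - sbar)**2 + (s - sbar)*(S - sbar)  # Game B loss for the last mover
    # For the first-order condition we can set e = 0: e is mean zero, and it
    # enters the expected loss only through a variance term that doesn't involve s.
    sol = sp.solve(sp.Eq(sp.diff(loss.subs(e, 0), s), 0), s)[0]
    claim = ((3*N - 4)*T - N*(N - 1)*S) / (2*(N - 1)*(N - 2))
    print(sp.simplify(sol - claim))               # prints 0: the formula checks out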
So we have s_{N} as a function of s_{N-1}, s_{N-2}, ..., s_{1}, and also the errors and S. We take this s_{N} and plug it into player N-1's loss function and minimize that with respect to s_{N-1}, making sure not to forget the effect of s_{N-1} on s_{N}, which is (3N-4)/(2*(N-1)*(N-2)). This is where it starts getting messy, but basically we get s_{N-1} as a function of s_{N-2}, s_{N-3}, ..., s_{1} as well as all the errors up to N-2 and S. To fully solve it we would keep doing this, but the algebra gets very convoluted. However we can figure out what kind of functions the coefficients on all the lags, for s's, e's and S, are. For s_{N-1} we have
s_{N-1}=(a_{N-1})*(A_{N-2}+V_{N-2})-(b_{N-1})*S
We can check that if all the past errors are zero then s_{N}=s_{N-1}=S, so we're on the right track.
a_{N-1} and b_{N-1} are ratios of polynomials in N. Let's leave them alone for now.
More generally, for s_{N-t} we have
s_{N-t}=(a_{N-t})*(A_{N-t-1}+V_{N-t-1})-(b_{N-t})*S
So it's like an ARMA(N-t,N-t) process with time varying coefficients. If we take limits of a_{N-t} and b_{N-t} as N goes to infinity they converge to some constants (possibly 0), which means that for large N we could treat this as just a regular ARMA process. Anyway, iterating backwards we get
s_{N-t}=[(a_{N-t})*Product_{i=1..N-t-1}(1+a_{N-t-i})]*s_{1}-{big coefficient}*S+MA_{N-t}
where we don't have to worry about {big coefficient} too much for reasons explained right below and MA is the moving average term that has all the errors from 1 to N-t-1 in it.
So we have s_{N-t} as a function of N, t, all the errors up to N-t-1 and the first choice made, s_{1}. But we know that if all the errors are zero then the optimal choice is always S, so we have to have
s_{N-t}=[big mess]*s_{1}-[big coefficient]*S=S
Since this has to hold for t=N-1 (i.e. for the first mover), and there are no errors before person 1, we know that s_{1}=S. So [big coefficient]=[(a_{N-t})*Product_{i=1..N-t-1}(1+a_{N-t-i})]-1.
Okay, let f(N,t)=(a_{N-t})*Product_{i=1..N-t-1}(1+a_{N-t-i}). So we have so far
s_{N-t}=f(N,t)*s_{1}+(1-f(N,t))*S+MA=S+MA
So we've reduced the choice of player N-t to just the value S and all the errors that have been made before they move.
What is this MA term? Again, iterate to get
MA_{N-t}=(a_{N-t})*(e_{N-t-1})+Sum_{i=2..N-t-1} of c_{N-t-i}*e_{N-t-i}
That still leaves the coefficients in the sum, c_{N-t-i}. These are given by
c_{N-t-i}=(a_{N-t})*Product_{j=1..i-1}(1+a_{N-t-j})
Like I said, it's a big mess. But that's pretty much the solution right there.
Now we can take expectations. In particular note that if we take the expectation "before the game starts" then Es_{i}=S. The errors are unpredictable and mean zero so that's not surprising. If somehow every player had to commit to a choice before observing errors and others' choices, they'd choose S.
The more interesting question is what happens as N goes to infinity to the coefficients and their product, and what happens to the variance of s_{N-t} as t approaches N. Still working on that...
Posted by: notsneaky | September 17, 2015 at 12:48 AM
Let me try to clarify/re-state/modify my own view, in the light of Mike's and notsneaky's comments:
Assumptions: sequential moves game; the number of players N is very large, so each player ignores the effect of his S on Sbar; each player has a reaction function S = S* + bE(Sbar-S*); Nature moves only once, at some time T in the sequence of moves, adding a mean-zero shock e to Sbar ("Nature trembles player T's hand").
For 0 < b < 1 the equilibrium is straightforward. All the players who move before Nature moves play S=S*, and all the players who move after Nature moves play S=S*+b(Sbar-S*) where Sbar=S* + [(N-T)/N][1/(1-b)]e. Notice that Sbar approaches S* in the limit as e approaches 0. (So I want to say that S* is a THP equilibrium for 0 < b < 1.)
But, as b approaches 1, [1/(1-b)] approaches infinity, so we can no longer say that Sbar approaches S* in the limit as e approaches 0.
And, if b > 1, it is not true that the equilibrium Sbar is well-defined and approaches S* in the limit as the variance of e approaches zero. (So I want to say that S* is not a THP equilibrium for b > 1.)
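Here is a toy numerical way to see that contrast. Just a sketch of the expectational feedback, treating the shock as added straight to Sbar, with q standing in for (N-T)/N:

    # Toy fixed-point iteration for the post-shock deviation x = Sbar - S*:
    # a fraction q = (N-T)/N of players respond with slope b, so expectations
    # update as x <- e + q*b*x. This settles down iff q*b < 1.
    def iterate(b, q=0.99, e=0.01, rounds=60):
        x = 0.0
        for _ in range(rounds):
            x = e + q * b * x
        return x

    for b in (0.5, 1.5):
        print(b, iterate(b))
    # b=0.5: settles near e/(1-q*b), so Sbar-S* shrinks with e (the THP-ish case)
    # b=1.5: q*b > 1, so the deviation explodes however small e is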
Remember to put a space either side of < , to stop Typepad having conniptions.
Posted by: Nick Rowe | September 17, 2015 at 06:07 AM
Mike, yes, that's what I did in the last comment. In the comment you responded to I was essentially skipping ahead to the solution. After we iterate backwards (or 'forward', as you say), each person's optimal strategy involves reacting to the choice of the first person (the person who moves first, not the person whose optimization problem we solve first) and all the errors that have been made up to that point. But for person 1, no errors have been made yet, so their best choice, in expectation, is to just choose s*.
Posted by: notsneaky | September 17, 2015 at 09:37 AM
Nick, the difficulty with saying "N is large" before solving the game - as opposed to solving the game, then letting N be large - is that even though one person's choice will not affect the average *in that period*, a forward looking player will realize that their choice will affect the choice of everyone who moves after them. Like in a grocery store queue: if I put an extra item in my basket, that increases not just my time in the queue but also the time in the queue for everyone standing behind me.
So even though my choice only changes the average today by something like constant/N, it changes it N-i times (everyone who's still going to move) overall, so the total effect is something like constant*(N-i)/N, which is no longer infinitesimal even as N goes to infinity.
Posted by: notsneaky | September 17, 2015 at 09:41 AM
Also - "All the players who move before Nature moves play S=S*" - just call all those players who move before Nature moves "Player 1" and let Nature move 2nd.
This is also a special case of the game I wrote out above where we just set all but one of the errors equal to 0
Posted by: notsneaky | September 17, 2015 at 09:45 AM
One more comment for now (sorry, I'm having fun with this).
Mike, the case N=2 won't work because then any s is an optimal strategy. I think someone pointed this out above. You need at least 3 players, maybe 4 (since the first player's choice is going to be just s*).
Nick, let me see if I understand how you're posing the question. Let's say player 1 moves, then Nature chooses an error, then everyone else moves with no more errors. The question you're asking is "if we go forward to player t's choice will that error made t periods before be amplified or dampened?"
It seems like there are two questions here. One is, what happens to the choice of s as we move "far into the queue". That is, given a fixed N, how much will the choice of person N-t diverge from s* given that there was an error made earlier on? This is basically asking what happens as t --> 0.
The other question is what happens as we let N go to infinity. It's sort of confusing to try and do both things at once.
Posted by: notsneaky | September 17, 2015 at 10:25 AM
Actually come to think of it, having only one error at time T (say, 2) is not going to work here for you Nick. With N large, just like any one player's choice will have a negligible impact on Sbar, so will any single error.
Player 1 moves first and chooses s*. Nature moves and chooses e. Now it's player 2's turn. Suppose player 2 thinks everyone after them will choose s*. The actual average is s*+e/N which is approximately s*. So player 2 ignores e/N and plays s*. Then players 3,... reason the same way. You still have everyone playing s* if there's only one error. EVEN IF that error is NOT arbitrarily small (say, bounded away from zero)
What you need is that *everyone* can make an error so that their (weighted) average can have a chance of being non-negligible.
Posted by: notsneaky | September 17, 2015 at 11:34 AM
Nick,
I'm now pretty sure that I was on target when I said this:
"You need to distinguish between a tendency of sbar to drift toward s* or away from s* as the game progresses from the tendency for the equilibrium *strategies* to converge to the proposed equilibrium strategies in the base game as the probabilities for error approach zero. I suspect (though I may be wrong) that the former is more along the lines of what you have in mind. The latter is what determines THP."
Your latest version imposes reaction functions on the players (rather than finding optimal reaction functions) so once you do that, you're not really talking about a Nash Equilibrium any more, you're just saying "if everyone acted this way, this is what would happen." You're not really saying whether or not it makes sense for them to act that way. This means you can't really do a THP analysis. Such an analysis depends on whether the *strategies* converge to the proposed equilibrium strategies as the probability of error decreases. You have now made the strategies exogenous so this question becomes nonsensical.
This being said, I think I get the point you are trying to make (at least in a limited way; I'm not sure I see the connection to macro) but I think you just need to say "see, in this case sbar approaches s* over time and in this other case, it is likely to wander off to one side or the other." My advice is to forget about THP; I don't think it is the droid you are looking for. But if you just say something like that, I suspect it will make more sense to more people (including economists) anyway.
Posted by: Mike Freimuth | September 17, 2015 at 01:26 PM
Mike: "You need to distinguish between a tendency of sbar to drift toward s* or away from s* as the game progresses from the tendency for the equilibrium *strategies* to converge to the proposed equilibrium strategies in the base game as the probabilities for error approach zero. I suspect (though I may be wrong) that the former is more along the lines of what you have in mind. The latter is what determines THP."
I've got that point. I thought I was doing the latter. At least in the case where b < 1, S jumps up or down when Nature makes a move, but then stays constant for all remaining players.
But I think you may be on target in your next paragraph (beginning "Your latest version...").
Point taken.
Posted by: Nick Rowe | September 17, 2015 at 02:12 PM
notsneaky: "Nick, the difficulty with saying "N is large" before solving the game..."
I was trying to simplify by assuming each player is small compared to the macroeconomy, but I also recognise the problem: when the equilibrium play of the remaining players is very sensitive to one tremble, even a very small player can affect the average by affecting everyone else's play.
Posted by: Nick Rowe | September 17, 2015 at 02:19 PM
I think one - better? - way to formulate it would be something like this. Fix N. N-t is the person who moves t periods from last. Every player can make an error.
Examine the game in expectations. In other words, suppose you're asked to forecast person N-t's strategy before the game actually starts. Your best forecast IS in fact s*, for any t. All the errors are unpredictable, mean zero, and uncorrelated so that is your best guess. BUT the variance of the strategies will increase as t --> 0, at least in the Game B version. So effectively the "standard error" on your forecast s_{N-t}=s* gets larger.
Another way. Before the game starts, ask "what is the probability that [s_{N-t}-s*] will be within +/- delta of zero?". I.e. what is P(-delta < [s_{N-t}-s*] < delta)? Then see what happens to that probability as t gets close to 0 (i.e. as the player's position N-t gets close to N).
[s_{N-t}-s*] is a random variable with mean zero (since the errors are mean zero and uncorrelated) and some variance which depends on N and t, say sigma(N,t). Assume the distribution of e is symmetric. The probability then is 2*F(delta/sigma(N,t))-1. Now let N be "large" and t approach 0. The variance sigma(N,t) will increase so F(.) will decrease so the probability that you are still "delta-close" to s* goes to zero. Basically the interval over which you're integrating that pdf gets smaller and smaller, in standard-deviation units, the further out you go.
This is not true in the version A of the game. I think...
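A crude Monte Carlo is consistent with that hunch. This is a sketch only: it uses the naive large-N reaction rule s = s* + b*(mean past outcome - s*), with b = 0.5 standing in for Game A and b = 1.5 for Game B, rather than the exact subgame-perfect strategies derived above.

    import random, statistics

    def last_outcome(b, n_players=200, sigma=0.01, s_star=0.0):
        """One play-through: each mover reacts with slope b to the mean
        realized outcome so far; every outcome carries a small error."""
        outcomes = []
        for _ in range(n_players):
            m = statistics.mean(outcomes) if outcomes else s_star
            outcomes.append(s_star + b * (m - s_star) + random.gauss(0, sigma))
        return outcomes[-1]

    random.seed(1)
    for b in (0.5, 1.5):
        draws = [last_outcome(b) for _ in range(2000)]
        print(b, round(statistics.pstdev(draws), 4))
    # b=0.5: late movers stay about as tight around s* as the error itself;
    # b=1.5: the dispersion of late movers is roughly an order of magnitude bigger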
Posted by: notsneaky | September 17, 2015 at 03:41 PM
Since we have come this far, I will try briefly to make my point about distinguishing between a convergence of s to s* and a convergence of strategies in the perturbed game to strategies in the base game. The key point to notice is that the s played by a player is not their strategy; it is an action. The strategy is a plan of action stating which s they will play in every possible contingency.
So take your base, sequential game (from the bold paragraph and subsequent comments) with no errors. An equilibrium MAY consist of strategies like: for every player after the first, play s* for sbar'=s*, play s1 for sbar' > s*, and play s2 for sbar' < s*, where sbar' is the average s up to that point; the first player chooses s*. (As I said, I don't think that will be an equilibrium given either of the objective functions proposed in the OP, but let's just pretend it is.) Now when this game is played, everyone will play s* and sbar at all times will be s*.
If you add an error into this game *and everyone plays the same strategies* sbar will shoot off to one side. But the strategies don't change, only the actions change. If these strategies are an equilibrium of the perturbed game then they are a THP equilibrium in the base game. Even though the path of sbar is likely to be different in the perturbed game, that's not the point. (I believe that IS your point, but it is not the point of THP.)
Posted by: Mike Freimuth | September 17, 2015 at 05:00 PM
notsneaky: I like that. It looks promising.
"Now let N be "large" and t approach N. The variance sigma(N,t) will increase so F(.) will decrease so the probability that you are still "delta-close" to s* goes to zero."
I think that's right. Do we know it's right?
"Basically the interval over which you're integrating that pdf gets smaller and smaller the further out you go."
Is that a re-statement of the previous sentence? Or a conjecture? Or a reason to believe the previous sentence?
"This is not true in the version A of the game. I think..."
I think so too. In game A, the effects of the mean zero independent errors should tend to cancel out as N gets large.
Posted by: Nick Rowe | September 17, 2015 at 05:03 PM
Mike: OK. I think I follow you now.
If we can show that in the Neo-Fisherite game (Game B), when the players make arbitrarily small independent errors, the outcome will almost certainly be an arbitrarily large distance away from the Neo-Fisherite equilibrium (for large N), that would be worth showing. It means we can ignore the Neo-Fisherite equilibrium.
Plus, it would maybe be of more general interest in game theory? It would be applicable to any game with an "unstable" (in the old-fashioned sense) equilibrium.
Posted by: Nick Rowe | September 17, 2015 at 05:28 PM
If my algebra is correct then yes we know that. At least for game B. Look at the "MA" term in my post above:
MA_{N-t}=(a_{N-t})*(e_{N-t-1})+Sum_{i=2..N-t-1} of c_{N-t-i}*e_{N-t-i}
This is how previous errors affect person N-t's choice. Square that thing and take the expectation to get the variance of s_{N-t}-s*. Take the derivative with respect to t. I'm pretty sure it works for any N>3, but obviously if there are only a few players then the errors won't have many opportunities to get amplified.
The part I'm not as sure about is what happens in Game A but my intuition is same as yours.
Posted by: notsneaky | September 17, 2015 at 05:37 PM
Nick, yes, I think we are basically simpatico at this point. I do think it's possible that there is some interesting broader game theory application here. I've been trying to think of a way to eliminate the original suspect equilibrium in the simultaneous move game (the sequential move game is too complicated to work with for this purpose I think). So far I haven't made a breakthrough. In the back of my mind is a strong sense that we are not the first people to consider this issue and so the concept we are looking for is probably pretty elusive. Still, it's hard not to ponder.
Posted by: Mike Freimuth | September 17, 2015 at 06:02 PM
Mike, mixed strategies? Then make it into an evolutionary game with types surviving or not?
Posted by: notsneaky | September 17, 2015 at 06:10 PM
Mike and notsneaky: we seem to be converging ;)
The concept we are exploring might(?) be called robustness/fragility. For large N: S* is a "robust" equilibrium in Game A; and a "fragile" equilibrium in Game B. Suitable names?
Posted by: Nick Rowe | September 17, 2015 at 09:26 PM
I think this is analogous to the mixed strategy equilibrium in coordination games. For example, suppose two risk-neutral players each choose left or right, and get a payoff of 1 if they match and 0 if they do not match. As a simultaneous move game there are three Nash equilibria: (left,left), (right,right), and (each plays left with p=1/2). The first two equilibria are 'stable' in the sense that neither player would want to change their move if they knew the other was going to deviate by a small amount. The interior mixed strategy equilibrium does not have this property: each player is only willing to play left with probability 1/2 if they know that the other player will also play left with probability *exactly* 1/2. A tiny bit higher, and the best response is left; a tiny bit lower and the best response is right.
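A tiny sketch makes the knife edge explicit (the expected payoff to left is p, and to right is 1-p, when the other player goes left with probability p):

    def best_response(p):
        """Best response when the other player plays left with probability p."""
        if p > 0.5:
            return 'left'
        if p < 0.5:
            return 'right'
        return 'indifferent'   # mixing is only optimal at exactly p = 1/2

    for p in (0.49, 0.50, 0.51):
        print(p, best_response(p))
    # 0.49 -> right, 0.5 -> indifferent, 0.51 -> left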
Nick, these types of equilibria have always struck me as implausible as well. It is hard to see how real people would find their way to this knife edge balance and stay there. I have always assumed there must be a well known equilibrium refinement which eliminates them, but I haven't seen it yet. I'm no game theorist though - I hope someone on this thread has a definitive answer.
Posted by: Brad | September 17, 2015 at 09:36 PM
Brad: I agree. And I think this is important.
Posted by: Nick Rowe | September 17, 2015 at 09:49 PM
Brad,
OK, I kind of agree, and I feel like I am basically the game theory guy on this thread, so I consider this a pretty informed answer, but it will be far from definitive. As far as I know, there is no such refinement (that's not definitive but it's fairly well informed). But I'm not sure the equilibrium you want to rule out is any more implausible than the other two. If you did an experiment where you put two people in separate rooms and made them choose left or right and told them the payouts you described, you would probably get about half the people choosing left and half choosing right (and if you got more one way or the other, it would probably be due either to random sampling error or to some kind of mental or social bias), but you wouldn't get everyone choosing right or everyone choosing left. This is because there is no way to coordinate on one or the other, and that is a feature of the game as it's defined. If there is some way to coordinate, then it is a different game. For instance, if you have people declare what they are going to do in one stage, then observe what the other one declares, and then choose, that's a different game. Or if two people choose, and then two more people observe what the first two chose and then choose, and that goes on forever, that's a different game. There are many different variations of this game which are kind of similar but significantly different in their structure, and they have significantly different equilibria. If you actually had to play that game, neither one of the pure strategies would be any "better" in any sense than the equilibrium mixed strategy.
So if you are using a simplified game to represent a more complex situation, then it's not the equilibrium concept that is flawed, it's the setup of the game. With all of that said, there may be some concept which could be useful in all of this, but we need to be careful about how to define it. The most obvious way to rule out that equilibrium is to just rule out every mixed strategy NE. Every (maybe not every but nearly every) mixed strategy NE has the characteristic that annoys you about that one (that if the other player changed slightly, one player would not want to mix any more). But hopefully I don't need to argue that this is too extreme an approach. (What would you do with matching pennies? And for that matter, it wouldn't even give you a better grasp of the game you propose, for reasons I mention above.) It's a pretty delicate web. It's difficult to go chopping things out without causing problems elsewhere.
Ultimately I think one of the most difficult concepts to get across to people about game theory is that it doesn't always tell you what will happen. We all know this in theory, but you have to constantly remind yourself when you're doing it.
Posted by: Mike Freimuth | September 18, 2015 at 02:57 AM
Mike: true. But if we converted Brad's game into a sequential moves game, I think the 50:50 mixed strategy equilibrium is a lot less plausible than the other two.
Posted by: Nick Rowe | September 18, 2015 at 07:50 AM
Mike,
Thanks for your response. You have a good point about putting people in a room and having them play the one-shot game as written. The problem is that Nash Equilibria in general are not good predictors in this situation - it's only after people have played the game for a dozen or so rounds that they generally converge on something that looks like an equilibrium. My concern is that they are unlikely to converge on an equilibrium like the one above (I have only my intuition to go on here - I'm going to have to do a little digging in the experimental literature). And as Nick points out, that equilibrium still exists in the repeated version of a coordination game.
Still, you have convinced me that part of the problem is using the simultaneous game as a model for more complicated dynamic situations. Unfortunately, people do that all the time (economists and non-economists alike). Perhaps it's because repeated games are so complicated, and often subject to "anything goes" folk theorems. People are trying to get the right intuition by thinking about a simpler simultaneous move game. Unfortunately, I think these types of equilibria might give exactly the wrong intuition (or formal predictions). That's an untested hypothesis, but that is my concern.
It's also not just about mixed strategy vs pure strategy. The mixed strategy in a game of Rock-Paper-Scissors (or whatever you call your local cultural variant) is unique and very compelling. Here the simultaneous game gives a good intuition for what you will find in long-term repeated play.
-Brad
Posted by: Brad | September 18, 2015 at 11:52 AM
One way to reformulate the result above, "theorem-like" would be:
Given a delta > 0 (a neighborhood) and a p, 0 < p <= 1 (a required probability), in Game B there always exist an N and a t < N such that the probability of player N-t playing within (s* +/- delta) is less than p.
In other words, if you require that every player plays "close enough" to s* with at least some given probability, I can always find you a player in a "long enough" game who violates that requirement.
This could be taken as a definition of some notion of "non-robustness" in a sequential game with uncertainty, although it depends on N.
Ideally, it'd be better to have a definition of "robustness" rather than "non-robustness", but the negation of "I can always find" is not "I can never find" but "I can sometimes not find". For example, in Game A it may be the case that such a player can be found for some delta but not another (this also mirrors the assumption made in New Keynesian models, which assume that the adjustment paths always stay "in a neighborhood" of the steady state no matter what).
Also in Game A, while intuition suggests, as Nick notes, that the errors will cancel out, so the variance of strategies will not explode, this isn't obvious. That's because errors made by different players get weighted differently, with earlier errors being amplified - and weighted - more.
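To illustrate the "theorem", one can hunt for the violating player numerically. Again, this is only a sketch under a naive b = 1.5 reaction rule (each player responds to the mean realized outcome so far), not the exact equilibrium strategies:

    import random, statistics

    def outcome_path(b=1.5, n_players=400, sigma=0.01, s_star=0.0):
        """Outcomes for one play-through under the naive reaction rule."""
        outcomes = []
        for _ in range(n_players):
            m = statistics.mean(outcomes) if outcomes else s_star
            outcomes.append(s_star + b * (m - s_star) + random.gauss(0, sigma))
        return outcomes

    random.seed(1)
    delta, p_req, trials = 0.05, 0.9, 500
    paths = [outcome_path() for _ in range(trials)]
    for i in range(400):
        prob = sum(abs(path[i]) < delta for path in paths) / trials
        if prob < p_req:
            print('player', i + 1, 'stays within +/-', delta, 'of s* with probability only', prob)
            break
    # Some player down the queue always shows up once delta is small enough
    # (or p_req demanding enough): the "non-robustness" at work.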
Posted by: notsneaky | September 18, 2015 at 01:00 PM
Nick, if you make it a sequential move game (where they take turns moving) I'm sure that equilibrium will go away. If you make it a repeated game like Brad suggests (where they move at the same time but do it over and over again) it is probably still there, but it's also still not obvious to me that it is an implausible equilibrium. It would be interesting to do an experiment like I suggested where people play the game repeatedly. They have to start out by guessing. My suspicion is that most of the time, once they got a match, they would keep playing that pure strategy from then on. If they both played a strategy like "randomize until there is a match and then choose that action until the other guy deviates, then randomize again" it would also be a NE. It's not obvious that people would always do this though. And there isn't a clear (to me) set of criteria for ruling out any of these equilibria. All of the complaints basically come down to some version of "well, if the other person changed their strategy, then it wouldn't make sense to do that any more", but that's usually the case with NE. If you want to find a concept, it has to say something very specific about the WAY the optimal strategies react to changes by other players. It's sort of a tricky business.
And yeah, like I said, one thing we all agree on is that NE is not always predictive. We just have to keep reminding ourselves.
Posted by: Mike Freimuth | September 18, 2015 at 02:51 PM
BTW, note that the repeated game has all kinds of unconvincing equilibria. For instance: Both players alternate L,R,L,R no matter what happens. No reason to believe that people *should* behave that way. But if they did, neither one of them could be better off by unilaterally changing.
Posted by: Mike Freimuth | September 18, 2015 at 02:57 PM
That equilibrium doesn't go away in a sequential game.
Posted by: notsneaky | September 18, 2015 at 03:54 PM
The mixed strategy equilibrium does go away. If the second player observes the first player's move, they will choose the same thing with probability 1.
Posted by: Mike Freimuth | September 18, 2015 at 07:56 PM
Sorry, I thought we were still talking about the s* game.
Posted by: notsneaky | September 18, 2015 at 11:31 PM
Noah: "But some economists would reject that use of the word `unstable'. I am trying to restate that objection in another way."
Stick to "unstable". Equilibrium is either stable or unstable (occasionally neutral). As for those economists who reject this use of the word "stable", they are fools. The only question is whether they can be redeemed (educable) or they are irredeemable, in which case the project at hand is to discredit their competence and get them out of economics.
Posted by: John Morrison | September 19, 2015 at 01:06 AM
John: (You mean Nick, not Noah.)
"Equilibrium" means "what the model says will happen".
"Stable/unstable" traditionally means "will/will not return to equilibrium if hit with a temporary shock".
But a prediction of what will happen if hit with a temporary shock is itself the equilibrium of another model.
So you get into an infinite regress.
For example, whether an equilibrium is "stable/unstable" in the old-fashioned sense may depend on how players update their expectations of the other player's future actions. But then you are implicitly talking about the dynamic equilibrium path of a new model which incorporates assumptions about learning. And whether those assumptions about learning do or do not make sense depends on the information available to the players, etc.
"Robustness/fragility" asks instead "will the equilibrium change by a very large amount if the assumptions change by a very small amount?"
The critique of the traditional sense of "stable/unstable" is a valid critique. But the correct way to respond to that critique is to try to reform the traditional sense, not to ignore the valid core of the traditional sense, nor to ostracise the critics.
Posted by: Nick Rowe | September 19, 2015 at 06:34 AM
@Nick, not sure I follow your last comment. Perhaps I am not familiar with what economists mean by these concepts?
Equilibrium is defined in the context of a single dynamic model. Stable/unstable means whether the dynamic model predicts a return to, or a divergence from, the original state when there is a small shock. This is not in the context of a new model, and what will happen is not typically referred to as an equilibrium state -- e.g. in most systems, you will get a path that describes oscillations around the original state if it was a stable equilibrium state. It is generally considered that unstable equilibria are implausible states in which to find a dynamic system, because the system can exist in that state only if it is very carefully prepared. This need not have anything to do with whether the actual equilibrium state is sensitive to assumptions or not.
For example, consider a simple pendulum made of a rigid rod. It has an unstable equilibrium when the rod is pointing straight up. This is an implausible state in which to find this system, and this is not because it is sensitive to any assumptions. No matter what the mass of the rod or its length, this unstable equilibrium looks pretty similar.
Robustness/fragility, the way you are describing them, sounds more like asking whether the dynamic system is chaotic or not. This does not have anything to do with plausibility. It is a property of the system. The existence of states which are not only stable equilibria but have the additional property that the system converges to them eventually, no matter where it starts from, can be a very useful feature for predictability. But if the system lacks such states, that does not make it any less plausible as a model; it just makes the behavior of the system more complex (and arguably more interesting to study).
Posted by: nivedita | September 19, 2015 at 04:43 PM
Nick,
It's in quotes, but I missed who you are quoting, and I'm not sure if you are agreeing with it or not, but just for the record I think "what the model says will happen" is not a good definition of equilibrium. I would say something like "a state which, if attained, will persist." Of course, the application to a one-shot game probably requires a bit more nuance, but that's the general idea. Obviously, when there are multiple equilibria, the model doesn't tell us "what will happen", only the various states in which, if you were there, no force within the model would tend to push you away from that spot. What you seem to want to do is add a force which pushes you away a little and distinguish between an equilibrium for which there is a force in the model which will tend to PUSH YOU BACK and those for which there is a force which will tend to push you farther away. A worthwhile goal, but tricky, because most NE hang on a knife edge in one way or another. For instance, if you take the matching pennies game, which has only one NE, it is still the case that if one player's strategy deviates by a tiny amount, the other player's optimal strategy swings all the way to one extreme. Yet I don't think we want to eliminate this equilibrium.
John,
That is terrifying, I hope it's a comedic bit....
Posted by: Mike Freimuth | September 19, 2015 at 08:57 PM
Mike: I made up that "quote"; but I think it's an accurate reflection of what many economists think, and I lean towards thinking it myself.
" I would say something like "a state in which, if attained, will persist."" But, in a model with a unique equilibrium, doesn't the model say it will pertain and persist? Which means it's the same as "what the model says will happen". And if there are multiple equilibria, what prevents jumps from one to another?
nive: there is a danger in pushing the physics/engineering analogy. Because people have expectations. What happens depends on what they expect to happen.
The inverted pendulum is an unstable equilibrium, and also "fragile", in the sense that a small gust of wind (like a trembling hand) will cause a large deviation.
Posted by: Nick Rowe | September 20, 2015 at 07:13 AM
@Nick, how people form expectations and what they do as a result is what the model is supposed to be modeling, no?
"What the model says will happen" is how the system will evolve from a given initial state. In a model with a unique equilibrium (assuming it's stable, which if it's unique is "likely"), as well as some analog of "friction", as well as the absence of exogenous factors, the system will eventually settle down in the equilibrium state. So you need a few more assumptions before equilibrium is "what the model says will happen".
All unstable equilibria are fragile in that sense -- that's the definition of unstable. But there are other equilibria that are stable, but are still subject to large deviations from small effects, and we do not view them as implausible as a result. For example a spinning top, when it comes to rest will be fallen over on the floor and pointing in some direction. For a real top, the final direction will be very sensitive to exactly how it was set spinning, any imperfections in the floor on which it's spinning etc. But we do not get surprised when we see a top lying on the floor pointing in some particular direction. We would be extremely surprised, however, to see a top in its unstable equilibrium, at rest but standing upright.
Posted by: nivedita | September 20, 2015 at 10:33 AM
OK, here is a simple two person non-zero sum game.
Each player chooses a number from {0, 1, 2}. The payoff for each player is 0, except in four cases. If a player chooses 1 and the other player chooses 0 or 2, the payoff for that player is -1. If one player chooses 0 and the other player chooses 2, the payoff is -2.
Here is my question. What does each player expect the other player to do?
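(For anyone who wants to check the equilibria by brute force, here is a sketch; it assumes the -2 applies to both players in the 0-versus-2 cells, which the rules above leave slightly open:)

    import itertools

    def u(a, b):
        """Payoff to a player choosing a when the other player chooses b."""
        if a == 1 and b in (0, 2):
            return -1
        if (a, b) in ((0, 2), (2, 0)):
            return -2
        return 0

    acts = (0, 1, 2)
    for a, b in itertools.product(acts, acts):
        if all(u(a, b) >= u(x, b) for x in acts) and \
           all(u(b, a) >= u(y, a) for y in acts):
            print((a, b), 'is a pure-strategy Nash equilibrium')
    # prints (0, 0), (1, 1) and (2, 2)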
Posted by: Min | September 20, 2015 at 11:28 AM