


Nick, I'm trying to think of a real world example. Suppose F is NGDP and M is interest rates. The fiscal authorities were told to target NGDP and the monetary authority was told to target interest rates. Then the two realize it would be easier if the fiscal authority targeted interest rates and the monetary authority targeted NGDP. If the monetary authority pegs the price of NGDP futures, then the fiscal authority cannot budge expected NGDP, but can change interest rates.

Is that the sort of example we are working with here?

Scott: yes, exactly that sort of thing. And trying to figure out why it wouldn't be symmetric, so it would actually make a difference if the two authorities swapped targets.

In engineering terms this is a multivariable control problem, with two control loops and a high degree of interaction between the loops. The classic example is a shower with a hot water knob (variable h) and a cold water knob (c), where you want to control both the temperature and the flow rate (T to T* and F to F*). In response to someone else flushing the toilet, the flow of cold water drops a lot, and the flow of hot water increases a little (say from pressure effects in the pipes). Suddenly your shower is too hot, and has too low a flow. So you turn up the cold water tap until you get the temperature right, but now you have too much water flow, so you have to turn down both taps a bit, until the flow is correct. (That is, a change in h of 1 results in a change in T of 1 and a change in F of 1, and a change in c of 1 results in a change in T of -1 and a change in F of 1; i.e. T = h - c and F = h + c at the operating point T*, F*.)
In engineering controls we would construct two synthetic variables (a and b, with starting points a* and b*) so that each measured variable is influenced by only one of the new variables (T = a, F = b). In this case you set c = (b-a)/2 and h = (b+a)/2.
Now in your shower problem, you have T-T* = 4 (temp is too hot) and F-F* = -2 (flow is a little too low), so you set a-a* = -4 and b-b* = 2. Then c-c* = (b-a)/2 = (2-(-4))/2 = 6/2 = 3 and h-h* = (2+(-4))/2 = (-2)/2 = -1; that is, you turn up the cold by 3 units and the hot down by one unit, so temperature goes down by 4 units (3 from increasing cold, 1 from decreasing hot) and your flow goes up by two units (3 up from increasing cold, 1 down from decreasing hot), and you are back at your setpoints.

This works well if you know the gains (or elasticities) reasonably well, and the same controller sets both knobs (c and h in this case) and has responsibility for both measured variables (T and F). You also have to adjust both synthetic variables simultaneously for it to work.
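Jesse's decoupling trick is easy to check numerically. A minimal sketch in Python (the function names are mine; the gains are the idealized ones from the example):

```python
# Plant from Jesse's shower example: T = h - c (temperature), F = h + c (flow).
def plant(h, c):
    """Changes in temperature and flow produced by knob moves (h, c)."""
    return h - c, h + c

# Synthetic variables a, b chosen so T depends only on a and F only on b:
# setting h = (b + a)/2 and c = (b - a)/2 gives T = a and F = b.
def knobs_from_synthetic(a, b):
    """Recover the hot/cold knob moves from the synthetic moves."""
    return (b + a) / 2, (b - a) / 2

# Shower is 4 units too hot (command a = -4) and 2 units short on flow (b = +2):
dh, dc = knobs_from_synthetic(-4, 2)
dT, dF = plant(dh, dc)
print(dh, dc)  # -1.0 3.0 -> hot down one unit, cold up three
print(dT, dF)  # -4.0 2.0 -> temperature falls 4, flow rises 2
```

Each synthetic command moves exactly one measured variable, which is the whole point of the construction.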

I was thinking about this in a very simple model.

Say the 2 instruments are:

1. Adjusting the size of budget deficit by changing the tax rate
2. Adjusting the money supply via asset swaps for money

and the 2 targets are :

a. Run a balanced budget.
b. Hit an NGDP target.

Assume policy works immediately and no shocks.

Assigning a. to F and b to M would make this very easy.

But if F was used to hit the NGDP and M to maintain the budget deficit would it matter ?

In this simple model I don't think so. Every time G reduced taxes to increase NGDP, M would have to increase the money supply to boost income back to the level where T balances the budget, in turn causing an increase in T to prevent an NGDP overshoot. I think we end up with T, G and M the same as if M had the NGDP target.

Obviously the more shocks and lags you introduce and the more complex you make the targets (sustainable rather than balanced budget, inflation or interest rate target rather than NGDP) the more complex it becomes.

Nick, is E(X) the expected value of X? If not, what is E(.)? So you're trying to minimize the mean squared errors?

This post is great because you hardly need to know any econ jargon to understand it.

The fact that one minister doesn't know Um and the other doesn't know Uf is strange. Is that related to some aspect of the real world? Also, I guess it doesn't matter if they don't affect each other's error signal. If I were writing an algorithm to try to solve this, I'd want to have all the info, including the two errors M*-M and F*-F: I'd create one bigger problem with input x=[m f V' Um' Uf']' and output z=[(M*-M) (F*-F)]'. A simple model would be

z = A*x + b

Then I'd try to estimate the unknown elements of A and b by a least squares fit to collected data (and by perturbing m and f appropriately). Some elements of A I know are 0 a priori. Then I'd pick m and f so as to minimize z'*z, forever refining my estimates of the elements of A with each iteration.

Of course that's a very simple model!
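For what it's worth, here's a minimal sketch of that estimation step (the "true" A and b are invented, and the V/Um/Uf inputs and the noise term are dropped for brevity):

```python
import numpy as np

# Fit the unknown A and b in z = A @ x + b by least squares on collected
# (x, z) pairs, with x = [m f]'. The "true" values are purely illustrative.
rng = np.random.default_rng(0)
A_true = np.array([[1.0, 0.5], [-0.5, 1.0]])
b_true = np.array([0.2, -0.1])

X = rng.normal(size=(50, 2))        # 50 perturbations of the instruments
Z = X @ A_true.T + b_true           # observed errors (noise-free here)

# Augment x with a constant 1 so b is estimated alongside A.
X1 = np.hstack([X, np.ones((50, 1))])
coef, *_ = np.linalg.lstsq(X1, Z, rcond=None)
A_hat, b_hat = coef[:2].T, coef[2]

# Once A and b are known, pick [m f]' that drives z to zero:
x_star = np.linalg.solve(A_hat, -b_hat)
print(np.allclose(A_hat, A_true), np.allclose(b_hat, b_true))  # True True
```

With noise added you'd need more data (or the recursive weighting discussed below in the thread), but the mechanics are the same.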

I guess each row of the above problem can be done in isolation (since the elements of A multiplying Uf on the top row are 0 and the elements multiplying Um on the bottom row are 0).

I don't get your point 3. Can you elaborate?

Jesse and I are on the same page here. My z above is a 2x1 vector in case that wasn't clear. And he's correct, you do need to essentially set both m and f at the same time, or at least set them with knowledge of what the other will be. Because after you have estimates for A and b, you can divide A as

A = [A1 A2]

where A1 is a 2x2 matrix and likewise x = [x1' x2']' where x1 is 2x1 and of course x1=[m f]'

Then the solution is:

x1 = inverse(A1)*(z - b - A2*x2) where x2 is known: x2 = [V' Um' Uf']'

and inverse(A1) will not be diagonal in general (since A1 won't either)

This simple model could be expanded to include higher order terms such as m^2, f^2, m*f, etc.
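Tom's partitioned solve, sketched numerically (A1, A2, b and x2 are all made-up numbers, just to show the mechanics):

```python
import numpy as np

# A = [A1 A2], with A1 the 2x2 block multiplying x1 = [m f]'.
A1 = np.array([[1.0, 0.5], [-0.5, 1.0]])
A2 = np.array([[0.3, 0.1, 0.0], [0.0, 0.2, 0.4]])
b = np.array([0.2, -0.1])
x2 = np.array([1.0, -2.0, 0.5])   # known inputs: [V Um Uf]'
z_target = np.zeros(2)            # drive both errors to zero

# x1 = inverse(A1)*(z - b - A2*x2), done with a linear solve:
x1 = np.linalg.solve(A1, z_target - b - A2 @ x2)
print(np.allclose(A1 @ x1 + A2 @ x2 + b, z_target))  # True
```

Note that inverse(A1) is generally not diagonal, which is exactly the coupling between the two instruments being discussed.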

So just to be clear, when I wrote this in my 1st comment:

"I guess each row of the above problem can be done in isolation (since the elements of A multiplying Uf on the top row are 0 and the elements multiplying Um on the bottom row are 0)."

That's not necessarily true because that doesn't say anything about the off-diagonal elements of A1: only if they are zero can we solve the top and then the bottom w/o regard to the other: i.e. only if the problems were uncoupled, but they explicitly are coupled here.


Suppose you and I secretly swap blogging positions, pretending to be each other. I start trying to achieve optimal Canadian monetary policy, and you do the same for the Fed.

What is the new equilibrium? Keep in mind that if I start having "Nick Rowe" blog outrageous stuff, you can punish me in the following round. Each day there is a 99% chance we continue.

Jesse: good example.

Tom: "Nick, is E(X) the expected value of X?..... So you're trying to minimize the mean squared errors?"

Yes to both.

"I don't get your point 3. Can you elaborate?"

No. I don't really get it either.

Bob: but as good ministers of the crown, we both try to hit our assigned targets. No republican thoughts allowed in this exercise!

Shoot, I still don't have it quite right: I need to express everything in terms of deltas away from a nominal. Say, for simplicity, that I never change my estimates of A and b once I have enough data to do so. And then say at time t[n] I have an x1[n] I'm happy with, and I know that at t[n+1] x2[n] will change to x2[n+1]. Then based only on that I can set x1[n+1] = x1[n] + dx1[n+1] where dx1[n+1] = -inverse(A1)*(A2*(x2[n+1]-x2[n])). Then when I get the error, z[n+1], at t[n+1], I make a further adjustment to x1[n+1] by -inverse(A1)*z[n+1]. I'd actually want to weight those two adjustments to reflect my growing confidence in my prediction and a shrinking emphasis on the new error z[n+1]. The weights should reach a steady state value, like in a Kalman filter. Also, I should have mentioned that my simple model should contain a noise term, v:

z = A*x + b + v

where v has some covariance R (which I could take to be the identity, I, if I don't know). Now adding in a continuously refined estimate of A and b adds more complication. That estimation could be weighted such that very old data is less important, just in case A and b are not constants, but instead are slowly changing.

Would this work? probably not... my model is probably too simple, but might as well start off with a simple linear model first.
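A runnable sketch of that two-part update (a feed-forward move for the predicted change in x2, then a feedback correction from the measured error), minus the Kalman-style weighting. A1, A2, b, the drift in x2 and the noise level are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
A1 = np.array([[1.0, 0.5], [-0.5, 1.0]])
A2 = np.array([[0.3, 0.1], [0.0, 0.2]])
b = np.array([0.2, -0.1])
A1_inv = np.linalg.inv(A1)

x2 = np.array([1.0, -2.0])
x1 = np.linalg.solve(A1, -b - A2 @ x2)   # start at the no-noise optimum
for n in range(20):
    x2_next = x2 + rng.normal(scale=0.1, size=2)      # x2 drifts each period
    x1 = x1 - A1_inv @ (A2 @ (x2_next - x2))          # feed-forward move
    z = A1 @ x1 + A2 @ x2_next + b + rng.normal(scale=0.01, size=2)  # noisy error
    x1 = x1 - A1_inv @ z                              # feedback correction
    x2 = x2_next
print(np.abs(z))  # the error stays on the order of the measurement noise
```

The feed-forward step cancels the predictable part of the disturbance exactly (in this linear world), so all that's left in z is measurement noise, which is why the unweighted correction doesn't blow anything up here.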

I couldn't quit till I made it work: http://brown-blog-5.blogspot.com/p/extra.html

Example of above if you click my name.

What if we changed the problem such that the minister that has control of instrument f is instead an elected body. Suppose the remaining minister offers to swap targets but the elected body totally ignores the offer because they really like obsessing over F.

Furthermore suppose instrument m has two modes, a conventional one, and an unconventional one which can be used under unusual circumstances. And further suppose that because the minister made some very serious mistakes in previous years the situation becomes such that both targets M* and F* become much harder to hit. The only way the minister can hit his target is by switching m to unconventional mode. But since he is inexperienced with it, he complains to the King that it has "unintended side effects". (He never spells out what these are.) Furthermore, the elected body has become so enthusiastic in pursuing target F* that this makes the minister's job of hitting M* even harder. He complains in his semiannual reports to the elected body about how they are making his job more difficult, but they accuse him of trying to debase M and threaten to audit his ministry.

On second thought maybe I am making this too hypothetical. Nothing like that ever happens in real life, does it?

P.S. I also enjoyed J.W. Mason's post and all of the related links/comments.


I nominate this for the 2014 "Comment of the year award"

Extra credit: clean up my blind search alg (link) to remove non-opt parts and make it more robust to dynamics & non-linearities. Nick I've always wondered if I could turn one of your posts into functioning code: thanks, this was fun. I don't know if I learned anything you had in mind though :(

O/T: Glasner got back to you (and JP had a question there too).


I'm glad to see you taking this question up. However, I'm not convinced by your answer. The assumption that the ministers have private information seems hard to justify. If both are trying to maximize the same social welfare function, there is no reason for them not to communicate their private information to each other. (And no reason not to trust their communication.) And if they are not seeking the same objective, then you don't need private information to make the choice of targets matter.

It seems to me that in practice arguments for the current assignment often come down to a belief that the central bank is maximizing the social welfare function but the elected government is pursuing some other objective.

Another promising direction might be to consider (1) the timing of news about shocks affecting each target; (2) the lag before an adjustment in each instrument affects each target; and (3) the loss function for deviations from the targets. In general, you would want to minimize expected deviations from the targets, as weighted by the loss function.

It seems like it would be straightforward to make an argument for the current assignment along these lines. The output gap is affected by unpredictable shocks, while the budget position is fairly predictable. And deviations from the output gap are presumably more costly than deviations from a stable debt ratio. So it makes sense to assign the faster-acting instrument to output. If we think that -- taking into account the time it takes for a change in policy to be decided on, as well as the time it takes to go into effect -- the faster instrument is monetary policy, then that's what should target output.

I suspect, though, that if you did the math, you'd find the optimal policy is for both instruments to target both targets. Which seems to be what happens in practice.

JW: Glad you found this post.

I'm not convinced by my answer (4b) either.

But what about my point (2)? That seems right, doesn't it? Under certainty (ignoring the degenerate case (1)), we could swap monetary and fiscal targets and it wouldn't matter. AFAIK, I think that point is new, right? I have read a little bit in the old targets and instruments stuff, but I don't remember that point coming up. It was all about the number of instruments having to equal the number of targets. And it does seem to pull the carpet out from under lots of arguments about "monetary policy should target M* and fiscal policy should target F*"

I think your point about there being a longer decision lag for fiscal policy is legit. Friedman was always keen on lags, of course.

I'm still thinking about this one. There has to be something more fundamental that determines the assignment of targets to instruments. I keep coming back to the idea of who moves last, and the fact that Cournot and Bertrand equilibria are generally not the same.

Or, maybe it's no accident that the fiscal authority is the principal, and the monetary authority the agent. It is (in some way) possible to delegate monetary policy in a way that it is not possible to delegate fiscal policy.

Dunno. But I think it's right to set it up as a principal-agent game-theoretic question.

what about my point (2)? That seems right, doesn't it? Under certainty (ignoring the degenerate case (1)), we could swap monetary and fiscal targets and it wouldn't matter. AFAIK, I think that point is new, right?

Well, that was (supposed to be) the main point of my post. And I hope it's new, since I am trying to develop it into an article.

There has to be something more fundamental that determines the assignment of targets to instruments.

I really think it's about politics, not about economics. I think it's a fear that elected governments cannot be trusted with the authority to adjust spending and taxes in response to the output gap. That is the main takeaway from this line of reasoning, from my point of view -- that in principle, the two assignments are equivalent, so the strong preference for the current assignment reflects something outside of our usual framework for analyzing policy.

Again, I don't think the question of who moves first arises unless the two policy-setters are pursuing different objectives. And that brings politics in.

The lags story is the only non-political alternative I can think of.

If we go back to Friedman 1962, the case for an independent central bank begins with “the very appealing idea that it is essential to prevent monetary policy from being a day-to-day plaything at the mercy of every whim of the current political authorities.”

If you did want to focus on who moves last, the time-inconsistency literature would be the start, I guess. Doesn't Rogoff's “The optimal degree of commitment to a monetary target” make an argument sort of along those lines?

JW: Yep. From your post: "Functional finance and sound finance agree that the economy should be at a point like b. If policy were executed perfectly, the economy would always be at such a point, and there would be no way of knowing which rule was being followed. Since both targets should always be at their chosen levels, it would make no difference -- and be impossible to tell -- which instrument was assigned to which target. The difference between the positions only becomes apparent when policy is not executed perfectly, and the economy departs from a position of full employment with sustainable public debt."

AFAIK it's new. But I think your point generalises to any pair of targets, and any model of the economy (outside the degenerate case 1). I think you should do your article about that general case. Because it is not related just to full employment and sustainability of debt.

"I really think it's about politics, not about economics."

Maybe, but if so, we could also say that central banks cannot be trusted to do anything except target inflation, or NGDP, or the price of gold, or the exchange rate, because these are observable targets, and they can be held accountable for hitting them. Whereas nobody really can say what "full employment" or "doing what's best for the economy" really means.

The time-consistency problem is closely related to the "who moves first" question. By promising, if you are bound by your promise, you can switch from moving last to moving first. I promise, then you move, then I do what I promised. You might be onto something there.

Nick, I have no idea what M(.) and F(.) are like, but I think the conclusion of Jesse and this statement from JW Mason:
"I suspect, though, that if you did the math, you'd find the optimal policy is for both instruments to target both targets."
amount to the same thing. It's easy to construct an example which never converges if m* and f* are determined independently. For instance:

M = m + f
F = -m + f
and M* = F* = 0, and initially m0 = f0 = 1

It's easy to see that the answer is m* = f* = 0, but if we solve for first m* and then f* independently (and then back to m* and back to f*, etc), we'll never get there (never get close!), no matter how many times we iterate. What am I missing? Is it known that the real M(.) and F(.) are not like this? (i.e. not tightly coupled)
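Iterating that example one instrument at a time indeed never converges; a quick sketch (the one-at-a-time rule is my reading of "solve for first m* and then f* independently"):

```python
# M = m + f, F = -m + f, targets M* = F* = 0, starting from m = f = 1.
# Each authority solves its own equation, taking the other instrument as given.
m, f = 1.0, 1.0
for _ in range(6):
    m = -f       # monetary: solve m + f = 0 for m, with f held fixed
    f = m        # fiscal: solve -m + f = 0 for f, with m held fixed
    print(m, f)  # bounces between (-1.0, -1.0) and (1.0, 1.0), never (0, 0)
```

The pair just oscillates forever; the joint solution m = f = 0 is never approached.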

Nick, O/T: I ask Sumner a simple question here, and I wonder if you wouldn't mind answering too:

More generally, if iteratively solving for m* and f* independently is ever going to converge, then I think it is necessary that |(dM/df)*(dF/dm)| < |(dM/dm)*(dF/df)| where dX/dy is the partial of X wrt y. For example M=2*m+f,F=m+2f will converge while M=m+2*f, F=2*m+f will not, independent of the choice of {M*,F*}. I add a scilab script to verify (at the bottom after clicking my name).
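A quick numerical check of that inequality on the two linear examples (a sketch only; each step solves one equation for its own instrument, taking the other as given):

```python
# Condition: |dM/df * dF/dm| < |dM/dm * dF/df| for one-at-a-time convergence.
def iterate(a11, a12, a21, a22, n=200):
    """Solve M = a11*m + a12*f = 0 for m, then F = a21*m + a22*f = 0 for f."""
    m, f = 1.0, 1.0
    for _ in range(n):
        m = -a12 * f / a11
        f = -a21 * m / a22
    return m, f

print(iterate(2, 1, 1, 2))   # |1*1| < |2*2|: shrinks toward (0, 0)
print(iterate(1, 2, 2, 1))   # |2*2| > |1*1|: blows up instead
```

In the first case each round multiplies the error by 1/4; in the second, by 4, independent of the choice of {M*, F*}, just as claimed.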

Tom: If there's no uncertainty, so both know both M(.) and F(.), they solve for the solution, and implement it. Suppose that fiscal moves first (Stackelberg). Fiscal knows that monetary will set m=-f, to ensure that M*=0, so fiscal knows that he must set f=0 to ensure that F*=0. And the answer would be the same if they swap targets, or swap who moves first, or if they both move at the same time.

If there is uncertainty, so they don't both know everything, then it will depend on who knows what. For example, we could have M=m+f+U1 and F=-m+f+U2, where U1 and U2 are unknown random variables. But if U1 and U2 are mean zero uncorrelated, and known to neither, it won't change the results, except to add random noise. They set m=0 and f=0.

Nick, yes, the way you describe it there in your 1st paragraph: you are doing exactly what Jesse outlined. Then yes, the timing doesn't matter on when you implement the solution: you solve 2 equations for 2 unknowns simultaneously to find {m*,f*} in one solution. Then you can implement either one ahead of the other or both at the same time if you like: after both are implemented, M=M* and F=F*.

I agree with your 2nd paragraph as well. It's only when you try to solve each separately (assuming the other won't move, for example) that you run into problems. And then the order matters: the wrong order and you won't converge. Even if you have the right order it might be slow going (you'll converge slowly).

When you bring up "it will depend on who knows what" ... under what circumstances would they not share any information they had about any Vs or Us with each other? Is there something in particular you're thinking of in that case?

BTW, I had an O/T link to a simple question I asked Sumner that I was interested in your answer on to, but it's in spam I think.

It's somewhat contrary to the spirit of Nick's models, but I've just been reading Herbert Simon's 1978 Nobel speech and it seems to me another way of motivating this discussion is to say that because of limits on the information-processing and decision making capabilities of the authorities (and of the public) there may be practical reasons to look for simple rules with one target per instrument that use only current data. Among other things - as Nick says - a simple rule based on observables is easier for the public to monitor, which is presumably important to create the right incentives for the policy maker. If you look at why central bankers say they follow a rule, it tends to be for reasons like this. So then the questions Tom Brown raises come into play.

I have to admit, I hadn't thought of convergence and divergence in this context until now; I'd just been thinking about the accumulated deviations from the target.


That's interesting. If you're right, and if we restrict ourselves to rules where each instrument is assigned to one target and uses only its current value, then there will be a clear basis for preferring one assignment over the other.

Another wrinkle. If we think of the instruments concretely as the budget balance and the interest rate, and the targets as the output gap and the growth of the debt ratio, then dF/dm will depend on the current debt ratio, going from zero when debt is zero toward infinity as the debt ratio rises. If we think of f as the primary balance, then dM/dm will also change with the debt ratio, eventually flipping sign at a high enough ratio, as the expansionary effects of higher interest income for holders of public debt outweigh the contractionary effects of higher rates for private borrowers.


Is there a standard reference on this kind of problem?

JW Mason,

Actually I wasn't quite clear about the order mattering (but I think you figured it out). I'll get to that in a second: but my test on the partial derivatives holds. I used "partial derivative" to make it more general: so it could apply to cases when M() and F() are not linear functions, but can be approximated by linear functions near the desired solution.

Back to my order statement: if my test says the problem is bad for solving one at a time, for instance solving the M equation in terms of m and then F in terms of f, you won't help yourself by solving F in terms of f first and then M in terms of m, but you will help yourself by solving M in terms of f and then F in terms of m, or alternatively F in terms of m followed by M in terms of f. In all cases I'm assuming the second solution sees the results of the 1st, and the 3rd sees the results of the 2nd, etc. It's just that when solving for either m or f you assume that the other variable has stopped changing. I guess yet another possibility is that both equations are solved simultaneously, each assuming the other variable doesn't change... I didn't think about that, but I can't imagine that's a good idea either.

In any case, there's no beating just solving two equations in two unknowns: it gets you there in just one step every time (as both Nick and Jesse demonstrated).

Also, when my inequality test becomes an equality, then no one-at-a-time scheme gets you there, no matter how you do the assignment: you have to solve for both simultaneously. My simple example is one of those.


Right on all counts. Personally I'm not worried about the order -- that's Nick's idea -- but I am worried about the assignment. And what seems clear is that if you are constrained for some reason to solutions where each instrument responds only to the current (or past) values of one target, then one set of instrument-target pairs will converge and the other will diverge. I hadn't realized that.

Now a next step would be to assign explicit loss functions for departures of the targets from their preferred values, and explicit adjustment costs for the instruments...

JW, in case Jesse doesn't get back to you, I'd never heard of the "synthetic" variables that he describes, but any standard linear algebra text should describe "diagonalizing" a system of equations, which is essentially what he's doing (in addition to solving the problem). Not all systems can be diagonalized though, for example

M = m + f
F = f

But that doesn't stop us from solving it of course.
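A quick check of that example in Python (A = [[1, 1], [0, 1]] is the matrix form of M = m + f, F = f):

```python
import numpy as np

# A has the repeated eigenvalue 1 but only one independent eigenvector,
# so it is defective (not diagonalizable) -- yet A @ x = y is still solvable.
A = np.array([[1.0, 1.0], [0.0, 1.0]])
print(np.linalg.eigvals(A))                      # [1. 1.]
print(np.linalg.solve(A, np.array([3.0, 2.0])))  # [1. 2.]  (m = 1, f = 2)
```

So M = 3, F = 2 pins down m = 1, f = 2 directly, diagonalizable or not.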

Tom, sure, the algebra is no problem. In fact I'd already written down the solutions for the fiscal-monetary case before reading this post. What I'm interested in is how engineers talk about this stuff.

JW: remember in the olden days, when economists would draw two reaction functions, where the Nash equilibrium is where the two reaction functions cross, and we would draw cobwebs around the Nash equilibrium, and ask whether it would or would not converge, if the two players took turns to move? And if it circled clockwise or counterclockwise you would get different results? That all disappeared with Rational expectations. And came back with adaptive learning? It sounds a lot like that.

Interesting discussion.

Yes, exactly. (Except that at UMass-Amherst the old days never went away.)

And yes, very interesting. This is great, really clarifying, thanks for the post.

We might put weights on the two loss functions, and ask under what circumstances you would put Wm=1 and Wf=0 for one player, and the opposite for the other player?

Too much math for me!

JW, well I'm an engineer, but I must have missed the day in class about synthetic variables... or it's just gone from my memory. Of course I'm an electrical/control/software guy... maybe that's a civil engineer thing. Most likely I just forgot about it :(

... but I do do linear algebra problems frequently, and it never really comes up for me.

When I was in grad school (studying feedback control systems actually), they just mixed all the disciplines together: with controls it didn't matter much what you were trying to control: amplifiers, aircraft or chemical plants. Also, in several of our classes they talked about how it was the economists that came up with various optimization schemes that we were using.

I believe that was the case with a theorem about optimal paths, for example (getting a little fuzzy now). Also linear programming problems (optimization using a tableau). Here's an interesting-sounding read (mixing the two disciplines):

Nick, thanks for answering my OT at Sumner's. I added a follow question there.

So one thing that is already clear -- there is some threshold level of the debt ratio, call it D. Above D, any assignment where the interest rate rule puts zero weight on stabilizing the debt ratio is guaranteed to diverge. This is interesting to me because it has straightforward policy and historical implications. Managing the market for government debt has always been an important function of central banks when debt was very high, like the post-Napoleonic-war UK and the post-WWII US (and in wartime in many countries). And going forward, we have the argument for financial repression as unavoidable in countries where debt ratios have risen too high, like here: http://www.cepr.org/pubs/dps/DP9750

Nick, just testing to see if I can post yet.

JW, Nick: you’ve lost me now, but if any of us peons deserve a footnote somewhere, you’ll let us know, right? Haha :D
(BTW, I can show you how I derived that inequality if you like: it’s pretty straight forward)


Yes please.

JW, say we have this:

Y = A*X

Y and X are 2x1 and A is 2x2 with elements aij and X with elements xi and Y with yi. Solving 1 at a time (we'll start w/ the first row). I'll add a time subscript to the elements of X

x1_1 = (y1 - a12*x2_0)/a11 = y1/a11 - x2_0*a12/a11

Then we solve for x2

x2_1 = (y2 - a21*x1_1)/a22

Now back to x1_2

x1_2 = y1/a11 - y2/(a11*a22) + x1_1*a21*a12/(a11*a22) = K + G*x1_1
K = y1/a11 - y2/(a11*a22)
G = a21*a12/(a11*a22)

x1_(1+n) = K*G^0 + K*G^1 + K*G^2 + ... + K*G^(n-1) + (G^n)*x1_1

You can see right away this only converges if |G| < 1

A similar statement can be made for x2_(1+n). It's also straight forward to prove that if |G| < 1, then indeed these sequences converge to:

X_infinity = inverse(A)*Y

Write out the equation for x1_infinity... it's the same as for x1_(1+n) but never terminates. Then K + x1_infinity*G = x1_infinity which implies K/(1-G) = x1_infinity. Write out K/(1-G) ... I think you'll find it's the same as the top row for inverse(A)*Y

Now do the same for x2_infinity. QED

... BTW, there may be an error in there somewhere but that's the basic idea. Try expanding all the steps and see if I'm a liar.
You can change the order and start w/ the bottom row without changing anything; but if you solve the bottom row for x1 first and then the top row for x2, etc., then the resulting G will be the reciprocal of what it is here.

JW, rather than just a test for convergence, you can use |G| as a description of the rate of convergence (or if > 1, the rate of divergence!)
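To illustrate the rate claim with a made-up K and G: the distance from the limit K/(1-G) shrinks by exactly |G| each step:

```python
# Fixed-point iteration x1 <- K + G*x1, which converges to K/(1-G) when |G| < 1.
K, G = 1.0, 0.25
x1 = 10.0
limit = K / (1 - G)   # = 4/3
for _ in range(5):
    prev_err = abs(x1 - limit)
    x1 = K + G * x1
    ratio = abs(x1 - limit) / prev_err
    print(ratio)      # 0.25 every step: |G| is the per-step convergence rate
```

Subtracting the fixed point from both sides gives (x1' - limit) = G*(x1 - limit), which is why the ratio is exactly G.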


Thanks. I am going to work through this this weekend.

All right, just for laughs, let's move the ball a little farther downfield: so what is the top row of inverse(A)*Y?

inverse(A) = [a22 -a12; -a21 a11]/(a11*a22 - a12*a21)

and thus the top row is:

(a22*y1 - a12*y2)/(a11*a22 - a12*a21)

I see my 1st mistake from the above comment, let's correct it now:

K = y1/a11 - y2*a12/(a11*a22)

Now, expand K/(1-G) = (y1/a11 - y2*a12/(a11*a22))/(1 - a12*a21/(a11*a22)) = (y1*a22 - y2*a12)/(a11*a22 - a12*a21)

Check! We just verified that [1 0]*inverse(A)*Y = K/(1-G)

Now to find a similar K and G for the 2nd row (call them K2 and G2) and verify that

[0 1]*inverse(A)*Y = K2/(1-G2)

Then we'll have proved that the iterations will converge to the correct answer, if they do indeed converge. That part I'll leave as an exercise. :D


I ended up going back to the textbook for this one. Your answer is not quite right. What you want to do, in a case like this, is set up the Jacobian matrix and apply the Jury test. For a two dimensional system of linear difference equations, that requires:

1 - trace + determinant > 0
determinant < 1
1 + trace + determinant > 0

A colleague and I applied this to the interest rate, fiscal balance, output and debt system, and the results were surprising. Under a quite general set of assumptions, the "functional finance" assignment with the interest rate targeting the debt ratio and the fiscal balance targeting output always converges. But the standard assignment with the interest rate targeting output and the fiscal balance targeting the debt ratio only converges for debt ratios below some critical value. Above that value -- which could be on the order of 100% of GDP for plausible parameter values -- you get divergence. So when the debt-GDP ratio is low, the two assignments are equivalent. But when it rises above some critical level, they are not -- stability requires that the interest rate be assigned to the debt ratio.

These were stronger results than I expected, and I'm still checking to make sure they are right.
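For the 2x2 case, those three Jury conditions are equivalent to both eigenvalues of the Jacobian lying inside the unit circle; a quick numerical sanity check with made-up matrices:

```python
import numpy as np

# The Jury conditions quoted above, checked against the direct eigenvalue
# test for a 2x2 discrete system x[n+1] = J @ x[n].
def jury_stable(J):
    tr, det = np.trace(J), np.linalg.det(J)
    return (1 - tr + det > 0) and (det < 1) and (1 + tr + det > 0)

def eig_stable(J):
    return np.all(np.abs(np.linalg.eigvals(J)) < 1)

for J in (np.array([[0.5, 0.2], [0.1, 0.3]]),    # stable example
          np.array([[1.2, 0.0], [0.3, 0.4]])):   # unstable (eigenvalue 1.2)
    print(jury_stable(J), eig_stable(J))         # the two tests agree
```

The Jury form is handy because it never requires computing the eigenvalues explicitly, only the trace and determinant.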

JW, maybe we're talking about different problems. What is the Jacobian in the case you're talking about? Is it this:

Jacobian = A = [dM/dm dM/df; dF/dm dF/df] where ";" separates the two rows of the 2x2 matrix? If that's the case, then the Jury test just ensures that the largest eigenvalue of the 2x2 Jacobian matrix A is within the unit circle, which is the normal test for stability of a discrete time feedback system that looks like this:

x[n+1] = A*x[n]

If both eigenvalues are within the unit circle this feedback system will converge to zero (i.e. the zero vector: [0 0]') with infinite iterations, no matter what the initial x is (x[0]). If both eigenvalues are on the unit circle, the iterates stay bounded but don't decay (assuming A is diagonalizable), no matter the initial x. If one's inside and one's outside, what it does depends on the initial x. If both are outside, then any initial x which is not the zero vector will eventually make it blow up.

However the system I was referring to didn't use A (the Jacobian) in this way. Since I solved first one row, and then the other, using the results of one to feed into the other, I essentially broke the problem down into two 1x1 systems:

x1[n+1] = K1 + G1*x1[n]
x2[n+1] = K2 + G2*x2[n]

Where Ki and Gi are constructed from the elements of the original Jacobian (and the original desired result Y in the cases of the Kis). You don't need to do a 2x2 Jury test on that, you just have to be sure that |G1| < 1 and |G2| < 1 (essentially two 1x1 Jury tests). Now it turns out that G1 = G2 ... let's call that g, so you can just do one test: |g| < 1. It's also true we can make a new 2x2 discrete time system out of that, but it'll be an uncoupled system (i.e. the off diagonals will be zero):

x[n+1] = G*x[n] + K, where G = [g 0; 0 g], and K = [K1 K2]';

Doing your 2x2 Jury test (test for eigenvalues inside the unit circle) on 2x2 diagonal matrix G will give you

1 - 2*g + g^2 > 0
g^2 < 1
1 + 2*g + g^2 > 0

Well the 1st and 3rd lines can be factored as (g - 1)^2 and (g + 1)^2 resp., which are squares, so it's always true they're > 0 (provided g isn't +/- 1) which just leaves g^2 < 1 which is the same as requiring |g| < 1, assuming the Jacobian (A) is all real valued in the first place. That was my original test.
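The scalar version of that is easy to check numerically (K and g here are invented values with |g| < 1):

```python
# Scalar decoupled iteration x[n+1] = K + g*x[n], with |g| < 1:
K, g = 2.0, 0.5
x = 10.0                      # arbitrary starting point
for _ in range(100):
    x = K + g * x
fixed_point = K / (1 - g)     # solve x = K + g*x for the settling value
print(x, fixed_point)         # both 4.0
```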

So anyway, that's a long aside, but my point is that iteratively calculating

x[n+1] = A*x[n] equation 1

with A the Jacobian is a lot different than iteratively calculating

x[n+1] = G*x[n] + K equation 2

Which is what I thought you were trying to do.

Please let me know how it's going... I'm interested to know if I have the correct Jacobian here, and if so, why you want to calculate equation 1 iteratively rather than equation 2. Recall that I originally set up the problem like this:

Y = A*x

So it's the elements of both Jacobian A and desired result Y that go into K, but only elements of A that go into g (and thus G).

Also, you can try what I coded up here (at the very bottom... scroll past all the junk at the top):
You can try it out in Scilab for yourself (Scilab is a free download). I'm pretty sure it will always work for you: the solutions always converge iteratively to x = inverse(A)*Y if A passes my convergence test, and they don't if it doesn't.
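For anyone without Scilab, here is the same experiment sketched in Python/NumPy. The example A and Y are my own (any A with |a12*a21/(a11*a22)| < 1 would do):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Y = np.array([5.0, 10.0])

g = A[0, 1] * A[1, 0] / (A[0, 0] * A[1, 1])   # the convergence test quantity
print(abs(g) < 1)                              # True (g = 1/6), so it should converge

x1, x2 = 0.0, 0.0
for _ in range(100):
    x1 = (Y[0] - A[0, 1] * x2) / A[0, 0]   # solve row 1 for x1, holding x2 fixed
    x2 = (Y[1] - A[1, 0] * x1) / A[1, 1]   # then row 2 for x2, using the new x1
print(np.allclose([x1, x2], np.linalg.solve(A, Y)))  # True: converged to inverse(A)*Y
```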


I shouldn't have said your solution was not right. Sorry. I was wrong for two reasons. First, you are right, the problem as I've set it up is slightly different from the way you did it. I have both variables adjusting simultaneously on the basis of the previous period's values, as opposed to one and then the other. Exactly as you say. Second, for reasonable parameter values (at least, if each instrument moves its own target in the right direction without overshooting) the first and third conditions of the Jury test will

Hi JW, your last sentence got cut off there. But let me construct a new feedback system where both x1 and x2 are solved for simultaneously assuming the other will stay fixed. In that case we have
x1[n+1] = (y1 - a12*x2[n])/a11
x2[n+1] = (y2 - a21*x1[n])/a22
Which we can rewrite as X[n+1] = B*X[n] + C
Where B = [0 -a12/a11; -a21/a22 0], C = [y1/a11 y2/a22]'
trace(B) = 0
determinant(B) = -a12*a21/(a11*a22) = -d (note the minus sign), where d = a12*a21/(a11*a22)
1 - trace + determinant > 0 implies d < 1
1 + trace + determinant > 0 implies d < 1
|determinant| < 1 implies |d| < 1
So again, the condition for stability is the same, |d| < 1, since d = g = a12*a21/(a11*a22). If there is a steady state solution it's X = inverse(I-B)*C
If you work that out, you'll see it's the correct answer (same as above), i.e. inverse(A)*Y
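Working it out numerically (with an example A and Y I made up for illustration):

```python
import numpy as np

# Simultaneous scheme X[n+1] = B*X[n] + C, both variables updated at once:
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Y = np.array([5.0, 10.0])

B = np.array([[0.0,                -A[0, 1] / A[0, 0]],
              [-A[1, 0] / A[1, 1],  0.0]])
C = np.array([Y[0] / A[0, 0], Y[1] / A[1, 1]])

X = np.zeros(2)
for _ in range(200):
    X = B @ X + C                             # x1 and x2 each assume the other is fixed
steady = np.linalg.solve(np.eye(2) - B, C)    # X = inverse(I-B)*C
print(np.allclose(X, steady))                 # True
print(np.allclose(steady, np.linalg.solve(A, Y)))   # True: same answer as inverse(A)*Y
```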

Am I getting warmer, or still not matching what you're after?

... and I think if you start with this instead:
x2[n+1] = (y1 - a11*x1[n])/a12
x1[n+1] = (y2 - a22*x2[n])/a21
Then your condition becomes |a11*a22/(a12*a21)| < 1
To see that's true just match up the coefficients with the above. It doesn't matter what Y is for stability, and if you work out the steady state you get inverse(A)*Y again.
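To see the reciprocal condition numerically, here's an A of my own choosing where the original assignment's test quantity is 6 (diverges) but the swapped assignment's is 1/6 (converges):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 1.0]])   # a12*a21/(a11*a22) = 6, reciprocal = 1/6
Y = np.array([4.0, 5.0])

x1, x2 = 0.0, 0.0
for _ in range(200):
    # swapped assignment: row 2 is solved for x1, row 1 for x2, simultaneously
    # (tuple assignment so both updates use the previous period's values)
    x1, x2 = (Y[1] - A[1, 1] * x2) / A[1, 0], (Y[0] - A[0, 0] * x1) / A[0, 1]
print(np.allclose([x1, x2], np.linalg.solve(A, Y)))   # True
```

Running the original assignment on this same A blows up instead.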

Sorry, commenting glitch and then I had to run to class. What I was going to say was that the first and third conditions will always be satisfied, so everything comes down to the determinant being less than one, which is just another way of describing the condition you gave originally. So yes, exactly what I was after.

JW, are you sure the 1st and 3rd don't contribute? What I have above is that the 1st and 3rd are redundant, but they put the lower bound on the determinant, and the 2nd provides the upper bound:

-1 < d < 1

... and what this all says is it doesn't really matter whether we solve first for x1 using row 1, assume that x1 is now fixed and solve for x2 using row 2, etc. Or if we solve for x2 using row 2, then assume x2 is fixed and solve for x1 using row 1, etc. Or if we solve for x1 using row 1 assuming x2 is fixed and simultaneously solve for x2 using row 2 assuming x1 is fixed (your case), etc. All three are almost exactly the same, have the same test for convergence, and converge at the same rate.

What matters is the assignment, like you started off stating: i.e. solving x1 using row 1 or row 2, etc. And then the test and convergence rate in one case just become reciprocals in the other assignment case.

They don't contribute if you set up the problem the way I think you should. Writing b for the budget balance, i for the interest rate, Y for output and D for the change or level of debt, then if dY/db < 0, dD/db < 0, dY/di < 0, and dD/di > 0 (all of which we have good reason to believe), the determinant of the feedback matrix is always positive. So we don't need the lower bound condition.
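To make the sign argument concrete, here's a sketch with invented magnitudes that obey those four sign restrictions (only the signs matter, the numbers are mine):

```python
# Hypothetical Jacobian entries obeying the sign restrictions above:
dY_db, dY_di = -0.5, -0.3   # dY/db < 0, dY/di < 0
dD_db, dD_di = -0.8,  0.6   # dD/db < 0, dD/di > 0

# The determinant of the feedback matrix B = [0 -a12/a11; -a21/a22 0]
# is -a12*a21/(a11*a22). Compute it under each assignment:
det_std = -(dY_db * dD_di) / (dY_di * dD_db)   # i targets Y, b targets D
det_ff  = -(dY_di * dD_db) / (dY_db * dD_di)   # b targets Y, i targets D
print(det_std > 0 and det_ff > 0)   # True: both positive, so only the upper bound binds
print(det_std * det_ff)             # product is 1 (up to rounding): reciprocals
```

With these particular magnitudes det_std > 1 while det_ff < 1, so only the second assignment converges, echoing the convergence result stated earlier in the thread.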

Ah, OK. Well great... thanks for a fun conversation.

The comments to this entry are closed.
