
Comments


That certainly doesn't sound right to me. If ND is no disaster, EB is effective bailout and IB is an ineffective bailout, then the posterior odds of EB given ND vs IB given ND is

[ p(ND|EB)p(EB) ] / [ p(ND|IB)p(IB) ]

where p(EB)/p(IB) is the prior odds that the bailout is effective. The only way the non-arrival of a disaster would revise this ratio downwards - so that the posterior odds are less than the prior odds - is if

p(ND|EB) < p(ND|IB)

or, equivalently,

p(D|EB) > p(D|IB)

that is, if the probability of a disaster occurring after an effective bailout is *higher* than the probability of a disaster following an ineffective bailout.

If you define 'effective' as meaning 'reducing the chances of disaster', then the claim doesn't satisfy Bayes' rule for updating beliefs.
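The odds-form argument above is easy to check numerically. A minimal sketch, where the likelihoods are illustrative assumptions of mine rather than numbers from the thread:

```python
from fractions import Fraction

def posterior_odds(p_nd_given_eb, p_nd_given_ib, prior_odds):
    # Bayes' rule in odds form: posterior odds of EB vs. IB given "no disaster"
    # equal the likelihood ratio p(ND|EB)/p(ND|IB) times the prior odds.
    return (p_nd_given_eb / p_nd_given_ib) * prior_odds

# Illustrative numbers (assumptions): an effective bailout makes
# "no disaster" more likely, so p(ND|EB) > p(ND|IB) and the odds rise.
print(posterior_odds(Fraction(9, 10), Fraction(6, 10), Fraction(1)))  # 3/2

# The odds fall only if p(ND|EB) < p(ND|IB), i.e. p(D|EB) > p(D|IB):
print(posterior_odds(Fraction(1, 2), Fraction(3, 4), Fraction(1)))    # 2/3
```

With even prior odds, observing no disaster multiplies the odds by the likelihood ratio, so the posterior odds fall below the prior odds only in the perverse case where disaster is more likely after an effective bailout.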

Nick,

I'm no probability expert, but on a quick read, I’d have to say I agree with Caplan, on the basis that it looks to me like he did not in fact define disaster as “anything” north of 8 per cent unemployment (my emphasis).

I think he defined disaster as “a point somewhere” north of 8 per cent, although not specifically designated, but north in the sense of greater but not equal (my emphasis).

He then defines the state of the hypothetical outcome as non-disaster.

He then admits the possibility that non-disaster outcomes may range from south of 8 per cent to north of 8 per cent.

Then it seems to me intuitively that the closer the non-disaster outcome is to the counterfactual disaster outcome, the more likely it is that the bailout prevented it.

JKH: I would like to believe your interpretation, and it makes some intuitive sense. But I have re-read his statement, and he seems to define only two states of the world: below 8% (no disaster); and at or above 8% (disaster). Everybody makes mistakes, and this might be one.

Stephen: That's a nice clear and simple proof. But I'm not sure that "the bailout is effective" means the same as "the bailout prevented disaster". To my mind, saying "the bailout prevented disaster" is like saying "the medicine is effective AND the patient had the disease". If you observe the patient surviving, you might conclude that the medicine was effective, or you might conclude the patient never had the disease. That's where I get muddled.

Nick,

He says of his own view:

"Alternately, if the bail-out happens, and unemployment hits 8% or higher during the next two years, I'm going to become more confident that the bail-out prevented disaster."

If unemployment is 8 per cent or higher, and as a result he's (more) confident that disaster has been prevented, how at the same time could he have defined disaster as 8 per cent or higher?

I think all you need to make this work is the notion that good states are more likely to occur if the bailout is effective than if the bailout is not effective.

Of course, to do all this properly, you'd have to have the counterfactual of what would happen with no bailout. But it's hard to see how a worsening situation would lead you to revise upwards your subjective probability that the bailout was effective.

JKH: OK, I see your point now (I was slow). So implicitly he has three states, and doesn't define the level of unemployment which constitutes disaster (but we know that 8.001% would not be disaster). Suppose it's >10%. Then he is saying that for u<8% and u>10% his posterior confidence that the bailout prevented disaster falls, but for 8% < u < 10% it rises.

Stephen: yes, that's what I was thinking.

We're talking about real life, the intersection of politics, ideology, greed, etc. There are no coins. There are no repeated experiments. These are not actually random variables. Therefore, the application of Bayesian theory is inherently flawed. Applying any kind of stochastic analysis to economic outcomes makes as much sense as betting on professional wrestling or the outcome of a Harlem Globetrotters game (hint, don't bet on the Washington Generals). An assumption of Caplan's challenge is that some quantified objective measure can be used to determine the outcome of the experiment (i.e. did the coin land heads or tails?). He suggests the unemployment rate. Which is released by people firmly immersed in the world of politics, ideology, greed, etc., and who have a strong vested interest in people believing that the outcomes of their actions are beneficial.


You should also add in the possibility that there would not have been a disaster (BTW 8+% unemployment wouldn't be categorized as a disaster by me, perhaps "a recession") but that the bailout CAUSES a disaster.

The real problem here was that Caplan's articulation of both the problem and the solution was not quite so elegant as the Bayesian probability logic he was invoking.

Otherwise - Bayesians rock.

:)

In order to analyze Caplan's scenario, you have to extend your Bayesian analysis to several periods. Caplan's answer would suggest that the PDF is not stable - that results are path dependent. Since employment lags economic shocks, and the effects of fiscal stimulus (bailout) lag several quarters, we cannot determine the PDF of "effectiveness" until we know the path. From Caplan's analysis, he would appear to be stating that a disaster has fat tails; i.e. it is unlikely that unemployment would be contained to 8% with a full-fledged banking crisis, and unlikely that unemployment would exceed 8% if a full-fledged banking crisis did not occur. The paradox is in the assumed PDF in the Bayesian analysis. Or better still, favoring Bayes over Markov.

clr: it sounds like you're suggesting that the random variables are not stationary, which makes perfect sense. Just remember, this is a world where Nobel prize-winning economists construct models using Gaussian distributions to model phenomena better suited to fat-tailed distributions, because it makes the math easier. And the result is a torpedo into the good ship SS World Economy. From the POV of someone who uses probability theory to make things that actually work (information theory as applied to wireless communications), the way in which probability theory is used in economics is a bit underwhelming.

I don't have much time to work on this, check it, and type it in, but what I have is probably still worth writing; apologies for any errors.

I have done some Bayesian work, but I haven't looked at it in a while. Here are my thoughts:

But what does "the probability that a bailout prevented a disaster" mean? Is it P(-D|B) - P(-D|-B)?

You need some additional information on causality to really answer this, including theory, or at least informal, but still solid, logic chains. What I would just say from the givens you're providing is "Not having a disaster is [P(-D|B)/P(-D|-B)] times as likely (ex-ante) due to doing the bailout."

Ok, what about ex-post. This whole thing depends greatly on how you define things, so rigor here is important (but time consuming!). I hand wrote this out on paper, and don't have time to type it up in detail, but quickly:

Posterior = update factor x prior belief

In this case;

P[Bailout is a strong positive factor | No disaster occurred AND Bailout tried (and other prior info)]

=

P[No disaster occurred AND Bailout tried | Bailout is a strong positive factor]/P[No disaster occurred AND Bailout tried | Prior info from before you observe whether there is a disaster or not, and this prior information includes prior information or beliefs on whether a bailout is a strong positive factor]

X

P[Bailout is a strong positive factor |Prior info from before you observe whether there is a disaster or not]

End Equation

Sorry I have no time to type up the derivation of this. But what does it essentially say?

If ex-ante, before observing whether there's a disaster or not you thought the probability of a disaster given a bailout and given your prior belief about whether a bailout is a strong positive factor and given your other prior pertinent data was .7

and

Ex-ante, before observing whether there's a disaster or not you thought the probability of a disaster given a bailout and given the assumption that a bailout is a strong positive factor (compare this to the bold statement above) and given your other prior pertinent data was .8

then

Your updating factor is .8/.7; you've increased your probability that a bailout is a strong positive factor by 14.3%.

How do you figure out the .8 and the .7? That's the hard part (and you thought this was hard): formal models and formal empirical data, as well as informal but still complete and logical chains of reasoning anchored to reasonable assumptions, and the logical use of informal data, which can be valuable with reliance on just very reasonable assumptions (often a lot more reasonable than those used in formal work). It can be very inefficient to ignore informal information.
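The arithmetic of the updating factor can be checked directly (the .7 and .8 are the illustrative numbers from the example above, not estimates of anything):

```python
# Illustrative probabilities from the worked example above.
p_given_prior_only = 0.7  # probability given a bailout and prior info only
p_given_strong = 0.8      # probability given a bailout, assuming it is a
                          # strong positive factor

update_factor = p_given_strong / p_given_prior_only
print(round(update_factor, 4))               # 1.1429
print(round(100 * (update_factor - 1), 1))   # 14.3 (percent increase in the posterior)
```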

Ok, I looked at your coin flip reasoning.

The probability "(after observing no disaster) that the patient had the disease and the cure worked (that the bailout prevented disaster)" is 1/3. This appears correct.

But then you say:

"My prior probability (before observing whether or not there's a disaster) that the patient had the disease and the cure would work, (that the bailout would prevent disaster) was P(HH)=1/4. So observing no disaster would increase my probability that the bailout prevented a disaster from 1/4 to 1/3."

Here's a reason why this is wrong:

Caplan asked, "how they would update their prior belief that the bailout would prevent disaster".

Your "prior belief that the bailout would prevent disaster" is 1/2, not 1/4.

You wrote, "the second coin tells us whether the cure will work" and you assumed both coins are fair, so your prior that the cure, the bailout, would work was 1/2.

It looks like your update factor is (1/3)/(1/4), and so your updated posterior is (1/2) x (1/3)/(1/4) = 2/3, but I don't have time to really check to see if that's right.

The 2/3 may not be correct. I think you may not have provided all of the necessary prior information to solve this. I'd have to pull out my Bayesian books and write this up in careful rigorous detail, clearly no time for that now.
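Since the two-coin setup has only four equally likely outcomes, both disputed numbers can be checked by brute enumeration. A sketch, where coin 1 is "sick" (disaster would occur without the cure) and coin 2 is "effective":

```python
from fractions import Fraction
from itertools import product

# The four equally likely (sick, effective) outcomes of two fair coin flips.
outcomes = list(product([True, False], repeat=2))

# "No disaster" is observed unless the patient was sick AND the cure failed.
no_disaster = [(s, e) for s, e in outcomes if not (s and not e)]

# "The bailout prevented disaster" as sick AND effective, given no disaster:
p_prevented = Fraction(sum(1 for s, e in no_disaster if s and e), len(no_disaster))

# The cure is effective, given no disaster:
p_effective = Fraction(sum(1 for s, e in no_disaster if e), len(no_disaster))

print(p_prevented, p_effective)  # 1/3 2/3
```

So the 1/3 and the 2/3 are mutually consistent: they are posteriors for different events, "effective AND sick" versus "effective", which is where the disagreement over the right prior comes from.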

Richard: I think this is now hinging on how you interpret the words "the bailout prevents disaster".

I interpreted them to mean "If bailout then no disaster, AND if no bailout then disaster". In other words, if you give effective medicine to a patient who really wasn't sick (who wouldn't die even without medicine), then you can't say the medicine prevented death.

So if E is "effective" (the bailout medicine would prevent disaster/death), and if S is "sick" (there will be disaster/death if no bailout medicine is given), then "the bailout prevented a disaster" means E^S (the medicine was effective and the patient was sick).

Then my prior was P(E^S) = P(E).P(S)

And my posterior was P(E^S)/P(Ev-S) = P(E).P(S)/[1-P(-E^S)]

(Since if there is no disaster it means that either the medicine was effective or the patient was not sick.)

My brain hurts.
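The posterior formula P(E^S)/[1-P(-E^S)] can be checked for general priors; a small sketch assuming, as above, that E and S are independent:

```python
from fractions import Fraction

def p_prevented_given_no_disaster(p_e, p_s):
    # P(E and S | no disaster) = P(E)P(S) / [1 - P(not-E and S)],
    # since "no disaster" rules out exactly the case (not effective, sick).
    return (p_e * p_s) / (1 - (1 - p_e) * p_s)

# With two fair coins this reproduces the 1/3 in the thread:
print(p_prevented_given_no_disaster(Fraction(1, 2), Fraction(1, 2)))  # 1/3
```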

You really have to define everything rigorously and meticulously with this, but still I think the prior probability of most interest, and the one Caplan was interested in, is P[E|S], not P[E^S].
