Calling all Bayesians! I need your help on two questions: first I want you to check something I said; second I want you to check something that Bryan Caplan said.

In September 2008, Bryan Caplan issued a challenge (revisited yesterday) for people to say in advance how they would update their prior belief that the bailout would prevent disaster, conditional on observing no disaster. ("Disaster" was defined as greater than 8% unemployment). He was absolutely right to issue that challenge, because otherwise people could say that their view was confirmed whatever happened.

I replied in the comments:

"I understand what "the probability of no disaster conditional on a bailout" means: P(-D/B)

I understand what "the probability of no disaster conditional on no bailout" means: P(-D/-B)

But what does "the probability that a bailout prevented a disaster" mean? Is it P(-D/B)-P(-D/-B)?

Nature tosses two coins (maybe bent ones). The coin tosses are independent. The first coin tells us whether there is a disease which needs fixing; the second coin tells us whether the cure will work. If the first coin is tails there will be no disaster, regardless of the second coin. If both coins are heads, the bailout works and was needed, so there is no disaster. If the first coin was heads, and the second tails, the bailout fails, and there is a disaster.

I don't observe the outcomes of the coin tosses, but do observe whether or not there is a disaster.

HH means -D

HT means D

TH means -D

TT means -D

So, observing -D (low unemployment) means that either HH or TH or TT happened.

If both coins are fair (this roughly matches my priors), then HH, TH, and TT are equally likely, so each has a 1/3 probability. The probability that the first coin was heads P(H./-D) is now 1/3, and the probability that the second coin was heads P(.H/-D) is 2/3. The probability that both were heads P(HH/-D) is 1/3. So my posterior probability (after observing no disaster) that the patient had the disease and the cure worked (that the bailout prevented disaster) is 1/3.

My prior probability (before observing whether or not there's a disaster) that the patient had the disease and the cure would work, (that the bailout would prevent disaster) was P(HH)=1/4.

So observing no disaster would increase my probability that the bailout prevented a disaster from 1/4 to 1/3."
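A brute-force check of that arithmetic (a minimal Python sketch of the two-coin setup described above):

```python
from fractions import Fraction

# The two-coin setup above: coin 1 = H means the patient is sick
# (disaster looms without a cure); coin 2 = H means the cure (bailout) works.
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]  # equally likely

def disaster(c1, c2):
    # Disaster occurs only if the patient is sick AND the cure fails (HT).
    return c1 == "H" and c2 == "T"

# Prior: P(HH) among all four outcomes.
prior_hh = Fraction(sum(1 for o in outcomes if o == ("H", "H")), len(outcomes))

# Posterior: P(HH) among the outcomes consistent with "no disaster".
no_disaster = [o for o in outcomes if not disaster(*o)]
posterior_hh = Fraction(sum(1 for o in no_disaster if o == ("H", "H")),
                        len(no_disaster))

print(prior_hh, posterior_hh)  # 1/4 1/3
```

which agrees with the 1/4 to 1/3 update in the comment.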

First question: assuming my priors are correct, are my reasoning and conclusion correct?

Second question: can Bryan Caplan possibly be right when he states he would update his prior in the *opposite* direction?

"If the bail-out happens, and unemployment stays below 8% for the next two years, I'm going to become less confident that the bail-out prevented disaster. After all, even a near-miss with disaster should look pretty ugly. Alternately, if the bail-out happens, and unemployment hits 8% or higher during the next two years, I'm going to become more confident that the bail-out prevented disaster. I still won't be convinced, but I'll be less skeptical than I am now."

That just doesn't sound possible to me, for any priors (and I said so in the comments), but I can't prove it.

That certainly doesn't sound right to me. If ND is no disaster, EB is effective bailout and IB is an ineffective bailout, then the posterior odds of EB given ND vs IB given ND is

[ p(ND|EB)p(EB) ] / [ p(ND|IB)p(IB) ]

where p(EB)/p(IB) is the prior odds that the bailout is effective. The only way the non-arrival of a disaster would revise this ratio downwards - so that the posterior odds are less than the prior odds - is if

p(ND|EB) < p(ND|IB)

or, equivalently,

p(D|EB) > p(D|IB)

that is, if the probability of a disaster occurring after an effective bailout is *higher* than the probability of a disaster following an ineffective bailout.

If you define 'effective' as meaning 'reducing the chances of disaster', then the claim doesn't satisfy Bayes' rule for updating beliefs.
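A quick numeric sanity check of that proof (the .5/.6/.9 probabilities below are illustrative assumptions, not estimates):

```python
# Posterior odds of EB vs IB after observing no disaster (ND),
# per Bayes' rule: [p(ND|EB) p(EB)] / [p(ND|IB) p(IB)].
def posterior_odds(p_nd_eb, p_nd_ib, p_eb, p_ib):
    return (p_nd_eb * p_eb) / (p_nd_ib * p_ib)

p_eb = p_ib = 0.5      # illustrative priors: even odds that the bailout works
prior_odds = p_eb / p_ib

# If an effective bailout makes "no disaster" MORE likely, observing ND
# raises the odds that the bailout was effective...
assert posterior_odds(0.9, 0.6, p_eb, p_ib) > prior_odds
# ...and the odds fall only in the perverse case p(ND|EB) < p(ND|IB).
assert posterior_odds(0.5, 0.6, p_eb, p_ib) < prior_odds
```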

Posted by: Stephen Gordon | January 15, 2009 at 11:43 AM

Nick,

I'm no probability expert, but on a quick read, I’d have to say I agree with Caplan, on the basis that it looks to me like he did not in fact define disaster as “anything” north of 8 per cent unemployment (my emphasis).

I think he defined disaster as “a point somewhere” north of 8 per cent, although not specifically designated, but north in the sense of greater but not equal (my emphasis).

He then defines the state of the hypothetical outcome as non-disaster.

He then admits the possibility that non-disaster outcomes may range from south of 8 per cent to north of 8 per cent.

Then it seems to me intuitively that the closer the non-disaster outcome is to the counterfactual disaster outcome, the more likely it is that the bailout prevented it.

Posted by: JKH | January 15, 2009 at 11:54 AM

JKH: I would like to believe your interpretation, and it makes some intuitive sense. But I have re-read his statement, and he seems to define only two states of the world: below 8% (no disaster); and at or above 8% (disaster). Everybody makes mistakes, and this might be one.

Stephen: That's a nice clear and simple proof. But I'm not sure that "the bailout is effective" means the same as "the bailout prevented disaster". To my mind, saying "the bailout prevented disaster" is like saying "the medicine is effective AND the patient had the disease". If you observe the patient surviving, you might conclude that the medicine was effective, or you might conclude the patient never had the disease. That's where I get muddled.

Posted by: Nick Rowe | January 15, 2009 at 03:28 PM

Nick,

He says of his own view:

"Alternately, if the bail-out happens, and unemployment hits 8% or higher during the next two years, I'm going to become more confident that the bail-out prevented disaster."

If unemployment is 8 per cent or higher, and as a result he's (more) confident that disaster has been prevented, how at the same time could he have defined disaster as 8 per cent or higher?

Posted by: JKH | January 15, 2009 at 03:42 PM

I think all you need to make this work is the notion that good states are more likely to occur if the bailout is effective than if the bailout is not effective.

Of course, to do all this properly, you'd have to have the counterfactual of what would happen with no bailout. But it's hard to see how a worsening situation would lead you to revise upwards your subjective probability that the bailout was effective.

Posted by: Stephen Gordon | January 15, 2009 at 03:50 PM

JKH: OK, I see your point now (I was slow). So implicitly he has three states, and doesn't define the level of unemployment which constitutes disaster (but we know that 8.001% would not be disaster). Suppose it's >10%. Then he is saying that for u<8% and u>10% his posterior confidence that the bailout prevented disaster falls, but for 8%<u<10% it rises.

Stephen: yes, that's what I was thinking.

Posted by: Nick Rowe | January 15, 2009 at 04:57 PM

We're talking about real life, the intersection of politics, ideology, greed, etc. There are no coins. There are no repeated experiments. These are not actually random variables. Therefore, the application of Bayesian theory is inherently flawed. Applying any kind of stochastic analysis to economic outcomes makes as much sense as betting on professional wrestling or the outcome of a Harlem Globetrotters game (hint: don't bet on the Washington Generals). An assumption of Caplan's challenge is that some quantified objective measure can be used to determine the outcome of the experiment (i.e., did the coin land heads or tails?). He suggests the unemployment rate. Which is released by people firmly immersed in the world of politics, ideology, greed, etc., and who have a strong vested interest in people believing that the outcomes of their actions are beneficial.

Posted by: ramster | January 16, 2009 at 11:12 AM

You should also add in the possibility that there would not have been a disaster (BTW 8+% unemployment wouldn't be categorized as a disaster by me, perhaps "a recession") but that the bailout CAUSES a disaster.

Posted by: Tim Fowler | January 16, 2009 at 12:50 PM

The real problem here was that Caplan’s articulation of both the problem and the solution were not quite so elegant as the Bayesian probability logic he was invoking.

Otherwise - Bayesians rock.

:)

Posted by: JKH | January 16, 2009 at 01:55 PM

In order to analyze Caplan's scenario, you have to extend your Bayesian analysis to several periods. Caplan's answer would suggest that the PDF is not stable - that results are path dependent. Since employment lags economic shocks and the effects of fiscal stimulus (bailout) lag several quarters, we cannot determine the PDF of "effectiveness" until we know the path. From Caplan's analysis, he would appear to be stating that a disaster has fat tails: i.e., it is unlikely that unemployment would be contained to 8% with a full-fledged banking crisis, and it is unlikely that unemployment would exceed 8% if a full-fledged banking crisis did not occur. The paradox is in the assumed PDF in the Bayesian analysis. Or better still, favoring Bayes over Markov.

Posted by: clr | January 16, 2009 at 02:06 PM

clr: it sounds like you're suggesting that the random variables are not stationary, which makes perfect sense. Just remember, this is a world where Nobel-prize-winning economists construct models using Gaussian distributions to model phenomena better suited to fat-tailed distributions, because it makes the math easier. And the result is a torpedo into the good ship S.S. World Economy. From the POV of someone who uses probability theory to make things that actually work (information theory as applied to wireless communications), the way in which probability theory is used in economics is a bit underwhelming.

Posted by: ramster | January 16, 2009 at 02:41 PM

I don't have much time to work on this, check it, and type it in, but what I have is probably still worth writing; apologies for any errors.

I have done some Bayesian work, but I haven't looked at it in a while. Here are my thoughts:

"But what does "the probability that a bailout prevented a disaster" mean? Is it P(-D/B)-P(-D/-B)?"

You need some additional information on causality to really answer this, including theory, or at least informal, but still solid, logic chains. What I would just say from the givens you're providing is: "Not having a disaster is [P(-D/B)/P(-D/-B)] times as likely (ex-ante) due to doing the bailout."

OK, what about ex-post? This whole thing depends greatly on how you define things, so rigor here is important (but time-consuming!). I hand-wrote this out on paper, and don't have time to type it up in detail, but quickly:

Posterior = update factor x prior belief

In this case:

P[Bailout is a strong positive factor | No disaster occurred AND Bailout tried (and other prior info)]

=

P[No disaster occurred AND Bailout tried | Bailout is a strong positive factor]/P[No disaster occurred AND Bailout tried | Prior info from before you observe whether there is a disaster or not, and this prior information includes prior information or beliefs on whether a bailout is a strong positive factor]

X

P[Bailout is a strong positive factor |Prior info from before you observe whether there is a disaster or not]

End Equation

Sorry I have no time to type up the derivation of this. But what does it essentially say?

If, ex-ante, before observing whether there's a disaster or not, you thought the probability of no disaster given a bailout, and given *your prior belief about whether* a bailout is a strong positive factor, and given your other prior pertinent data, was .7, and

Ex-ante, before observing whether there's a disaster or not, you thought the probability of no disaster given a bailout, and given *the assumption that* a bailout is a strong positive factor (compare this to the emphasized phrase above), and given your other prior pertinent data, was .8, then

Your updating factor is .8/.7; you've increased your probability that a bailout is a strong positive factor by 14.3%.
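Spelling out that arithmetic:

```python
# Update factor on the belief that a bailout is a strong positive factor,
# using the illustrative .8 and .7 from the comment above.
update_factor = 0.8 / 0.7
print(round((update_factor - 1) * 100, 1))  # 14.3 (percent increase)
```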

How do you figure out the .8 and the .7? That's the hard part (and you thought this was hard): formal models and formal empirical data, as well as informal but still complete and logical chains of reasoning anchored to reasonable assumptions, and the logical use of informal data, which can be valuable even when it relies on just very reasonable assumptions (often a lot more reasonable than those used in formal work). It can be very inefficient to ignore informal information.

Posted by: Richard H. Serlin | January 16, 2009 at 04:36 PM

Ok, I looked at your coin flip reasoning.

The probability "(after observing no disaster) that the patient had the disease and the cure worked (that the bailout prevented disaster)" is 1/3. This appears correct.

But then you say:

"My prior probability (before observing whether or not there's a disaster) that the patient had the disease and the cure would work, (that the bailout would prevent disaster) was P(HH)=1/4. So observing no disaster would increase my probability that the bailout prevented a disaster from 1/4 to 1/3."

Here's a reason why this is wrong:

Caplan asked, "how they would update their prior belief that the bailout would prevent disaster".

Your "prior belief that the bailout *would* prevent disaster" is 1/2, not 1/4. You wrote, "the second coin tells us whether the cure will work," and you assumed both coins are fair, so your prior that the cure, the bailout, would work was 1/2.

It looks like your update factor is (1/3)/(1/4), and so your updated posterior is (1/2) x (1/3)/(1/4) = 2/3, but I don't have time to really check to see if that's right.
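For what it's worth, a direct enumeration under Nick's fair-coin setup gives the same 2/3 for the posterior that the cure works (second coin = H), without chaining update factors:

```python
from fractions import Fraction

# Nick's setup: coin 1 = H means the patient is sick, coin 2 = H means the
# cure works; disaster only on HT.  Condition on "no disaster" and count
# the outcomes where the cure works (second coin = H).
outcomes = [("H", "H"), ("H", "T"), ("T", "H"), ("T", "T")]
no_disaster = [o for o in outcomes if not (o[0] == "H" and o[1] == "T")]

posterior_cure_works = Fraction(
    sum(1 for o in no_disaster if o[1] == "H"), len(no_disaster))

print(posterior_cure_works)  # 2/3
```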

Posted by: Richard H. Serlin | January 16, 2009 at 05:29 PM

The 2/3 may not be correct. I think you may not have provided all of the necessary prior information to solve this. I'd have to pull out my Bayesian books and write this up in careful rigorous detail, clearly no time for that now.

Posted by: Richard H. Serlin | January 16, 2009 at 05:53 PM

Richard: I think this is now hinging on how you interpret the words "the bailout prevents disaster".

I interpreted them to mean "If bailout then no disaster, AND if no bailout then disaster". In other words, if you give effective medicine to a patient who really wasn't sick (who wouldn't die even without medicine), then you can't say the medicine prevented death).

So if E is "effective" (the bailout medicine would prevent disaster/death), and if S is "sick" (there will be disaster/death if no bailout medicine is given), then "the bailout prevented a disaster" means E^S (the medicine was effective and the patient was sick)

Then my prior was P(E^S) = P(E).P(S)

And my posterior was P(E^S)/P(Ev-S) = P(E).P(S)/[1-P(-E^S)]

(Since if there is no disaster it means that either the medicine was effective or the patient was not sick.)
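Those formulas can be checked for the fair-coin case (a small sketch; with bent coins you would simply pass in other values of P(E) and P(S)):

```python
from fractions import Fraction

# P(E) = prob the medicine (bailout) is effective; P(S) = prob the patient
# is sick (disaster without the medicine); independent, as in the setup above.
def prior(p_e, p_s):
    # P(E and S): the bailout would prevent a disaster.
    return p_e * p_s

def posterior(p_e, p_s):
    # P(E and S | no disaster) = P(E)P(S) / [1 - P(not-E and S)]
    return p_e * p_s / (1 - (1 - p_e) * p_s)

half = Fraction(1, 2)
print(prior(half, half), posterior(half, half))  # 1/4 1/3
```

which recovers the 1/4 and 1/3 from the original post.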

My brain hurts.

Posted by: Nick Rowe | January 16, 2009 at 07:15 PM

You really have to define everything rigorously and meticulously with this, but still I think the prior probability of most interest, and the one Caplan was interested in, is P[E|S], not P[E^S].

Posted by: Richard H. Serlin | January 16, 2009 at 08:20 PM