


Great post.

On your last question: Yes, if it's the only model you're using. No, if you're using it as one reference point among many, or if you're running the model many times and comparing the relative changes rather than the static values.

Regarding your philosophical question--although I warn you that I'm not an econometrician--I would have to say that yes, your subjective estimate is still biased. At least if you don't know the size of the bias.

My thinking: if the estimator is biased, then on average over many repeated experiments the estimated parameter value will differ from the true value of the parameter. You know this will happen on average, but you don't know the size of the problem--so you can't apply a simple correction that would work on average. Or more concisely: you have a prior belief about the parameter value, you update that belief via Bayes' Law using the result of the experiment, and then add on top of that your prior belief about the likely bias. Your subjective belief about the parameter's true value is then the experimental result plus this bias. Since the bias is unknown, that extra term is basically white noise, i.e. uninformative, and hence the new subjective belief is a random variable distributed around the biased estimate.

That's just a guess, but I think it's right.


"Philosophical question for econometricians: if an estimator is biased, but you know it is biased, even if you don't know how big that bias is, is your personal subjective estimate still biased?"

It seems to me that the answer to this question has to be yes, without additional assumptions about the behavior of the estimator and the availability of data. That said, even if the estimator is unbiased, its estimate is likely to be incorrect for a low number of observations. For example, the estimate of the mean is a random variable from your perspective: what's the probability that you hit on exactly the correct value?

Even without selection bias, you might not know how much disagreement there really is.

Actually I think that you already blogged about the answer to this question (a blogpost that I found very interesting): http://worthwhile.typepad.com/worthwhile_canadian_initi/2011/12/why-reality-is-skewed-and-so-newspapers-are-biased-towards-bad-news.html

Posting about something that you already agree with has less informational value to your readers, especially if you share those readers with the people you comment on. The solution is to do what Yglesias and other bloggers (even Tyler Cowen) do: create a "links" post from time to time where you point your readers to posts you agree with, even if you have nothing else to add. Actually, I think I was made aware of your blog in a similar way: I was linked here two or three times, and I found very high-quality content with very high-quality comments that I enjoyed very much, so I became a regular reader.

But I think you still need to be original and have your own content so that you attract new readers. This was one reason why I reduced my visits to Brad DeLong's blog: most of the time, for topics I was interested in, he was just reposting something somebody else had written a few days ago.

"(Philosophical question for econometricians: if an estimator is biased, but you know it is biased, even if you don't know how big that bias is, is your personal subjective estimate still biased?)"

Yes, but not for the reason you think: if you think your estimator is biased but do not know how big that bias is, it's because you have no clue what the underlying distribution is, and therefore you cannot do something like bootstrap the bias. Therefore you have no clue, and your estimate is just a fancy guess. Even if the mean is unbiased, the standard error of your guess is astronomical.

I think krugman is considerate in not linking to your posts, more readers = worse comments. Of which I'm a prime example. If it is not obvious, I'm denigrating the nyt commenters.

This is one of the beneficial things about conversation.

Often it takes a _lot_ of conversation to root out why or even _that_ people have deep underlying differences on significant matters -- it is hard even to root out an articulation of the deep but unspoken background understandings which ground whole conversations.

Extended conversations are often required to see that surface disagreements contain within them extended agreement -- and often also deeper unarticulated alternatives in perspective.

Thomas Kuhn mines some of this territory, but the best writer on the topic is Larry Wright.

Haven't heard of Larry Wright? Many in the business identify him as the most significant thinker on the topic of teleological explanation since Aristotle.

See, on the point of conversation and reflection, Larry Wright, "Argument & Deliberation: A Plea for Understanding".

I think there are two issues. 1) Minority views will get a disproportionate amount of blogtime. This seems like a good thing, because minority views are disproportionately important (every consensus view was a minority view at one time).

The bigger problem is 2) "Settled" issues get very little blogtime. How many blog posts have you read lately about, say, free trade? That's fine from an advancement-of-knowledge perspective, but a big problem from the influencing-policy perspective.

(I've never been good with stats, but I think the answer to your philosophical question is yes. Suppose the estimator has a bias of either exactly 1 or exactly 3, depending on some assumption in your model. So you adjust your subjective estimator by -2. But then you know your subjective estimator has a bias of -1 or +1. The more interesting fact is that you no longer know the sign of the bias.)
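To make the example concrete, here is a quick Python sketch (the function name, the true parameter of 10, and the noise model are all invented for illustration): nature fixes the bias at either 1 or 3, we subtract 2, and the adjusted estimator ends up off by -1 or +1 with the sign unknown.

```python
import random
import statistics

def average_estimate(true_param, bias, n_trials=100_000):
    """Average many draws of: true parameter + fixed bias + sampling noise."""
    draws = [true_param + bias + random.gauss(0, 1) for _ in range(n_trials)]
    return statistics.mean(draws)

true_param = 10.0
for bias in (1.0, 3.0):                # nature picks one; we never learn which
    raw = average_estimate(true_param, bias)
    adjusted = raw - 2.0               # our subjective correction
    print(f"bias={bias}: raw error {raw - true_param:+.2f}, "
          f"adjusted error {adjusted - true_param:+.2f}")
```

With 100,000 trials the sampling noise averages out, so the raw error lands near the bias nature picked, and the adjusted error lands near -1 in one world and +1 in the other.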

I just realised: This post is (only very indirectly) about the Euro. I have written very little about the Euro in recent months. This isn't because I think it's unimportant; I think it's possibly the most important story since the collapse of communism. It's not because I have nothing to say: I have lots to say, but nothing I haven't already said or that someone else isn't already saying.

The only thing I've posted about the Euro recently is when Paul Krugman (and Simon Johnson) either misspoke, or wasn't clear (because he definitely did not repeat the point in his next post surveying all the issues), and so I posted to say I disagreed (about permanent productivity differentials).

Ryan V.: "(I've never been good with stats, but I think the answer to your philosophical question is yes. Suppose the estimator has a bias of either exactly 1 or exactly 3, depending on some assumption in your model. So you adjust your subjective estimator by -2. But then you know your subjective estimator has a bias of -1 or +1. The more interesting fact is that you no longer know the sign of the bias.)"

A lovely clear example, to help me make my point, where I disagree with you (and all the other commenters!).

But you know that *all* estimates (as opposed to estimators) will be wrong. They always have a sampling variance around them, even if they were based on unbiased estimators. Your *new* estimator (after you have adjusted it by -2) is by assumption an unbiased estimator. The expectation of your new estimator equals the true parameter (or the other way around?), so it must be unbiased.

In the spirit of this post, let me point out where I disagree.

1) The Darwin Awards thing doesn't work. If the ECB dies and a host of independent central banks take its place, directed by arrogant gits, then it's a huge success in Darwinian terms. For mortal organisms it's all about reproduction.

2) Krugman has discussed whether the Euro is worth saving. He thinks it is, because its demise would be a huge setback for the project of European unification. Here, unusually, I disagree with him. The Euro is causing far more dissension than a breakup is likely to do. It's a nightmare from which the EU is struggling to awake.

Kevin: Damn! I think I agree with you.

But you forgot one really important point! Even if the Euro does force unification in the Eurozone, it would be a forced unification done in order to prevent a crisis, rather than a voluntary unification. A "shotgun wedding" as I once called it.


"Paul and I both want easier monetary policy for the ECB, but he thinks of that as a temporarily higher target inflation rate, and I think it would be better to have a temporarily higher NGDP growth rate leading to a level path for NGDP."

I feel like this is the typical argument that I see between market monetarists and new Keynesians. The fact that it's so common illustrates the selection bias perfectly. Since the ECB has made it pretty clear that it won't adopt either policy, the stakes in this debate are low. Yet we need to look for disagreements. I think that the wish to signal tribal loyalties also affects this, in both directions: we choose needless disagreements just to signal our political orientation or the theoretical school we favor. On the other hand, the wish to appear reasonable and open-minded creates an incentive to find agreements with bloggers outside our tribe, at least once in a while.

Nick, This is off topic. Re your question on the Stumbling and Mumbling blog as to whether the TBTF subsidy of British banks is £120bn a year, I just looked. The relevant Bank of England paper does not say anything about “per year” or any similar phrase, which is a bit sloppy. But presumably they mean “per year”.

I’m not an econometrician, but I’ll take a shot at the question, “is your personal subjective estimate still biased?” My answer is ‘no’, provided my estimate (would it not be redundant to insert ‘subjective’ here?) of the magnitude and direction of the estimator’s bias is not itself biased.

If you knew the bias then you could produce an unbiased estimate. So knowing bias exists, the best you can do is inflate your standard error to account for this additional source of error. Ah, but how much?

There's a new (to me anyway) wrinkle in the euro problem. Near 0% nominal rates and money market funds are incompatible, and the banks are closing European money market funds to new money. All you can do is withdraw.

At this point the funds will have to make the investor carry all the risk, even if returns are negative. Clearly, the banks don't feel there is any market for this, or at least don't want to be the first to make the move.

This was precipitated by the ECB reducing its target rate to 0.75% (the discount rate is 0.25%). At something like these rates, deposits will stop being profitable unless they offer negative rates.

Such changes are bound to be disruptive in the short term, since borrowers from the funds will have to find other sources (and good luck with that).

5-year US TIPS now pay -1.080% and the regular 5-year notes pay 0.875%. That means people will pay a spread of 1.955% to protect their principal in real terms. So real 5-year interest rates are about -1.1% and falling.
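The arithmetic behind those yields (the figures are the commenter's; the variable names are just for illustration):

```python
# Back-of-envelope check of the quoted 5-year yields.
tips_real_yield = -1.080   # % per year on 5-year TIPS (a real yield)
nominal_yield = 0.875      # % per year on the regular 5-year note

# The spread between the nominal note and TIPS is what buyers give up
# to protect their principal; it is also the market-implied ("breakeven")
# inflation rate over the five years.
breakeven = nominal_yield - tips_real_yield
print(f"breakeven spread = {breakeven:.3f}%")  # 1.955%
```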

This is all unknown territory, so if you want a monetary topic with euro aspects, you've got one.

How low can real interest rates go, and what are the consequences for the US and Europe of -2% or less real rates (measured against the dollar?) and near 0% or negative nominal rates.

Can nominal rates on bank deposits go negative? Will people withdraw their money, if they lose money by depositing?

What effect is this going to have on European banks, which are effectively insolvent?

Isn't this likely to increase the demand for safe assets, and lower the rates even more on the 5 year Tips?

"There is a big selection bias in what we blog about. People reading blog posts who are unaware of this selection bias will get a very distorted view that considerably exaggerates how much disagreement there is between economists.

Plus, I am much more likely to write a post if I disagree with someone like Paul Krugman who has lots of readers than to write a post if I disagree with some other blogger who has fewer readers. This means that the average reader of blogs will observe more disagreement between bloggers than the average blogger observes. (I think I said that right). This amplifies the bias towards reading about disagreement.

All of this is also true for comments on blogs."

This week I read a comment thread at The Money Illusion which was dominated by a furious back and forth between Austrians and Chartalists. If you read that thread you might think that those are the only kind of economics enthusiasts who visit Scott Sumner's blog, but that is of course completely false. Nevertheless it was hysterical, as there was absolutely no common ground there except the absolute certainty that Scott is totally wrong. And to make it even more amusing, the title of the post is "The Ironies Keep Piling Up."

Sorta OT
Nick, any reflections on monetary policy, particularly points 4 and 5?

Yay! A post where my novice status and lack of a blog of my own renders my opinion potentially useful. Sweet! I think many people who read blogs understand that silence on the bulk of an issue while bringing up specific points of contention translates roughly to tacit agreement on the undisputed points. It's almost like we see certain points as being uncontested, but the contested bits are the most interesting. As to missing posts, where the writer says nothing because it is being said elsewhere, linking clarifies one's position beautifully. I know what Mark Thoma, PK, Tim Duy, and Brad DeLong all think because they link to what they agree with, sometimes taking issue with specific points. (Thoma is particularly good at this.)

Btw, you get mad linkage -- or at least hard-core blog-roll recognition. That's how I found this blog, and it's a great thing for me because you explain things really well. I often get that "oh sweet lords of kobol, I finally understand!" after reading your take on the issue. Many bloggers are neither as patient nor as detailed.

"If an estimator is biased, but you know it is biased, even if you don't know how big that bias is, is your personal subjective estimate still biased?"

Let's call my estimator B.
Let's call the thing it is supposed to estimate b.
Let's call B unbiased if and only if E(B) = b

You're telling me that I know E(B) <> b. Perhaps I even know whether it is > or <. But I certainly don't know Bias = E(B) - b.

Can I adjust E(B) somehow to make it equal to b? The only way to do so is to subtract "Bias" from it. But I don't know what amount that is. That means I certainly can't show that E(B-with-my-bias-correction) = b. By my original definition, I can't say this new estimator is unbiased.

If we want to square this circle, we'd have to find a way to simultaneously think that (a) we don't know how big the bias is, and (b) we can nonetheless precisely compensate our estimator for its size.

Simon: do you have a subjective estimate of the expectation of the size of the bias? (How can you not have a subjective estimate of the size of the bias? Wouldn't you be willing to take a bet on the sign and magnitude of the bias, if the odds looked favourable enough?) If so, can't you just add (or subtract) it from the original estimate? If so, isn't your new estimate, by construction, subjectively unbiased?
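A minimal simulation of Nick's argument (the uniform prior on the bias, and every number here, is invented for illustration): if the bias is unknown but you hold a subjective prior over it, subtracting your prior's mean makes the corrected estimate unbiased *with respect to that prior*, even though in any one world it stays biased.

```python
import random
import statistics

def average_subjective_error(true_param=10.0, n_worlds=50_000):
    """Average the error of the bias-corrected estimate across many
    'worlds', where each world draws the bias from my subjective prior."""
    prior_mean_bias = 2.0                       # E[bias] under a uniform[0,4] prior
    errors = []
    for _ in range(n_worlds):
        bias = random.uniform(0.0, 4.0)         # nature draws the (unknown) bias
        estimate = true_param + bias + random.gauss(0, 1)
        corrected = estimate - prior_mean_bias  # my subjective correction
        errors.append(corrected - true_param)
    return statistics.mean(errors)

print(f"average error over the prior = {average_subjective_error():+.3f}")  # near 0
```

The residual error in any single world is bias - 2, which is not zero; it only averages to zero over the prior, which is exactly the "subjectively unbiased" sense at issue.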

Ralph: thanks. That is very sloppy by the BoE. They might also have meant the Present Value of the subsidy (in which case it would have the units "pounds", rather than "pounds per year"). Obviously, it matters. (I expect I should read how the BoE constructed their estimate, then figure out what they should have meant by it).

Brad: I think I see. Your "highlights" approach is one way to solve this problem. Not sure if that works for all blogs though. Different blogs/bloggers just have different "characters".

Kms: "I think many people who read blogs understand that silence on the bulk of an issue while bringing up specific points of contention with translates roughly to tacit agreement on the undisputed points."

I think that's usually the correct inference to make. But not always. Sometimes we don't say anything about points we disagree with because we don't have the time to cover everything. But even if correct, it only works if the reader has read both posts.

You could place a prior on the bias reflecting your beliefs, and in case your beliefs are right, the results should be OK.

Nick: I think we are trying to argue that the size of the bias is "unknown" (with respect to some measure that we have not really defined), but that we surely can estimate it (with respect to some other measure that we have not really defined, other than to call it "subjective") well enough to correct for it.

I think that also means that our estimate could still be "biased" after we do the "subjective bias-correction."

"How can you not have a subjective estimate of the size of the bias?"
Not all distributions have well-defined first moments (i.e. the expectation might not exist.) But that's more of a technicality. People who understand more about this kind of issue than I do usually invoke "Knightian Uncertainty" at this point in the conversation, or start to talk about "higher-order" uncertainty (uncertainty about the kind of uncertainty that you face.)

The betting/handicapping argument deserves a second short reply.....

Simon: "Not all distributions have well-defined first moments (i.e. the expectation might not exist.)"

Hmmm. I never knew that. But thinking about it, it seems you are right. Is this an example?: there's a 50% chance x=0, a 25% chance x=1, a 12.5% chance x=2, a 6.25% chance x=4, etc.
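It is indeed an example: each support point contributes 1/4 to the expectation, so the sum diverges. A short sketch of what that does to the sample mean (the sampler below encodes the distribution via coin flips; all names are illustrative):

```python
import random

# P(x=0)=1/2, P(x=1)=1/4, P(x=2)=1/8, P(x=4)=1/16, ...
# Flip a fair coin until the first head: one flip -> 0, and k >= 2
# flips (probability 2**-k) -> the value 2**(k-2).
def draw():
    flips = 1
    while random.random() >= 0.5:
        flips += 1
    return 0 if flips == 1 else 2 ** (flips - 2)

random.seed(1)
total, n = 0, 0
for checkpoint in (10**2, 10**4, 10**6):
    while n < checkpoint:
        total += draw()
        n += 1
    # The running mean drifts upward instead of converging,
    # because the expectation is infinite.
    print(f"n={n}: running mean = {total / n:.2f}")
```

Roughly speaking, a sample of size n only "sees" support points up to about 2**log2(n), so the running mean grows like (log n)/4 and never settles.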

"The betting/handicapping argument deserves a second short reply....."

I thought you could always infer someone's subjective probabilities by observing their choices under uncertainty? ;-)

"Wouldn't you be willing to take a bet on the sign and magnitude of the bias, if the odds looked favourable enough?"

Good question, but the answer might surprise you.

Let's be clear on the terms of our wager:
- we agree on how we measure the outcome (b).
- I then pick a threshold (B).
- You then get to pick which side of the wager you want. You have two choices:
1) If b >= B, you get $X. Otherwise, I get $X.
2) If b >= B, I get $X. Otherwise, you get $X.

Let's also be clear on my strategy in picking B. I think you're very smart: so smart, in fact, that you'll always choose between (1) and (2) optimally. Since I'll always get the short end of the stick, the best I can do is to play maxi-min: I want to maximize the minimum of those two choices. That means equalizing the utility I get from them. That means offering you a "fair" bet.

Will you agree that, as a rational and optimizing sort of guy, I may very well decline to play such a game with you? Even if you pay me a bit to play with you?

Simon: OK. I would instead get a large grant (because this will cost me money) and have a computer generate a lot of different bets at random. Offer them all to you. Then see which ones you take.


Yup, the St. Petersburg paradox is one example where the expected value does not exist. The first-passage time of a random walk is another.

As for inference about my preferences, it depends crucially (IIRC) on two assumptions:
1) I'm maximizing expected utility.
2) I'm risk neutral.
Does either of these matter?

Well, people in finance (my university claims me as a Prof. of Finance, so that is supposed to include me) try to draw a distinction between risk-neutral and subjective probability measures. They observe the outcomes of these kinds of bets all the time (from market prices of options or other derivative securities). Those prices can tell us how agents might value $X in different states of the world, but not the relative probabilities, unless we further assume agents are risk neutral.

But I was thinking more about the (near-universal) expected utility assumption. Some people think that's a bad way to proceed, esp. when we start to think about uncertainty about distributions. Let me give a quick example: that betting game I'd described.

I might have a subjective distribution for b. But suppose I think you know more about b than I do? (I said I thought you were pretty clever.) I might think that you're pretty likely to pick the end of the bet that makes me lose, regardless of where I set the breakeven. That makes my expected losses >0, regardless of how I choose B. If I think you actually know b, I should refuse to play the game unless you pay me at least $X in order to play.

Some argue that this kind of extreme uncertainty aversion (assume nature gives you the worst possible outcome) is the right way to think about uncertainty about distributions. That's debatable....but I'm just using it to argue that the expectations hypothesis is not universally accepted as the way to handle uncertainty about distributions.

Simon: Hmmm. I *did once* know about the St Petersburg Paradox. I must have forgotten. Age again.

Speaking of failing memory, I'm trying to remember the Savage-von Wosit axioms(?), which are supposed to be able to prove you can infer someone's subjective expectations even if they are risk-averse (provided they max E(U)). I think it involves comparing two bets, one with a known probability and one with an unknown, and asking the person to choose between two (or more) bets. If he's indifferent between winning $100 on a coin toss that comes up heads, and winning $100 on event x, then he must expect that event x has a 50% probability.

Nick: I don't know that axiom (but my ignorance of such things is mammoth, so that doesn't mean much.) The example you mention makes intuitive sense, but the more I thought about it, the more skeptical I got. Let me see if I can give you an example.

Let's keep the $100 coin toss. We know the probability of winning is 50%. We also know that the probability of winning the toss is independent of anything else.

Suppose you're indifferent between that and $100 of unemployment insurance. We don't directly know the probability that you'll be unemployed, but we know that the insurance pays out when your marginal utility is higher than average.

If you're indifferent between the two offers, I deduce that your expected utility from the two offers is the same. But I know that the marginal utility on the UI payments is higher. Can't I conclude that your probability of unemployment must be less than 50%?
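Yes: indifference equates the expected utilities, and the higher marginal utility in the unemployed state pushes the implied probability below 50%. A toy numerical version (all numbers hypothetical):

```python
# You are indifferent between (A) $100 if a fair coin lands heads and
# (B) $100 of unemployment insurance. A dollar is worth more to you
# when unemployed, so indifference pins down P(unemployed) below 0.5.
mu_average = 1.0      # marginal utility of $1 in an average state
mu_unemployed = 1.6   # marginal utility of $1 when unemployed (higher)

# Indifference condition: 0.5 * mu_average * 100 == p * mu_unemployed * 100
p_unemployed = 0.5 * mu_average / mu_unemployed
print(f"implied P(unemployed) = {p_unemployed:.4f}")  # 0.3125, below 0.5
```

Reading off 50% from the indifference would overstate the true subjective probability whenever the payoff arrives in a high-marginal-utility state.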

Simon: I found this on the lesswrong blog, but it's not at all easy going.

I don't quite get your UI example. Aha! Yes I do. Let me do a similar example:

I offer you a choice between:

A: I give you $100 if the coin toss is heads (nothing if tails).

B: I give you $100 if your horse wins the race (nothing if it loses).

Suppose you are indifferent between A and B.

My first thought would be to conclude: "Simon must think his horse has a 50% probability of winning".

But your response is: "Not necessarily, because if my horse wins I will also win lots of other money too, so my MU will be lower, so I would like insurance against my horse losing, so it means my subjective probability is greater than 50% that my horse will win".

Hmmm. I wonder how the Savage axioms get around that? Even if we watch to see that you don't win any other prize money if your horse wins, you might have state-dependent utility: "If my horse wins, I will need cash to buy champagne to celebrate".


Good point.

Nick: we must be very careful when we devise a "random" experiment. Horse racing is not basically random, and race results are correlated: a strong horse-rider pair will be more successful. As Damon Runyon once said, "The Bible says the race is not to the swift and the battle to the strong but to those who put their faith in the Lord, though wise money will not bet that way," IIRC.
The best horse-rider team will win, which is the reason my career on my university racing team (Ken Kesey has done nearly everything) is today totally ignored by the masses. You can restore randomness (and fun in the stands) with systems like the claiming race. An owner pays to enter the race (that's the claim) but must sell his horse to anyone for the claim amount. It keeps everyone honest: enter a strong horse in a weak field and you will win the purse but lose the horse; enter it in a strong field and you keep the horse while losing the purse. That system equalizes chances and produces a kind of randomness.

This is not to make a point of pedantry but to remind everyone to be careful about experiment design and assumptions.
And the former physicist in me still thinks that the only truly random events are nuclear disintegrations...
