I think you may have it backwards. Doesn't frequentist statistics only test whether the null hypothesis can be rejected as false, making no statement about the actual truth of the model? Whereas a Bayesian would be prepared to make a claim, ex post, about the truth of his statistical estimate?

Ha... I just realized that this isn't your dictum, but George Box's :) But still, the question remains...

The null hypothesis is generally presented as 'The data were generated according to the following process.' We either reject it or not. Since we know with probability one that the data were not generated by that process, what's the point of testing the null? And if the alternative isn't well-posed, what's the point of rejecting it?
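
A quick way to see the force of this point: take a null that is only barely false, say a true mean of 0.01 instead of the hypothesised 0, and watch the rejection rate of a standard two-sided z-test climb towards one as the sample grows. The sketch below is a hypothetical illustration, not anything from the thread; since the sampling distribution of the mean is known here, it simulates the sample means directly.

```python
import numpy as np

rng = np.random.default_rng(0)

def rejection_rate(n, true_mean=0.01, trials=2000):
    """Fraction of two-sided z-tests (alpha = 0.05) rejecting H0: mean = 0.

    The data are Normal(true_mean, 1), so the null is false -- but only just.
    With known sd = 1 the sample mean of n draws is Normal(true_mean, 1/sqrt(n)),
    so we simulate the sample means directly instead of the raw data.
    """
    xbar = rng.normal(true_mean, 1.0 / np.sqrt(n), size=trials)
    z = xbar * np.sqrt(n)
    return float(np.mean(np.abs(z) > 1.96))

for n in (100, 10_000, 1_000_000):
    # rejection rate rises from roughly alpha towards 1 as n grows
    print(f"n = {n:>9,}: rejection rate ~ {rejection_rate(n):.2f}")
```

At small n the test rejects at about the nominal 5% rate; at large n it rejects essentially always, even though the null is wrong by a practically negligible amount.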

A Bayesian will assign a *probability* to a hypothesis, and only if there's a well-specified alternative. If only one model can be used he'd choose the one that minimises expected posterior loss. But first, he'd ask why he has to choose only one in the first place.
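
To make the contrast concrete, here is a minimal sketch of what "assigning a probability to a hypothesis" looks like in the simplest case: two point hypotheses about a Normal mean, equal prior odds, and Bayes' rule. The data and hypothesis values are invented purely for illustration.

```python
import math

def log_lik(data, mean, sd=1.0):
    """Log-likelihood of the data under a Normal(mean, sd) model."""
    return sum(-0.5 * math.log(2 * math.pi * sd**2)
               - (x - mean) ** 2 / (2 * sd**2) for x in data)

def posterior_probs(data, means, prior=None):
    """Posterior probability of each point hypothesis 'mean = m'.

    For point hypotheses the marginal likelihood is just the likelihood,
    so Bayes' rule reduces to normalising prior * likelihood.
    """
    if prior is None:
        prior = [1.0 / len(means)] * len(means)
    logs = [math.log(p) + log_lik(data, m) for p, m in zip(prior, means)]
    mx = max(logs)  # subtract the max for numerical stability
    ws = [math.exp(l - mx) for l in logs]
    total = sum(ws)
    return [w / total for w in ws]

data = [0.9, 1.3, 0.7, 1.1, 0.8]  # toy data, sitting closer to mean 1
print(posterior_probs(data, means=[0.0, 1.0]))
```

Note that the output is a probability distribution over the candidate hypotheses, not a reject/fail-to-reject verdict, and it only makes sense relative to the alternatives actually on the table.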

I see your argument now... I'm grasping at straws here, but while we know the model is false with probability one (assuming the estimate can take any value on the real line, right?), we still have to test: if we can reject the null, we can rule out a possible model, which is something we don't know a priori. Essentially it's a process of ruling out, not ruling "in", but it still must be undertaken.

There's little point in ruling out a model that you already know to be false if you don't have a better alternative (which is also false, but perhaps a better approximation) at hand.

>>If we know a priori that the hypothesis is false, what's the point of testing to see if it is true?

We don't know it's false.
