
Comments


Have you read Pickup and Johnston on this? I think they've done a fairly good job of documenting the various house biases.

If I were looking for causes, I would look to two: question phrasing and ordering, and the field house under contract. I think the former matters quite substantially for which considerations are primed before respondents are asked their vote choice. And I think the latter varies dramatically in both quality and sampling technique.

Most of it seems to be question order and whether they prompt the respondent with the party name.

One firm, in its polling through the year, asks whether the country is on the right track or the wrong track before asking about voting preference. This creates a difficulty: some people who answer "right track" then express support for the incumbent when they would not otherwise have done so.

Nanos does not prompt party names in his poll. Others prompt the main four, and now sometimes five, on a rotating basis (prompting for the Greens creates a polling bump for them that doesn't show up at the ballot box).

Some firms now also use a hybrid online-and-phone poll, and these seem to produce the furthest outliers.

Stephen,

A good reference check for Nanos (when it was SES) is the 2006 election. Their pre-election poll nailed the popular vote. Check it out here:

http://www.sfu.ca/~aheard/elections/results.html#2004

That site has some excellent historical election data, too. One takeaway is that the Conservatives are basically at their historical level of between 35% and 40% of the popular vote (even in the PC and Reform days, the two parties' combined vote was in that range). What is unusual is the relative collapse of the Liberal party.

Have you read Pickup and Johnston on this? I think they've done a fairly good job of documenting the various house biases.

Peter, I don't know this reference - could you give me some more details about how to track it down?

And Marc, yes, I was aware that SES did well last time - do you know of an explanation for *why* they did well? Maybe they just got lucky.

A recent EconTalk podcast with Doug Rivers covered how different methodologies can lead to different results, largely because of how the sample is weighted to be representative and how "representative" is defined. I am not sure how these issues apply to the Canadian polling companies, but it was very informative.

http://www.econtalk.org/archives/2008/07/rivers_on_polli.html
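In case it helps, here is a minimal sketch of what "weighting to be representative" can look like in practice: plain post-stratification in Python, with invented age-group targets, not any particular firm's actual procedure. Each respondent gets a weight equal to the population share of their demographic cell divided by that cell's share of the sample, so over- and under-sampled groups are scaled back toward census proportions before vote shares are computed.

from collections import Counter

# Hypothetical census shares for age groups (invented for illustration)
population_shares = {"18-34": 0.30, "35-54": 0.40, "55+": 0.30}

# A toy sample: each respondent is (age_group, vote_intention)
sample = [
    ("18-34", "Liberal"), ("35-54", "Conservative"), ("55+", "Conservative"),
    ("55+", "NDP"), ("55+", "Liberal"), ("35-54", "Liberal"),
    ("55+", "Conservative"), ("35-54", "NDP"), ("55+", "Conservative"),
    ("18-34", "NDP"),
]

n = len(sample)
sample_shares = {g: c / n for g, c in Counter(g for g, _ in sample).items()}

# Post-stratification weight: population share / sample share of the respondent's cell
weights = [population_shares[g] / sample_shares[g] for g, _ in sample]

# Weighted vote shares
totals = Counter()
for (g, vote), w in zip(sample, weights):
    totals[vote] += w
print({party: round(t / sum(weights), 3) for party, t in totals.items()})

Two firms that define their cells differently (age only, versus age by region by education, say) can get different numbers out of the same raw interviews, which is one way house effects arise.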

Another variable is whether the questions were part of a dedicated political survey or part of a regular "omnibus" survey. If the latter, there is no way to know which questions preceded the political ones. In an omnibus survey, multiple clients buy a few questions each, which are assembled into a single questionnaire. As a respondent in an omnibus study you can be asked about your favourite blue jeans, then how often you ride the bus, and then "who would you vote for today?"

I think these two cover it:

Pickup, Mark and Richard Johnston. 2007. “Campaign Trial Heats As Election Forecasts: Measurement Error and Bias in 2004 Presidential Campaign Polls.” Presented to the International Symposium on Forecasting, New York City, 2007.

Pickup, Mark and Richard Johnston. 2007. “Campaign Trial Heats as Election Forecasts: Evidence from the 2004 and 2006 Canadian Elections.” Electoral Studies 26 (2): 460-476.

And Pickup has error and bias corrected aggregations here: http://pollob.politics.ox.ac.uk/
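The basic idea behind a house-effect correction can be sketched very simply (a toy Python illustration with invented numbers, not Pickup's actual model, which is far more sophisticated): estimate each firm's average deviation from the all-firm average over a common period, then remove that typical lean from the firm's readings before comparing or aggregating them.

from statistics import mean

# Toy data over a common period: (firm, Conservative share); all numbers invented.
history = [
    ("FirmA", 37.0), ("FirmA", 38.5), ("FirmA", 37.5),
    ("FirmB", 34.0), ("FirmB", 33.5), ("FirmB", 34.5),
    ("FirmC", 36.0), ("FirmC", 36.5),
]

overall = mean(share for _, share in history)

# Each firm's "house effect": its average deviation from the all-firm average
firms = {firm for firm, _ in history}
house_effect = {f: mean(s for g, s in history if g == f) - overall for f in firms}

# A new reading from FirmB: strip out FirmB's typical lean before
# averaging it in with everyone else's numbers.
new_firm, new_share = "FirmB", 33.0
corrected = new_share - house_effect[new_firm]
print("house effects:", {f: round(e, 2) for f, e in house_effect.items()})
print(f"{new_firm} raw {new_share} -> corrected {corrected:.1f}")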

As for SES, Nanos has been making much of being right on the money last time, but there is nothing in statistics that says he should have been anywhere but inside the margin of error.
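To put a rough number on that: with a national sample of around 1,000 respondents (a typical size, assumed here for illustration), the 95% margin of error on a party sitting near 36% is about plus or minus 3 points under the textbook simple-random-sampling formula, so any firm landing anywhere in that 6-point band has done nothing statistically remarkable. Hitting the result exactly is consistent with skill, but also with luck.

import math

n = 1000   # assumed national sample size
p = 0.36   # a party polling around 36%

# 95% margin of error for a proportion under simple random sampling
# (real survey designs add design effects on top of this)
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(f"margin of error: +/- {100 * moe:.1f} points")   # about +/- 3.0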

Thanks so much!

