18 October 2011

Handling the Undecideds #nlpoli

Opinion polls conducted in Newfoundland and Labrador that ask about party choice measure the opinion of the entire population of eligible voters.

As such, discarding the undecided responses (anything other than a party choice) or reallocating the undecideds according to some pre-determined policy tends to distort the poll results. It doesn’t matter whether the question is about a theoretical election “tomorrow” or one that will actually occur two or three weeks in the future.

What to do with the undecideds is a contentious issue among pollsters themselves.  The technique CRA or MQO used in their election poll reports has the effect of allocating the self-identified undecideds according to the same breakdown as those already decided.
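That approach can be sketched with hypothetical numbers. This is an assumption about how the reported figures behave, not CRA's or MQO's actual method:

```python
# Hypothetical sketch: hand out the undecideds in proportion to each
# party's decided support (an assumption, not CRA's or MQO's actual code).
def allocate_undecideds(decided, undecided_pct):
    """decided: party -> % of all respondents; undecided_pct: % undecided."""
    total = sum(decided.values())
    return {p: v + undecided_pct * (v / total) for p, v in decided.items()}

# Illustrative figures only: 20% undecided, decideds split 40/30/10.
result = allocate_undecideds({"A": 40.0, "B": 30.0, "C": 10.0}, 20.0)
# A: 50.0, B: 37.5, C: 12.5 -- the same split as the decideds, scaled up.
```

The net effect is identical to simply dropping the undecideds and renormalizing the decided shares to 100 per cent.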

But the experts at pollster.com, for example, will insist that late undecideds tend to break for the challenger.  That’s the opposite of what MQO and CRA do.

Other pollsters handle the undecideds differently.  As Mark Blumenthal noted in a 2004 post, the Pew Research Center and Gallup sometimes allocate undecideds evenly among the other choices.

What’s interesting to note, though, is that the undecideds in the American poll results tend to be less than 10% of respondents, while the two decided choices are each four or five times larger.  Reallocating or discarding that small a percentage does not skew the picture of public opinion all that greatly.
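The point can be sketched with illustrative numbers (none of these come from an actual poll): dropping the undecideds and renormalizing barely moves a result when undecideds are under 10%, but moves it a great deal at higher levels.

```python
# Drop the undecideds and renormalize the decided shares to 100%.
# All figures are illustrative, not taken from any actual poll.
def rescale(decided):
    total = sum(decided.values())
    return {p: 100.0 * v / total for p, v in decided.items()}

# 8% undecided: a 48-point party rises only to about 52.2.
small = rescale({"A": 48.0, "B": 44.0})
# 26% undecided: a 44-point party jumps to about 59.5.
large = rescale({"A": 44.0, "B": 18.0, "C": 12.0})
```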

In the recent provincial general election, MQO reported the lowest level of undecideds in its polling at 18% for its second poll and 20% for the first one. But at least one of these was not the traditional random sample pollsters historically use.

The “undecided” category appears to capture those who said they were undecided, those who said they did not intend to vote and those who refused to answer the question.

In the telephone polls during the election that apparently used random samples (Telelink and CRA), the lowest reported undecided/will-not-vote/refused figure was 26%, for CRA in both its poll for the Telegram and its August quarterly omnibus.

Telelink hit 42%. 

Environics reported 30% undecided using its online panel survey method.

With undecideds at those levels, reallocating them can significantly distort the perception of what opinion the public actually holds.

This is no small point when the polls are apparently intended to describe the opinions held by all adults aged 18 and over, not just those who may, from election to election, decide to go to the polls.

In the most recent general election, the percentage of eligible voters who didn’t go to the polls was larger than the percentage that supported the winning party.
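The arithmetic behind that comparison is straightforward. The figures below are illustrative stand-ins, not the official results:

```python
# Illustrative arithmetic only; turnout and vote share are hypothetical
# stand-ins, not the official 2011 figures.
turnout = 0.58                 # fraction of eligible voters who cast ballots
winner_share_of_votes = 0.56   # winning party's share of votes cast

winner_share_of_eligible = turnout * winner_share_of_votes  # 0.3248
non_voter_share = 1.0 - turnout                             # 0.42
# Non-voters (42% of eligible voters) outnumber the winning party's
# supporters (about 32% of eligible voters).
```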

You can see the distorting effect when you compare the poll results to the actual vote as a share of eligible voters. All the results cited below come from polling conducted between September 30 and October 3.

The distortion can then lead people to draw some erroneous conclusions.  Take, for argument’s sake, the Telegram editorial of October 17:

Cheers: to more fun with numbers. The provincial Liberal party has made much hay with claims that by landing six seats and staying as the provincial opposition, it was somehow proving pre-election critics wrong. (The claims, of course, ignore the fact the party had the lowest share of the popular vote in its history.)

Numbers can be lots of fun, if you understand what they mean.  Election results in a first-past-the-post system depend very much on which party can get its voters to the polls in each district.  Even with the overwhelming voter support suggested by the distorted presentation of poll results that discards the undecided/will-not-vote responses, the Tories should have easily swept every district.

But they didn’t.

They lost seats.

The Tories lost seats in their heartland of St. John’s and came close to losing a bunch of others.

When voter turn-out drops, as it did in the election just finished, the actual share of the eligible vote becomes more important.  The Telegram editorial ignores the fact that turn-out in the most recent election hit a historic low.  If voter choices had actually looked like the numbers their pollster reported, the whole election would have turned out differently!

But, for some reason, their pollster missed a huge chunk of public opinion.  He wasn’t alone.  Only NTV/Telelink hit the number, even if it wound up being mislabelled.

In the end:

a poll released near to an election with a relatively high number of undecided voters is an indication that the questionnaire was not designed properly, and/or that the screening of voters was not conducted with enough rigor. Well-designed screening questions and well-written “who will you vote for questions” should, as a natural byproduct, produce lower undecideds in a final pre-election poll, all other things being equal. The solution is not, as some have recommended, for the pollster to make up numbers on election eve for the purpose of eliminating the undecideds, but rather to craft the survey instrument in such a way that it naturally results in fewer and fewer undecideds as the election draws near.

- srbp -


  • Monday:  “Politics, Polls and the News Media”

  • Tuesday:  “PPM:  The Polls in the 2011 Election”

  • Wednesday:  “PPM:  The Polls and the Local Media”

  • Thursday:  “PPM:  Controversy, Accountability and Disclosure”