15 October 2011

An excess of chutzpah: pollster attacks colleagues over methods, accuracy #nlpoli

Corporate Research Associates president Don Mills is criticising his professional colleagues for their use of online surveys to conduct opinion polling.

CRA uses telephone surveys. In two election polls released in September, MQO reportedly used a combination of telephone and online surveys to prepare its results. Environics used an online method and Telelink used telephone surveys.

According to the Telegram:

An in-depth poll by CRA conducted for The Telegram came closest to election night results, Mills said.

The Telegram noted:

CRA, using a telephone poll based on a sample size of 800, predicted 59.5 per cent support for the PCs among decided voters, 24.7 per cent for the NDP and 15.8 per cent for the Liberals.

The actual election results were 56.1 per cent for the PCs; 24.6 for the NDP; and 19.1 per cent for the Liberals — a total difference of 6.8 per cent from the poll prediction.
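For reference, the Telegram's "total difference" figure appears to be the sum of the absolute per-party gaps between the decided-voter predictions and the official results. A minimal sketch, using only the figures quoted above:

```python
# CRA's predictions among decided voters (per the Telegram) vs actual results, in per cent
poll   = {"PC": 59.5, "NDP": 24.7, "LIB": 15.8}
actual = {"PC": 56.1, "NDP": 24.6, "LIB": 19.1}

# "Total difference" = sum of absolute per-party gaps: 3.4 + 0.1 + 3.3
total_diff = sum(abs(poll[p] - actual[p]) for p in poll)
print(round(total_diff, 1))  # 6.8
```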

The only problem is that claim isn’t true.

Like all of the opinion polls released during the campaign, Mills and CRA polled eligible voters.  They did not report screening for likely voters, nor did they indicate any method for determining which of the respondents they surveyed would actually vote.

By surveying all eligible voters, Mills and CRA should have reported all their responses, including those who indicated they would not vote or had no opinion.

That’s what the Telegram did in its front-page story on Thursday.  The numbers cited in the Telegram on Saturday disregard some responses and therefore present a distorted and misleading impression of what CRA’s polling found.
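The effect of dropping those responses is easy to see with a little arithmetic. Renormalizing the all-respondent figures from the Telegram’s Thursday front-page story (PC 44, Liberal 12, NDP 18, with 26 undecided or not voting) over decided respondents only comes out close to the decided-voter numbers the Telegram cited on Saturday; the small gaps reflect rounding in the published all-respondent figures. A sketch:

```python
# All-respondent shares from the Telegram's front-page story (per cent);
# undecided/will-not-vote respondents (26%) are excluded below
eligible = {"PC": 44, "LIB": 12, "NDP": 18}

# Renormalize over decided respondents only, as "decided voter" reporting does
decided_total = sum(eligible.values())  # 74
decided = {p: round(100 * v / decided_total, 1) for p, v in eligible.items()}
print(decided)  # {'PC': 59.5, 'LIB': 16.2, 'NDP': 24.3} -- vs 59.5/15.8/24.7 as published
```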

Here’s what the Telly reported compared to the actual reported vote result on Tuesday as a share of eligible vote:

Party               Telegram (Sept 30 - Oct 3)   Actual Vote (Oct 11)   CRA Apparent Error*
PC                  44                           32                     +12
LIB                 12                           11                     +1
NDP                 18                           14                     +4
UND/Will not vote   26                           42                     -16

Note:  The figures do not add to 100% everywhere due to an apparent minor rounding or typographical error in the results as reported by the Telegram.  SRBP adjusted the UND by one percentage point from what the Telegram reported.  When SRBP contacted the Telegram for more information on the poll, the newspaper management refused to discuss the results at all beyond what was in the published stories.

Even allowing for that one percentage point, the published CRA results are significantly different from the actual result.

SRBP compared most of the polls in a pre-election post.

Compared to CRA, MQO** was off by about the same proportions using its hybrid method. CRA was off by the same country mile in 2007.

Environics was closer to the final actual result than either of those two.

Of all surveys released during the campaign, Telelink came closest to the actual result, just as they did in 2007.

SRBP will have more on the polls in the recent general election in a series starting on Monday.

We’ll look at:

  • the polls themselves, what they reported and how they reported it,
  • a comparison of the poll findings with the actual results,
  • the comments by Liberal leader Kevin Aylward,
  • poll reporting standards in the news media and in the polling industry, and
  • the way the local media used polls in the past two elections.

- srbp -

Related: 

  • “Comparing polls”, by Horizon Research, a New Zealand opinion research firm that uses online polling.  Horizon questions the validity of discounting 30% or more of responses when reporting survey results.
  • “Two wrongs and you get a news story” discusses the way CRA’s reporting of decideds produces a misleading impression, in this case from 2009, of an increase in support for one party when support actually declined.
  • CRA has been known to engage in controversial practices, like releasing a poll just before a by-election vote, significantly ahead of its usual schedule for releasing its omnibus for that quarter.

*  Apparent error refers to the discrepancy between the CRA poll result reported by the Telegram on the front page of its Thursday edition compared to the actual vote result. 

All polls contain error. Researchers strive to reduce known and possible sources of error. 

** Edit to correct the comparison.

6 comments:

Peter said...

All very interesting, but mostly bull. To compare undecided and unwilling voters before a vote with those who simply didn't show up is absolutely bogus. They are apples and oranges. Comparing decided voters to voters who actually did show up is the fairest and most honest comparison. I don't know why you would distort the facts in this way, unless it's just sour grapes over not being able to convince a media outlet to give up exclusive information to one particular blogger, particularly a partisan one.

Edward Hollett said...

Well, Peter, absolutely none of what you just said is true.

That will become painfully obvious as the week progresses.

When it comes to bull, though, thanks for confirming your indisputable expertise.

Mark said...

I am always disappointed in the number of people who say they're going to vote undecided and then decide not to.

Peter said...

Mark: Not sure whose argument that ridicules. Ed is lumping all the undecideds in with the no-shows. The no-shows may include decideds, undecideds, and "won't vote"s. There are a lot of rings and fields here, and Ed seems to be cherry-picking comparisons just to establish his foregone conclusion.

Edward Hollett said...

I think it's just a joke, Peter.

Mark said...

Ceci n'est pas une pipe.