17 October 2013

CRA, Abacus, and the 2013 Nova Scotia General Election #nlpoli #nspoli

In the recent Nova Scotia General election, Corporate Research Associates and the Halifax Chronicle Herald teamed up to provide readers with a daily tracking poll.

CRA was quick off the mark after the election to issue a news release defending its own polling, complete with a screaming headline claiming that CRA polls had “nailed it”.

A closer look at the polling during the election and at the election results tells a different story.

The British Columbia Context

CRA puts its poll results in the context of the Alberta and British Columbia elections, where pollsters got the outcome completely wrong. In British Columbia, they picked the NDP to win big. The Liberals won a majority government, again.

In the British Columbia case, you can find plenty of commentary on how the public polling firms got the election wrong when they forecast a Liberal defeat.  The Globe and Mail called on Eric Grenier of threehundredeight.com to walk everyone through it.  Angus Reid went through the issue for macleans.ca.  The CBC and CTV also ran pieces offering different explanations of why the pollsters called the election result as badly as the infamous “Dewey Defeats Truman” fiasco in the 1948 American presidential election.

The Nova Scotia Result

Let’s do the easy part first. 

If all you wanted to know was what party would win the election and the percentage of votes the party got, CRA was right on the money. 

If you wanted the second- and third-place parties, CRA called that as well. CRA’s last pre-election poll didn’t get those numbers exactly right, but they were within CRA’s rather wide margin of error.
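
For a sense of how wide that margin can be, here is a minimal sketch of the standard margin-of-error calculation for a proportion, assuming simple random sampling. The sample size of 400 is purely illustrative; CRA’s actual sample size isn’t given here.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate margin of error for a proportion p from a simple
    random sample of size n, at 95 percent confidence (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative only: a party at 27 percent in a sample of 400.
moe = margin_of_error(0.27, 400)
print(f"+/- {moe * 100:.1f} points")  # roughly +/- 4.4 points
```

Smaller samples, and regional sub-samples especially, push that number up quickly.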

The first problem for CRA is that dead people could have called the Liberal win without polling anyone.  As in Newfoundland and Labrador during the past couple of general elections, figuring out the NDP would lose in Nova Scotia this time wasn’t much of a chore.

The second problem with CRA’s self-congratulatory release is that they weren’t the only polling firm to get it right.  Abacus Data released its final poll of the campaign after CRA released its last one, and Abacus came even closer.

Party         Elections NS    CRA (03 Oct)    Abacus (07 Oct)
Liberals      46              47              47
NDP           27              31              27
PC            26              20              27

That leads us to a third problem for CRA:  Abacus released much more of its polling research, and so could explain a great deal more of what was going on in the campaign than CRA apparently could.

Take a look at this “sub-regional” analysis:

Regionally, the NS Liberals continue to be strongest in the North and South Shore/Annapolis Valley while running neck and neck with the Tories on Cape Breton. In Halifax, the Liberals have a 12-point lead over the NDP.

While Abacus apparently didn’t make much of it, they did note that the Liberals had a sizeable lead over the NDP in metro Halifax.

Compare that to CRA’s claim in its news release that the NDP collapse in metro Halifax “was not predicted by anyone, including the loss of Dexter’s seat.”   Not predicted maybe, but Abacus clearly had some data that would support anyone who thought the NDP collapse was likely. 

And Abacus made it public.  CRA apparently didn’t have that information.  If they did, the firm apparently didn’t make use of it.  Likewise, it isn’t clear what CRA is talking about when it claims that its seat projections were right.  Did it make any on October 3?

Abacus also picked up the swing to the Tories that CRA didn’t get.  Part of the reason for that could be that Abacus polled for several days after CRA stopped, and so might have picked up some last-minute movement.

Missing the Non-Voters

The other likely reason for the difference is the way Abacus did its research and analysis.  They screened respondents for those Abacus considered likely to vote.  To do that, Abacus scored responses to a series of six questions to build an index of voting likelihood.  The higher the score, the more likely someone was to vote.  A rough sketch of that kind of screen appears below.
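
Here is a minimal sketch of how such an index might work. The real Abacus questions and scoring are not public here, so the six questions, the weights, and the cutoff below are all hypothetical:

```python
# Hypothetical likely-voter index: score answers to six screening
# questions and keep only respondents who clear a cutoff.
# These questions and weights are illustrative, not Abacus's own.

QUESTIONS = [
    "voted_in_last_election",   # 0 or 1
    "always_votes",             # 0 or 1
    "knows_polling_location",   # 0 or 1
    "interest_in_campaign",     # 0, 1, or 2
    "certain_to_vote",          # 0, 1, or 2
    "followed_the_debates",     # 0 or 1
]

def likelihood_score(respondent):
    """Sum the scored answers; a higher score means a likelier voter."""
    return sum(respondent.get(q, 0) for q in QUESTIONS)

def likely_voters(respondents, cutoff=5):
    """Keep only respondents whose index clears the cutoff."""
    return [r for r in respondents if likelihood_score(r) >= cutoff]
```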

Unless CRA has changed the way it does business, its approach to the party choice question has been to discard all responses that don’t pick a party and then deal only with “decided” voters, something like the sketch below.
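
Again, a hypothetical sketch rather than CRA’s actual procedure: drop everyone who doesn’t name a party and report shares of the remainder.

```python
from collections import Counter

def decided_voter_shares(respondents):
    """Report party choice as a share of 'decided' respondents only,
    after dropping undecided, refused, and no-answer responses."""
    decided = [r["party"] for r in respondents
               if r.get("party") not in (None, "undecided", "refused")]
    counts = Counter(decided)
    total = len(decided)
    return {party: count / total for party, count in counts.items()}
```

Notice that nothing in that calculation asks whether a “decided” respondent will actually cast a ballot.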

Who does CRA poll?

In the BC election, some pollsters made a pretty simple mistake in trying to call the election result: they presented opinion polls of the entire population rather than of likely voters.  The likely voters turned out for the Liberals.

CRA undoubtedly finds voters among its sample, but that’s by accident rather than design.  They almost certainly pick up people in their “decided” category who have never voted at all, people who vote only now and then, and people who have already decided not to vote in a particular election.  That will inevitably skew the party choice numbers when you compare them to actual results.

On top of that, their party choice question is a standard one:  “if there was an election tomorrow, for what party would you vote?”  The problem is that the question starts from the default assumption that the respondent will vote.  People have to reject the premise of the question in order to tell the interviewer that they won’t vote, and not everyone will do that.  That can add to the error.

Lastly, don’t forget that there’s a perception in our society that voting is the right thing to do. Not voting is wrong. As such, people who don’t vote may be inclined to tell a researcher a party choice even though their actual choice is not to vote at all.

CRA’s method seems to rely on luck to ensure that the number of people who won’t vote in a particular election is roughly the same as the number who don’t express a choice.  Since they don’t screen for likely voters, they can’t be sure.

Sniping with an Atomic Bomb

Some of the time, this method works. CRA’s polling in Nova Scotia was pretty much spot on when you compare their Liberal “decideds” numbers to the actual turnout.  Even if you deconstruct their results and look at them as a share of all respondents, CRA’s numbers come out about the same as the actual vote results taken as a share of all eligible voters.
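
The rebasing behind that comparison is simple arithmetic: a party’s share of decided respondents, times the decided fraction of the whole sample, gives its share of all respondents, which is the apples-to-apples figure against actual votes as a share of eligible voters. The numbers below are hypothetical:

```python
def share_of_all_respondents(decided_share, decided_fraction):
    """Convert a party's share of 'decided' respondents into its
    share of the whole sample."""
    return decided_share * decided_fraction

# Illustrative only: 47% among decideds, with 70% of the sample
# expressing a choice, works out to about 33% of all respondents.
print(f"{share_of_all_respondents(0.47, 0.70):.3f}")  # 0.329
```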

In other cases, such as Newfoundland and Labrador in 2007 and 2011, CRA just didn’t come close at all.  The table below is an SRBP comparison of the last polls issued by the companies that covered the 2011 general election.

The table shows the results for all the polls as a share of all respondents in each poll, and the actual vote result as a share of eligible voters.  In some cases, SRBP calculated the numbers in order to get an apples-to-apples comparison, because different polling firms reported their numbers in different ways.

The numbers for CRA in this case were the ones provided in a poll for the St. John’s Telegram, without adding in those who were undecided but leaned one way or another.  CRA captured that information in the poll.  Had SRBP included the leaners in this table, CRA’s number would have been even farther off than it is here.

 

[Table: CRA 2011]

CRA wasn’t alone in being wide of the mark in Newfoundland and Labrador in 2011, but they were off by far too much for a polling firm to feel satisfied they nailed anything.  The results for pretty well every firm looked like a bunch of people playing sniper with an atomic bomb: you don’t need to get close.

But enough of the numbers and the intimate details. The point on accuracy is made.  Let’s switch to the bigger issues CRA raised in its release.  SRBP noted similar issues in a post-mortem series on the media use of polls in the 2011 general election in Newfoundland and Labrador.

Performance

CRA rightly pointed to the BC and Alberta elections as examples of how problems with accuracy, consistency, and reliability at polling firms undermine public confidence in the industry as a whole.

These are not issues confined to one firm, nor to one part of the country.  By the same token, the industry won’t regain public confidence by one firm claiming it “nailed” anything, especially when it clearly didn’t perform as well as another firm.  What your humble e-scribbler dismissed as self-congratulatory twaddle isn’t an insult:  it’s just a factual description.  Don Mills doesn’t have to like it, as he clearly didn’t, but ignoring the issues won’t make them go away, either.

Nor do these issues suggest that one firm is good while another is bad, or that one is professional while the others are a bunch of fly-by-nighters.  CRA is an established, experienced firm of professionals.  They get it right.  They get it close.  Sometimes they might even get it wrong.  The companies that screwed up out west are no less competent and no more professional.

If polling firms want to restore widespread public confidence in their industry, they will have to act, individually and in some cases collectively, to identify the problems with their public polling reports, fix them, and then perform better than they have been.

Insight

The public opinion research industry is so well established now that merely spitting out horse race reports is pretty lame.  In the Nova Scotia election, Abacus Data showed exactly how much information firms can gather and use to inform the public.  CRA did fine work for the Herald, but they could have done much more without a lot more effort.

That additional information adds insight to the basic numbers of party choice or leader support.  That insight is the sort of added value firms can supply to meet the market’s insatiable need for more and more information.  And insight is what sets research firms apart from media commentators, who often don’t have a clue about politics, polling, or anything else beyond what they picked up on the Internet.  Polling firms have the ace in the hole for credibility – research – and more of them really should use it.

Transparency

One of the reasons Abacus Data stood out in the 2013 Nova Scotia general election was that they not only told us more about their research than anyone else; they also told the public more about how they conducted that research.  That set Abacus apart from many of their competitors across Canada, not just CRA.

The Marketing Research and Intelligence Association represents Canada’s public opinion research firms.  MRIA has a set of standards on what firms should tell the public about how they conducted the research they release to news media.  As SRBP noted in the 2011 post-mortem, and as the 2013 Nova Scotia election showed, the research firms themselves don’t follow the standards.

Disclosure about methods would improve public understanding of how firms conduct their research and of the differences between online polling, panel sampling, and traditional telephone polling.  Without it, the public can’t know whether one firm is using sound methods while another, newer crowd is just pulling numbers out of a magic bag.

Improved understanding would also make it harder for the problem children of the industry to spit out results that unfairly create problems for everyone.  Industry leaders themselves recognise the problem here.  Indeed, Don Mills himself complained in 2011 about some of the firms using newer methods:

“There’s a lot of people who say online research is just as good as telephone research. That has not been proven to be true and we have recent examples in Atlantic Canada where a competitor of ours has used an online methodology and have not got it within the margin of error they quoted,” he said.

Improved disclosure of research methods would go a long way toward increasing public understanding of the issues.  Sophisticated consumers will expect better, and firms will have to respond to the marketplace.  That’s the way it works.

Expect more

Public opinion polls have been a part of media coverage for more than 65 years.  They can and should be an important part of informing the public about what is happening in the world around them.

If all you want is a simple horse race poll result, the Nova Scotia election shows that, as CRA claimed, public opinion research can produce numbers pretty close to the high-level result of an actual election.

If people want more than that, if they want to understand what is going on in the public mood over time, public opinion firms can deliver that, too.  Until the industry starts to sort out its own problems, though, firms will continue to hit some and miss more.  It’s that scrappy performance that really affects public confidence in the industry.

Fixing performance, adding insight for consumers, and improving transparency, by firms individually and by the industry collectively, will do more to restore public confidence than ignoring the problems or just issuing a news release that’s full of self-praise but not much else.

-srbp-