09 November 2015

The polls must be wrong #nlpoli

Over the past couple of weeks, some people have been questioning the accuracy of public opinion polls.

People have questioned the polls in the federal election, especially after the defeat of two candidates in metro St. John’s whom a lot of people thought would win. The two polls released last week show the Liberals with such a commanding lead that some people – especially Conservative and New Democrat supporters – are doubting the accuracy of the polls.

If you are a Conservative and think the Conservatives should be doing better, then you may be disappointed by what follows.  But if you are interested in a better understanding of polls and what you are seeing in public, then read on. You should always look closely at public opinion polls to make sure you understand what you are looking at.

What are polls?

Public opinion polls have been around for almost a century. They are based on the simple idea that you can find out what a much larger group believes by asking questions of a smaller group drawn from it.

The answers you get from the smaller group will reflect the opinion of a larger population, if your group – called a sample – matches the entire population for things like age, sex, education, income, and so forth.
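To see why that works, here is a minimal sketch in Python. The population size, the 45 per cent support figure, and “Party A” are all invented for illustration; the point is only that the sample estimate lands close to the true figure, and that the familiar “margin of error” shrinks as the sample grows.

```python
import math
import random

random.seed(42)

# Hypothetical population of 400,000 voters, 45% of whom support Party A.
# Both figures are invented for illustration.
population_size = 400_000
true_support = 0.45
population = ([1] * int(population_size * true_support)
              + [0] * int(population_size * (1 - true_support)))

for n in (100, 400, 1600):
    sample = random.sample(population, n)   # simple random sample
    estimate = sum(sample) / n              # sample proportion
    # Conventional 95% margin of error for a sampled proportion.
    moe = 1.96 * math.sqrt(estimate * (1 - estimate) / n)
    print(f"n={n:5d}  estimate={estimate:.3f}  +/- {moe:.3f}")
```

Quadrupling the sample size only halves the margin of error, which is part of why commercial polls rarely go much beyond a thousand or so respondents.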

Some sources of error

To conduct a poll, you have to figure out things like the size of the sample you will use, how to pick the sample, and how to collect the information.

How you ask the questions can produce error. Everything from the number of choices you offer to the order of the questions themselves can influence the responses you get.

You can also run into problems depending on how you collect your information. If you pick your sample by using the telephone book – these days it would be the telephone company’s database – you can miss people who don’t have telephones.

Some people don’t have telephones, hard as that is to believe. These days, a larger number of people have a cell phone instead of a landline, but the basic problem is the same. If the people you can’t reach – whether cell-only households or people with no phone at all – make up an important enough part of your population, you’ll have to take account of them somehow.

All those things will affect the outcome of the poll you are going to conduct. And, not surprisingly, pollsters take measures to deal with those potential sources of error.
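One of those measures is weighting. Here is a rough sketch of the idea, with every figure invented for illustration: if cell-phone-only households make up a bigger share of the population than of your sample, you scale each group’s answers to its population share rather than its sample share.

```python
# Hypothetical post-stratification weighting. The sample under-represents
# cell-phone-only households, so each group is scaled back to its share
# of the population. All figures are invented for illustration.

population_share = {"landline": 0.60, "cell_only": 0.40}
sample_share     = {"landline": 0.80, "cell_only": 0.20}

# Share of each group saying they support Party A (invented numbers).
support = {"landline": 0.50, "cell_only": 0.30}

# Unweighted estimate reflects the lopsided sample.
unweighted = sum(sample_share[g] * support[g] for g in support)

# Weighted estimate scales each group to its true population share.
weighted = sum(population_share[g] * support[g] for g in support)

print(f"unweighted: {unweighted:.2f}")  # 0.46
print(f"weighted:   {weighted:.2f}")    # 0.42
```

A four-point gap from coverage alone is bigger than the margin of error most polls report, which is why the weighting matters.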

Some recent mistakes

Polls in British Columbia had the NDP as the sure-fire winners in the 2013 provincial election.  The Liberals actually won.

Angus Reid blamed respondents to its surveys for the error. They lied, according to a spokesperson: said one thing, did another. Young people said they would vote for the NDP by a margin of two-to-one and then didn’t show up at the polls.

Not really.

The problem in British Columbia was that pollsters drew their samples from the population as a whole, not just from likely voters. Their samples were likely very accurate when it came to the population as a whole.

The problem for pollsters in that case is that not everyone in the population is equally likely to vote. Young people might have endorsed the NDP at twice the rate they picked the Liberals, but young people are notoriously unlikely to vote.

Among those more likely to vote, that is, among older respondents, the Liberals fared better.  Had pollsters adjusted their poll reports to look at the opinions of people who were more likely to vote, they’d have likely gotten a result closer to the election result.
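To make that concrete, here is a rough sketch of that kind of turnout adjustment. The population shares, turnout rates, and support figures are all invented, chosen only so that younger respondents favour the NDP two-to-one as in the BC example; note how weighting by likelihood of voting flips the lead.

```python
# Hypothetical likely-voter adjustment, loosely modelled on the BC example.
# All shares and turnout rates below are invented for illustration.
groups = {
    #           pop_share  turnout  NDP   Liberal
    "18-34":    (0.30,     0.35,    0.60, 0.30),
    "35-plus":  (0.70,     0.70,    0.40, 0.50),
}

def topline(weight_by_turnout: bool):
    """Party shares across all groups, optionally weighted by turnout."""
    ndp = lib = total = 0.0
    for pop, turnout, ndp_share, lib_share in groups.values():
        w = pop * (turnout if weight_by_turnout else 1.0)
        ndp += w * ndp_share
        lib += w * lib_share
        total += w
    return ndp / total, lib / total

ndp, lib = topline(weight_by_turnout=False)
print(f"all adults:    NDP {ndp:.1%}, Liberal {lib:.1%}")   # NDP ahead
ndp, lib = topline(weight_by_turnout=True)
print(f"likely voters: NDP {ndp:.1%}, Liberal {lib:.1%}")   # Liberals ahead
```

The raw numbers show the NDP ahead; the turnout-weighted numbers show the Liberals ahead, even though nobody changed their mind.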

The thing to bear in mind is that polling firms have already adjusted their techniques to make sure they don’t make the same sort of mistake again. Some have stuck with older methods – like using only landlines – while others have explored selecting a sample from a representative pool of people online who have already agreed to take part in surveys.

The local polls and the media

We’ve seen a similar problem with some polls locally. In 2007, SRBP noted a huge difference between Corporate Research Associates’ polling and the actual results of that year’s provincial election.

One of the things that became quite obvious as your humble e-scribbler pulled things apart was that CRA surveys all people eligible to vote but then reports the results as if it were talking to likely voters. The discrepancy between what CRA projected and what happened was quite striking.

The one place where most pollsters have a problem in Newfoundland and Labrador is actually measuring who will vote and who won’t. In 2007, the polls were about 20 percentage points off on the number of non-voters. Even after you apply the fiddles pollsters use – like discounting “no opinion,” “will not vote,” and “undecided” responses – you can still wind up with fairly sizeable differences.
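Here is what the basic re-percentaging fiddle looks like, with invented figures. Dropping the discounted categories inflates every party’s reported share; if the “will not vote” figure is 20 points off the real abstention rate, the reported numbers inherit that error.

```python
# Hypothetical raw poll results, including the categories pollsters
# often discount before reporting. All figures are invented.
raw = {"Conservative": 0.38, "Liberal": 0.17, "NDP": 0.10,
       "undecided": 0.25, "will not vote": 0.06, "no opinion": 0.04}

# The common fiddle: drop the discounted categories and re-percentage
# the rest so that the reported party shares sum to 100.
discounted = {"undecided", "will not vote", "no opinion"}
decided = {k: v for k, v in raw.items() if k not in discounted}
total = sum(decided.values())

for party, share in decided.items():
    print(f"{party:12s} raw {share:.0%}  reported {share / total:.0%}")
```

The Conservative share jumps from 38 per cent raw to 58 per cent reported without a single extra supporter; whether that matches the election result depends entirely on who actually turns out.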

Those sorts of problems don’t just affect our view of particular election outcomes. For example, there was no doubt in either 2007 or 2011 that the Conservatives would win. Had CRA been more accurate or more detailed, the result would have been the same.

What we should wonder about is the impact public opinion polls had on opinion itself as they were released between 2003 and 2010. The Conservatives, the news media, and others placed great stock in the accuracy of CRA’s polling during the peak of Danny Williams’ personal popularity. The attitude that “he was right because he was popular and popular because he was right” undoubtedly played a role in creating a climate in which dissident views were discounted if not actively suppressed.

In 2011, local media relied heavily on public opinion polls as the source of coverage. The Telegram hired a pollster to conduct surveys and used those surveys as the major part of its election coverage. Environics and Nalcor’s pollster – MWO – also released polls during the election period.

The accuracy of polling varied widely compared to the final result, but remember that most of the pollsters weren’t really reporting on opinion among likely voters.  They were reporting on the opinion in the whole population and, for the most part, they were doing it from a very high level.

In the 2015 federal election, the polls toward the end of the campaign showed a steady upward trend in Liberal support. They may have varied from one to another in the numbers they found for any one party, but they all found the general lay of the land. What made the difference between the last poll taken and the official one on voting day was time: the trends identified in the earlier polls continued.

The handful of local polls picked up the trend away from the NDP in St. John’s South-Mount Pearl. Had anyone done a poll in St. John’s East, they’d likely have found what was happening in that riding as well. The problem for the public is that no one polled St. John’s East; people assumed the outcome based on previous polls.

We’ll talk about that on Tuesday when we look at assumptions and seat projections.

-srbp-