2020 Election: Josh Van Veen – Can We Trust The Polls?




COMMENTARY:

Is the outcome of the 2020 election really the foregone conclusion that polls and commentators suggest? Josh Van Veen suggests otherwise, pointing to shortcomings of opinion polling that could yet have a politician or two exclaiming "bugger the pollsters" on election night.

In November 1993, opinion polls heralded a comfortable victory for the incumbent National Party. But there was no clear result on election night. For a brief moment, it seemed that Mike Moore's Labour Party could regain power with the support of the newly formed Alliance. The surprise led then-Prime Minister Jim Bolger to exclaim: "Bugger the pollsters!" To his relief, the final count gave National a one-seat majority.

Twenty-seven years later, polls suggest that Jacinda Ardern is on the cusp of forming a single-party majority government of her own. Bolger was the last prime minister to enjoy such a mandate. The 1993 general election ushered in a new era of multi-party politics, followed by a succession of coalition and minority governments that continues to the present. But this era could soon end.

At the time of writing, Labour is projected to win more than the 61 seats needed to govern alone. Statistician Peter Ellis estimates a 0.1 percent probability that National will form the next government. These numbers may sound outlandish, whatever your politics, but they are based on credible data from the country's two most established polling companies.

Over the last nine months, 1News/Colmar Brunton and Newshub/Reid Research have published seven polls between them, and they have told more or less the same story. In the wake of the first lockdown, support for Labour reached record highs, while National collapsed to less than 30 percent. ACT has risen, the Greens are dangerously close to the five percent threshold, and NZ First languishes at around 3 percent.

With Labour leading by such a wide margin, the election looks like a foregone conclusion. But is it really? In 2017, the final Reid Research poll was out by an average of just 0.7 percentage points in its estimates of support for the major parties, compared with the final result. Colmar Brunton and Roy Morgan were out by an average of 1.4 and 2.7 points respectively.

While these differences are generally within the reported margins of sampling error, one or two percentage points can be crucial. If, for example, National had held its election-night support of 46 percent in the final count, it is quite possible that Bill English would still be Prime Minister. That is why polls are more useful for reading trends than for making predictions.

In 2020, commentators and journalists have all but ruled out the possibility of a National victory. The received wisdom is that the majority of voters have already made up their minds and that public opinion is unlikely to shift much in the next month. But this overlooks the number of undecided and wavering voters. In the 2017 New Zealand Election Study, for example, around 20 percent of respondents reported making their decision in the final week of the campaign (including on election day itself).

Despite Labour soaring to 60 percent in the polls, Prime Minister Jacinda Ardern told Mike Hosking of Newstalk ZB that she has always had a "healthy skepticism" about them.

In Colmar Brunton's latest poll, 10 percent of respondents said they were undecided and 4 percent refused to answer. Headline results (e.g. Labour on 53 percent) are calculated excluding respondents who "don't know" or refuse to say. If the undecided were included in the calculation of party support, Labour would be at 47 percent. Those undecided voters could well determine whether or not Labour governs alone. Furthermore, it is impossible to know how committed individual respondents are to voting a particular way, or to voting at all.
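As a rough illustration, the headline figure can be rescaled so that the undecided are counted in the base. This is a minimal sketch only: whether refusals should also be folded back in is a judgment call, and they are left out here to roughly match the article's arithmetic.

```python
# Rescale a headline poll figure so undecided respondents are in the base.
# Figures are from the Colmar Brunton poll cited above; treating only the
# undecided (not refusals) as part of the base is an assumption.
headline_labour = 53.0   # percent, among respondents who named a party
undecided = 10.0         # percent of all respondents

decided_share = 100.0 - undecided              # 90 percent named a party
labour_overall = headline_labour * decided_share / 100.0
print(round(labour_overall, 1))  # ≈ 47.7, close to the 47 percent above
```

The small gap between 47.7 and the article's 47 likely comes from rounding in the published headline figure.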

Although respondents are asked "how likely" they are to vote, neither Colmar Brunton nor Reid Research adjusts for turnout. In other words, no assumption is made about the probability that a given respondent will actually vote, based on their demographic profile. This means that while their samples are representative of the general population, it is difficult to know how representative they are of the voting public.

Some people are more likely to vote than others. For example, those over 70 had a turnout rate of 86 percent at the last election, compared with just 69 percent for those aged 18 to 24. It is possible that unrepresentative sampling of certain age groups explains the historical discrepancies between the polls and actual support for NZ First and the Greens. Last time, Colmar Brunton underestimated support for NZ First by 2.3 points, while Roy Morgan overestimated support for the Greens by 2.7 points.
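One common remedy, which neither company uses here, is to weight each respondent by an estimated probability of voting. A minimal sketch, using the 2017 turnout rates quoted above; the respondent counts and the share backing a hypothetical "party X" are invented purely for illustration:

```python
# Weight poll responses by assumed turnout probabilities per age group.
# Turnout rates are the 2017 figures quoted above; respondent counts and
# support for "party_x" are hypothetical numbers for illustration only.
turnout = {"18-24": 0.69, "70+": 0.86}
respondents = {
    "18-24": {"party_x": 40, "total": 100},
    "70+":   {"party_x": 10, "total": 100},
}

# Raw share: every respondent counts equally.
unweighted = sum(r["party_x"] for r in respondents.values()) / sum(
    r["total"] for r in respondents.values())

# Likely-voter share: each respondent counts in proportion to their
# group's probability of actually turning out.
weighted = sum(turnout[g] * r["party_x"] for g, r in respondents.items()) / sum(
    turnout[g] * r["total"] for g, r in respondents.items())

print(round(unweighted * 100, 1))  # 25.0 percent of the raw sample
print(round(weighted * 100, 1))    # 23.4 percent of likely voters
```

Because the hypothetical party draws more support from the low-turnout group, its likely-voter share comes out lower than its raw share, which is exactly the kind of gap an unadjusted poll can miss.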

The reported margin of sampling error generally means we can be 95 percent confident that a poll is no more than "plus or minus" a few percentage points from true public opinion. However, that figure applies to a result of 50 percent; the margin is smaller for parties polling lower. In the Colmar Brunton example above, the margin of error for NZ First's result was approximately 1.4 percentage points, yet the poll was out by 2.3 points. In other words, the poll fell outside its margin of error, something that is supposed to happen only five times in a hundred.
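The standard formula behind these figures is z·√(p(1−p)/n). A quick sketch, assuming a simple random sample of around 1,000 (the actual sample sizes and design effects are not given here, so the 1.4-point figure above is not reproduced exactly):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% sampling margin of error, in percentage points, for a share p
    in a simple random sample of size n."""
    return z * math.sqrt(p * (1.0 - p) / n) * 100.0

n = 1000  # assumed sample size, typical for NZ political polls
print(round(margin_of_error(0.50, n), 1))  # widest case: ~3.1 points
print(round(margin_of_error(0.03, n), 1))  # a party near 3 percent: ~1.1 points
```

This is why the single "plus or minus" figure quoted with a poll overstates the uncertainty for small parties and is only exact for a 50 percent result.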

But the sampling margin of error does not capture other possible sources of error, such as interviewer effects and question wording. There is also the problem of how truthful respondents are. In 1992, after the polls failed to predict a Conservative victory in Britain, an inquiry found that some respondents had likely lied about their voting intention (the "shy Tory" factor). Such effects are impossible to quantify.

However, the more recent experience of Britain (2015) and the United States (2016) suggests that systematic polling error is more likely to result from assumptions about turnout. To a large extent, polls for the 2016 presidential election missed Trump's support in the so-called "Rust Belt" states because pollsters did not sample enough white voters without a college education.

After the 2015 British general election, an independent review determined that pollsters had significantly under-represented those over 70. This was due, at least in part, to the use of online panels, such as the one Reid Research employs to supplement its telephone sample. Interestingly, there was also some evidence that the people most likely to answer the phone were much less inclined to vote Conservative.

The fact that Colmar Brunton and Reid Research make no assumptions about turnout could be a strength. But in the end, polling is not an exact science. No survey design can fully capture the complexities of human psychology and voting behavior. There will always be a degree of uncertainty. The extent to which a given poll turns out to be right or wrong may, in fact, depend on how it is reported and framed by the media.

To better inform the public, TVNZ and Newshub should report the estimated range of support for each party rather than a single figure. They could also disclose the response rate (probably under 30 percent) and give a full account of the polls' limitations. But that would mean less sensationalism.

So can we trust the polls? The answer will have to wait until election night.

Josh Van Veen is a former member of NZ First and served as a parliamentary researcher to Winston Peters from 2011 to 2013. He holds a Master's in politics from the University of Auckland. His thesis examined class voting in Britain and New Zealand.
This column was originally published by the Democracy Project


