Election 2020: How much can you trust the polls?

8:32 pm on 11 September 2020

By Josh Van Veen*

Comment - Is the 2020 election result really the foregone conclusion that the polls and commentators are suggesting?

Josh Van Veen suggests otherwise, pointing to some of the shortcomings of opinion polling, which could have some politicians saying "bugger the pollsters" on election night.

Labour leader Jacinda Ardern, National leader Judith Collins, New Zealand First leader Winston Peters, and Green co-leaders James Shaw and Marama Davidson. The complexities of human psychology mean political parties can take nothing for granted, Josh Van Veen writes. Photo: RNZ

In November 1993, opinion polls foretold a comfortable victory for the incumbent National Party. But there was no clear outcome on election night. For a brief moment, it appeared that the Labour Party of Mike Moore could reclaim power with support from the new left-wing Alliance. The upset led then-prime minister Jim Bolger to exclaim, "Bugger the pollsters!" To his relief, the final count gave National a one-seat majority.

Jim Bolger. Photo: RNZ/Rebekah Parsons-King

Twenty-seven years later, polling suggests that Jacinda Ardern is on the cusp of forming her own single-party majority government. Bolger was the last prime minister to enjoy such a mandate. The 1993 general election ushered in a new era of multiparty politics. A succession of coalition and minority governments would follow - right up to the present. But this era could soon be over.

At the time of writing, Labour is projected to win more than the 61 seats needed to govern alone. Statistician Peter Ellis calculates a 0.1 percent chance that National can form the next government. These numbers may sound fanciful, whatever your politics, but they are based on highly credible data from the country's two most successful polling companies.

In the past nine months, 1News/Colmar Brunton and Newshub/Reid Research have released a total of seven polls between them. They have told more or less the same story. In the aftermath of the first lockdown, support for Labour reached historic levels, while National collapsed to under 30 percent. Act has surged, the Greens are perilously close to the 5 percent threshold, and NZ First languishes around 3 percent.

With Labour ahead by such a wide margin, the election appears to be more or less a foregone conclusion. But is it really? In 2017, the final Reid Research poll differed from the final result by an average of just 0.7 percentage points in its estimates of support for the main parties. Colmar Brunton and Roy Morgan were out by an average of 1.4 and 2.7 points respectively.

While these differences are usually within the reported margins of sampling error, a percentage point or two can be crucial. If, for example, National had maintained its election-night support of 46 percent in the final count, it is quite possible that Bill English would still be prime minister. That is why polls are more useful for reading trends than making predictions.
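
To make the "average discrepancy" figure concrete, here is a minimal sketch of the calculation. The final 2017 results are real (National 44.4, Labour 36.9, NZ First 7.2, Greens 6.3 percent); the poll figures are invented for illustration, not the actual Reid Research numbers:

```python
# Average absolute discrepancy between a poll and the final result,
# in percentage points, across the main parties.
poll = {"National": 45.0, "Labour": 37.5, "NZ First": 5.5, "Greens": 7.0}    # invented figures
result = {"National": 44.4, "Labour": 36.9, "NZ First": 7.2, "Greens": 6.3}  # 2017 final result

avg_discrepancy = sum(abs(poll[p] - result[p]) for p in poll) / len(poll)
print(f"Average discrepancy: {avg_discrepancy:.1f} points")  # 0.9 with these figures
```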

Late deciders overlooked

In 2020, commentators and journalists have dismissed the possibility of a National victory. The received wisdom is that most voters have now made up their minds and the next month is unlikely to see much change in public opinion. But this overlooks the number of undecided and wavering voters. In the 2017 NZ Election Study, for example, around 20 percent reported making up their minds during the final week (including election day itself).

In the last Colmar Brunton poll, 10 percent of respondents said they were undecided and 4 percent refused to answer. The headline results (e.g. Labour 53 percent) are calculated by excluding those respondents who either "don't know" or refuse to say. If we included the undecideds in the base of the calculation for party support, Labour would be on around 47 percent. Those undecided voters could at least determine whether or not Labour governs alone.
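
The arithmetic behind that adjustment is a simple rescaling. A minimal sketch, assuming the headline figures above (Labour on 53 percent of decided respondents, 10 percent undecided); the exact published figure would depend on unrounded poll numbers:

```python
# Headline results exclude undecided respondents. To fold them back in,
# scale each party's share by the decided proportion of the sample.
headline_labour = 53.0  # percent of decided respondents
undecided = 10.0        # percent of all respondents

labour_of_all = headline_labour * (100.0 - undecided) / 100.0
print(f"Labour among all respondents: {labour_of_all:.1f}%")  # ~47.7%
```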

Furthermore, it is impossible to know how committed individual respondents are to voting a particular way - or even voting at all.

Although respondents are asked "how likely" they are to vote, neither Colmar Brunton nor Reid Research takes into account the effect of non-voting. In other words, no assumption is made about the probability that someone will vote based on their demographic profile. This means that while their samples are representative of the general population, it is difficult to know how representative they are of the voting public.

Turnout rate can affect result

Some people are a lot more likely to vote than others. For example, over-70s had a turnout rate of 86 percent at the last election, compared with only 69 percent for 18-24-year-olds. It is possible that unrepresentative sampling of certain age groups explains historic discrepancies between polling and real support for NZ First and the Greens. Last time, Colmar Brunton underestimated support for NZ First by a significant 2.3 points, while Roy Morgan overestimated Green support by 2.7 points.
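
Here is a hedged sketch of how differential turnout can shift the headline numbers. The turnout rates for over-70s and 18-24-year-olds are those quoted above; the group shares, the third group's turnout, and all party preferences are invented for illustration:

```python
# Each tuple: (share of sample, turnout rate, support for a hypothetical Party A).
groups = [
    (0.15, 0.86, 0.30),  # over-70s: turnout from 2017, support invented
    (0.12, 0.69, 0.60),  # 18-24-year-olds: turnout from 2017, support invented
    (0.73, 0.80, 0.45),  # everyone else: both figures invented
]

# Unweighted estimate: every respondent counted as an equally likely voter.
unweighted = sum(share * support for share, _, support in groups)

# Turnout-weighted estimate: each group scaled by its probability of voting.
voters = sum(share * turnout for share, turnout, _ in groups)
weighted = sum(share * turnout * support for share, turnout, support in groups) / voters

print(f"Party A unweighted: {unweighted:.1%}")      # roughly 44.5%
print(f"Party A turnout-weighted: {weighted:.1%}")  # roughly 44.1%
```

Because the (invented) younger group favours Party A but turns out less, weighting by turnout nudges the party's estimate down; with a bigger turnout gap or sharper age differences in support, the shift can easily exceed a poll's margin of error.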

The reported margin of sampling error typically means that we can be 95 percent confident a poll is no more than "plus or minus" a few percentage points from true public opinion. However, that figure applies to a party polling at 50 percent; for smaller parties, the margin is narrower. In the Colmar Brunton example above, the margin of error for NZ First was approximately 1.4 percentage points. In other words, the poll's 2.3-point miss fell outside the 95 percent confidence interval, something expected to happen only five times out of 100.
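
For reference, those figures follow from the standard formula for the 95 percent margin of error on a sample proportion, MoE = 1.96 x sqrt(p(1-p)/n). A minimal sketch, assuming a sample size of around 1,000, which is typical for these polls:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95 percent margin of error for a sample proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000  # assumed sample size
print(f"Party at 50%: +/- {margin_of_error(0.50, n):.1%}")  # ~3.1 points, the headline margin
print(f"Party at 5%:  +/- {margin_of_error(0.05, n):.1%}")  # ~1.4 points, as for NZ First
```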

But the margin of sampling error does not capture other possible sources of error, such as interviewer effects and question wording. There is also the problem of how candid respondents are. In 1992, after polls failed to predict a Conservative victory in Britain, an inquiry found that some respondents had probably lied about their voting intention (the "shy Tory" factor). Such effects are impossible to quantify.

However, more recent experience from Britain (2015) and the United States (2016) suggests that systematic polling error is most likely to result from assumptions about turnout. To a large extent, polling for the 2016 US presidential election failed to register Trump's support in the so-called "Rust Belt" states because samples under-represented white voters without a college degree.

After the 2015 British general election, an independent review determined that pollsters had significantly undersampled over-70s. This was at least partly down to the use of online panels, similar to the one Reid Research employs to supplement its telephone sample. Interestingly, there was also some evidence that the people most likely to answer the phone were much less inclined to vote Conservative.

Pollsters in 2016 failed to pick up Donald Trump's support in the so-called "Rust Belt" states - a factor in his surprise victory. Photo: AFP

The fact that Colmar Brunton and Reid Research make no assumptions about turnout could be a strength. But in the end, polling is not an exact science. No survey design can fully capture all the complexities of human psychology and voting behaviour. There will always be a degree of uncertainty. The extent to which any given poll is right or wrong may in fact come down to how it is reported and framed by the media.

To better inform the public, TVNZ and Newshub should report the estimated range of party support rather than a single figure. They could also disclose the response rate (likely to be under 30 percent) and provide a full disclaimer about the limitations of polling. But that would mean less sensationalism.

So, can we trust the polls? The answer will just have to wait until election night.

This column was written for the Democracy Project which promotes critical thinking, debate and engagement in politics.

*Josh Van Veen is a former member of NZ First and worked as a parliamentary researcher to Winston Peters from 2011 to 2013. He has a Master's in Politics from the University of Auckland. His thesis examined class voting in Britain and New Zealand.
