Hi, here is my final campaign post.
Did the polls accurately predict the election results? That depends on which party you’re talking about and in which province or region. In this post, I review the gaps between poll estimates and results for Canada as a whole, for Ontario, and for Quebec. I then look at whether there were significant differences depending on the methodology used.
Let’s start with Canada. Polls had estimated that the Liberal Party of Canada (LPC) would win a plurality of the votes with a little over 43%. That is almost exactly the result it achieved: 43.7% (updated at 4 PM today). For the Conservative Party of Canada (CPC), however, polls estimated its vote share at 39%; the party was underestimated, securing 41.3% of the vote. As for the New Democratic Party (NDP), its voting intentions were estimated at nearly 8% support, but it ended up with 6.3% of the vote. The underestimation of the Conservative vote and the overestimation of the NDP vote are fairly “classic”, but at these levels, the biases appear to have had consequences for the number of seats each party won. I will discuss the vote for the Bloc Québécois when presenting the charts for Quebec. I am also not discussing the two very small parties, the Green Party and the People’s Party of Canada, whose vote shares and polling estimates both sit at about 1%.
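To make these gaps concrete, here is a minimal sketch in Python that computes the estimation error for each party from the rounded national figures cited above (the numbers are taken from the text and should be treated as approximate).

```python
# Minimal sketch: final national poll averages vs. official results,
# using the rounded figures cited in the text (approximate values).
final_poll_average = {"LPC": 43.0, "CPC": 39.0, "NDP": 8.0}
official_result = {"LPC": 43.7, "CPC": 41.3, "NDP": 6.3}

for party, estimate in final_poll_average.items():
    error = estimate - official_result[party]
    label = "overestimated" if error > 0 else "underestimated"
    print(f"{party}: polled {estimate:.1f}%, obtained {official_result[party]:.1f}%, "
          f"{label} by {abs(error):.1f} points")
```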
The second chart shows the evolution of voting intentions in Ontario and the actual results. The vote for the Liberal Party of Canada (LPC) was slightly underestimated in Ontario: it was pegged at about 47.5%, and the party secured 48.8%. By contrast, the vote for the Conservative Party of Canada (CPC) was more sharply underestimated. It had been projected at around 40%, though it seemed to be edging upward late in the campaign; the party ultimately captured 44% of the vote. This underestimation goes hand in hand with the overestimation of the New Democratic Party (NDP), whose support was estimated at 7.5% but which garnered barely 4.9% of the vote. One can infer that the national underestimation of the Conservative vote is largely due to Ontario, since the province accounts for 38% of all voters.
Finally, the third chart shows the evolution of voting intentions in Quebec and the actual results. The Liberal vote was underestimated in Quebec: 42.5% versus a projection of about 40%. By contrast, rather than an underestimation of the Conservative Party as in Ontario, we see an underestimation of the Bloc Québécois. The Bloc secured 27.7% of the vote, whereas polls had given it barely over 25%. Nevertheless, the chart indicates that polls showed Bloc support rising at the end of the campaign, and that rise may have continued through to election day. The Bloc also seems to have benefited from the New Democratic Party’s weakness: the NDP captured 4.5% of the vote, compared with a forecast of roughly 6%.
What conclusions can we draw?
Polls appear to have offered a reasonably accurate picture of voting intentions for the Liberal Party of Canada (LPC). However, we may have relied a little too heavily on the apparent stability of those intentions. The gap between the polls and the actual CPC vote is significant and falls outside the margin of error for many pollsters, even though it is small in absolute terms. Shifts toward the Conservative Party of Canada (CPC) in Ontario and toward the Bloc Québécois in Quebec may have occurred during the final days of the campaign and could explain at least part of the gap between the polls and the vote.
What about the methods?
The next chart displays the range of poll estimates during the final 10 days of the campaign, broken down by mode of administration. First, polls conducted with IVR (automated telephone) produced a higher estimate for the Conservative Party of Canada (CPC). Second, some polls differed significantly from others using the same mode: one Mainstreet poll and one Innovative poll gave too low an estimate for the Liberal Party of Canada (LPC), and the Innovative poll also gave the CPC an estimate that was too high. Finally, an MQO poll offered too low an estimate for the CPC. The chart also shows greater variation (a wider box) in LPC support estimates from web polls and, to a lesser extent, in CPC estimates from IVR polls.
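For readers who want to reproduce this kind of by-mode comparison, here is a rough sketch; the file name and the column names (mode, lpc, cpc) are hypothetical placeholders, not the actual dataset.

```python
# Sketch of a by-mode comparison of final-stretch poll estimates.
# The CSV file and the column names ("mode", "lpc", "cpc") are assumptions.
import pandas as pd
import matplotlib.pyplot as plt

polls = pd.read_csv("final_10_days_polls.csv")  # hypothetical file

fig, axes = plt.subplots(1, 2, figsize=(10, 4), sharey=True)
polls.boxplot(column="lpc", by="mode", ax=axes[0])  # spread of LPC estimates per mode
polls.boxplot(column="cpc", by="mode", ax=axes[1])  # spread of CPC estimates per mode
axes[0].set_title("LPC estimates by mode")
axes[1].set_title("CPC estimates by mode")
fig.suptitle("")  # drop pandas' automatic "grouped by" title
plt.tight_layout()
plt.show()
```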
Because the CPC is the party whose support was most poorly estimated, I will focus on that party. First, let’s look at the chart showing the evolution of CPC voting intentions by survey administration mode. It should be emphasized that several methodological differences are intertwined with the administration modes, so we cannot attribute the discrepancies strictly to the mode itself. The chart indicates that only a few polls put Conservative support above 40%, and these polls were mainly conducted with the automated telephone (IVR) method. Moreover, the trend lines are not identical.
A scientific way to dig deeper into the data is to ask which polls produced estimates that fall outside their margin of error. I restricted the analysis to polls from the last seven days of the campaign so that each poll could be clearly identified. I present these polls by mode, without identifying the firms, because it is entirely possible for a pollster to be very rigorous yet still produce a bad estimate—something expected one time out of twenty. The blue dotted line represents the actual vote share in the election, and the vertical bars show each poll’s estimate, including its margin of error. When a vertical bar does not touch the blue dotted line, the poll lies outside its margin of error. There were 21 polls published during this period, and eight of them produced estimates outside their margin of error—an unusually high number. The chart shows that two of the nine IVR polls were outside their margin of error, compared with four of the seven web polls and two of the five mixed-mode polls. IVR polls, therefore, offered better estimates.
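As an illustration of the kind of check involved, here is a minimal sketch using the conventional 95% margin of error for a simple random sample. The formula is an assumption made for illustration only, since firms may publish design-adjusted margins and web panels are non-probability samples.

```python
import math

def within_margin_of_error(estimate_pct, actual_pct, sample_size, z=1.96):
    """Check whether the actual vote share falls inside a poll's 95% interval.

    Uses the textbook simple-random-sample formula; actual published margins
    of error may differ.
    """
    p = estimate_pct / 100.0
    moe_pct = z * math.sqrt(p * (1.0 - p) / sample_size) * 100.0
    return abs(estimate_pct - actual_pct) <= moe_pct

# Hypothetical example: a poll of 1,000 respondents putting the CPC at 39%,
# compared with the 41.3% result cited above.
print(within_margin_of_error(39.0, 41.3, 1000))
# True: a 2.3-point gap fits inside a margin of roughly 3 points
```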
In conclusion
Polls still provided voters with relatively reliable information: they predicted the Liberal Party of Canada (LPC) vote fairly accurately, foresaw that it would secure a plurality of ballots cast, and indicated it was the most likely party to form the next government. However, pollsters, the media, and aggregators (echoed by certain political parties) may have conveyed too strongly that the LPC was “far ahead” and assured of victory. This perception may have prompted some voters to cast a strategic ballot or to return to their “old loves,” notably the voting intentions they held last autumn, when support for the Conservative Party of Canada (CPC) was nearly 25 points higher than for the LPC. This is only a hypothesis, however; validating it would require further analyses and a post-election poll.
Credit to Anthony Pelletier for the data entry and chart production.
Methodology:
The methodology I use differs from that of aggregators in two respects. On the charts, each point represents a pollster’s estimate of voting intentions, plotted at the midpoint of its field period, and the lines are trends estimated using local regression. First, for tracking polls, I record each estimate only once per field period: if the field period spans three days, I enter the estimates once every three days, to avoid redundancy in the data, which would be statistically inappropriate. Aggregators tend instead to record the estimates daily but assign them a reduced weight (one-third per day, for example). Second, when drawing the trend lines of voting intentions, the local regression assigns less weight to poll estimates that lie farther from the others. This procedure smooths the trend and counterbalances polls whose estimates diverge from the rest.
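As an illustration of the smoothing step, here is a minimal sketch of a local regression drawn through poll estimates placed at their field-period midpoints. It uses statsmodels’ LOWESS, whose robustifying iterations downweight points that sit far from the local fit, as an approximation of the procedure described above; the data points themselves are hypothetical.

```python
# Sketch: a local-regression (LOWESS) trend through poll estimates, each
# plotted at the midpoint of its field period. The values are hypothetical.
import numpy as np
import matplotlib.pyplot as plt
from statsmodels.nonparametric.smoothers_lowess import lowess

days_before_election = np.array([30, 27, 24, 21, 18, 15, 12, 9, 6, 3, 1])
cpc_estimates = np.array([37.5, 38.0, 38.2, 39.1, 38.6, 39.0, 39.4, 38.8, 39.5, 40.1, 39.8])

# frac sets the span of each local fit; it=3 runs robustifying iterations
# that downweight estimates lying far from the local trend (outlier polls).
trend = lowess(cpc_estimates, days_before_election, frac=0.6, it=3)

plt.scatter(days_before_election, cpc_estimates, label="poll estimates (field-period midpoints)")
plt.plot(trend[:, 0], trend[:, 1], label="local-regression trend")
plt.gca().invert_xaxis()  # time runs toward election day
plt.xlabel("Days before the election")
plt.ylabel("CPC voting intentions (%)")
plt.legend()
plt.show()
```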