Hello,
This should be my last message of the campaign. A number of commentators, analysts, and aggregators have discussed shifts in voting intentions over the past few days. In this message, after presenting a first series of charts showing voting intentions throughout the campaign for Canada as a whole, Ontario, and Quebec, I will analyze the differences that emerge depending on the polling methods used by survey firms. These methods do not paint the same picture of either the current voting intentions or their evolution over time.
First, the picture of voting intentions
(Methodology presented at the end of the message.)
Canada
The first chart shows voting intentions for the various parties across Canada. It confirms that support for the Liberal Party of Canada (LPC) and the Conservative Party of Canada (CPC) has narrowed somewhat. We would now be at 43% for the LPC and 39% for the CPC. Support for the New Democratic Party (NDP), the Bloc Québécois, and the Green Party does not seem to have shifted much nationwide.
Ontario
The next chart illustrates the evolution of voting intentions in Ontario. Here too, we see a narrowing between the LPC and the CPC. The LPC would now be at 48%, its level at the beginning of the campaign, while the CPC would be at 40%, one point higher than at the start of the campaign.
Quebec
Voting intentions appear to have shifted a bit more in Quebec than in Ontario. The LPC would have lost about 3 points (from 42% to 39%), to the benefit of both the Bloc Québécois and the CPC, which are now tied at 25%.
But does the methodology used have an impact?
I first conducted a statistical analysis (regression) showing that, all else being equal, polls using the automated telephone method (IVR; n=24, 31% of polls) give, on average over the whole campaign, 2.7 points more to the LPC than web-based polls (n=33, 42% of polls). Similarly, polls that combine multiple modes (telephone + web, SMS + web, etc.; n=21, 27% of polls) give 1.6 points more to the LPC than web polls.
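As a rough sketch of this kind of analysis (using synthetic data, not the actual poll file), the mode effects can be estimated with OLS and dummy variables, with web polls as the baseline category. The effect sizes and sample counts below are taken from the figures above; the noise level is an assumption for illustration only.

```python
import numpy as np

# Synthetic poll-level data: one row per poll (LPC estimate + survey mode).
# Counts match the text: 33 web, 24 IVR, 21 mixed-mode polls.
rng = np.random.default_rng(0)
modes = np.array(["web"] * 33 + ["ivr"] * 24 + ["mixed"] * 21)

# Simulate LPC estimates around mode-specific means (web baseline ~40.5,
# IVR +2.7, mixed +1.6, as in the regression described above).
true_effect = {"web": 0.0, "ivr": 2.7, "mixed": 1.6}
lpc = 40.5 + np.array([true_effect[m] for m in modes]) \
    + rng.normal(0.0, 1.5, modes.size)

# Design matrix: intercept (web baseline) + dummies for IVR and mixed modes.
X = np.column_stack([
    np.ones(modes.size),
    (modes == "ivr").astype(float),
    (modes == "mixed").astype(float),
])
beta, *_ = np.linalg.lstsq(X, lpc, rcond=None)
print(f"web baseline: {beta[0]:.1f}, "
      f"IVR effect: {beta[1]:+.1f}, mixed effect: {beta[2]:+.1f}")
```

In a real analysis one would also control for field dates and firm effects ("all else being equal"); the dummy-variable structure stays the same.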
The following chart shows the evolution of LPC support according to the polling method used. It highlights not only differences in the level of support but also in the trend over time:
- As mentioned, web polls show lower support for the LPC. However, it is also important to observe the evolution: web polls (in burgundy) show that LPC support rose until mid-campaign and then declined, ending just slightly above 40%.
- In contrast, IVR polls (in red) show complete stability at the beginning of the campaign, followed by a decline. According to these polls, LPC support would be slightly above 42%.
- Finally, polls using mixed modes (in purple) show no change in LPC support since the start of the campaign, with support just under 44%.
Of course, there are fewer polls for each mode than for the full set, but the trends seem fairly clear, despite considerable variability in the estimates from various firms using web and IVR methods. The next three charts illustrate these differences, showing the evolution of voting intentions according to each mode.
Voting intentions according to web polls
According to web-based polls, support for the two main parties is almost tied. That said, the two parties were already very close at the beginning of the campaign and have moved only about one point closer since.
Voting intentions according to IVR polls
Looking at automated telephone (IVR) polls, support for the two main parties has moved significantly closer since the start of the campaign. The gap between them would have shrunk from 10 points to less than two points, due to a four-point gain for the CPC, a one-point gain for the NDP, and a three-point loss for the LPC.
Voting intentions according to mixed-mode polls
According to polls using mixed modes, there has been no change in voting intentions across Canada during the campaign.
In summary, depending on the mode, the Liberal Party (LPC) stands at 40.5% (web), 43% (IVR), or 44% (mixed), while the Conservative Party (CPC) stands at 38.5% (web), 40% (IVR), or 37.5% (mixed).
Conclusion
We are fortunate to have several pollsters using different methods during electoral campaigns, which allows us to obtain sometimes slightly different pictures of voting intentions — and which may ultimately help pollsters improve their methods based on the election results. That being said, while the differences are statistically significant, they are not huge. Moreover, they normally depend on factors that we may eventually be able to control better — such as the fact that some types of people prefer certain modes over others and may also have different socio-political profiles.
The differences between modes highlight the importance of not relying on a single poll to understand voting intentions and always remembering that “one poll does not make a summer”.
Credit to Anthony Pelletier for data entry and chart production.
Methodology:
The methodology I use differs from that of aggregators in two ways. On the charts, each point represents an estimate of voting intention produced by a pollster, placed at the date corresponding to the middle of the data collection period. The lines are estimates of voting intentions using local regression. However, for tracking polls, I only record an estimate once during the collection period. Essentially, if the period covers three days, I record the estimate once every three days to avoid data redundancy, which would be statistically inappropriate. Aggregators, on the other hand, tend to record estimates every day but assign them reduced weight (for example, one-third per day).
Additionally, to trace the evolution curves of voting intentions, local regression gives less weight to poll estimates that are farther from the others. This procedure smooths the trend and helps counterbalance polls whose estimates deviate significantly from the rest.
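As an illustration of this smoothing idea, here is a minimal numpy-only sketch of robust local regression (LOESS-style) on synthetic poll estimates. It is not the author's actual software; the bandwidth, kernel, and robustness scheme are standard textbook choices, and the data are simulated. The robustness iterations downweight estimates that deviate strongly from the local trend, as described above.

```python
import numpy as np

# Synthetic poll series: field date (day of campaign) and support estimates
# around a gently rising trend, with one deliberately outlying poll.
rng = np.random.default_rng(1)
days = np.sort(rng.uniform(0, 36, 100))
support = 42.0 + 0.05 * days + rng.normal(0.0, 1.0, days.size)
support[50] += 8.0  # one poll far from the others

def loess_fit(x, y, x0, bandwidth=10.0, robust_iters=2):
    """Local linear fit at x0: tricube distance weights, bisquare
    robustness weights that shrink the influence of outlying estimates."""
    robust_w = np.ones_like(y)
    for _ in range(robust_iters + 1):
        d = np.abs(x - x0) / bandwidth
        w = np.where(d < 1, (1 - d**3) ** 3, 0.0) * robust_w
        X = np.column_stack([np.ones_like(x), x - x0])
        beta = np.linalg.solve((X.T * w) @ X, (X.T * w) @ y)
        resid = y - X @ beta
        s = np.median(np.abs(resid)) + 1e-12      # robust residual scale
        u = resid / (6.0 * s)
        robust_w = np.where(np.abs(u) < 1, (1 - u**2) ** 2, 0.0)
    return beta[0]  # smoothed support at x0

trend = np.array([loess_fit(days, support, d) for d in [0, 9, 18, 27, 36]])
print(np.round(trend, 1))
```

Evaluating the local fit on a grid of dates traces a smooth curve through the cloud of poll points, largely ignoring the outlier.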