
Monday, November 11, 2024

So, did the polls finally win?

 Hi,

I think it is important to look at the polls not only when they are wrong but also when they are right. It is even more important in the U.S. since the polls were considered wrong in the last two presidential elections. However, in 2016, their prediction of the vote for the two main candidates was very good. The problem that people see is that they did not predict the winner or, at least, the aggregators did not predict the winner. In 2020, although the winner was well predicted, the polls, particularly the web opt-in polls, seriously overestimated the support for Joe Biden.

What is the situation in 2024? In order to assess poll performance accurately, we need to know the final results, and these will not be known for some time. The results of the vote in California, for example, may take up to three months. Therefore, we need to estimate the final results. Two estimates were made public, one by Michael Elliott (50.1% for Trump and 48.3% for Harris) and one by Nate Silver (49.9% for Trump and 48.4% for Harris). I took the average of these estimates and computed each candidate's share of the two-party vote in order to compare like with like. Therefore, I compare the two-party share in the polls to 50.8% for Trump and 49.2% for Harris. The following graph shows the estimates from all the polls that were conducted starting 10 days before the election, on October 25. The polls appear in the graph in the order in which their fieldwork started, from the first polls on the left to the last ones on the right.
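For readers who want to verify the arithmetic, here is a minimal sketch of that computation (the figures are the Elliott and Silver estimates quoted above; everything else is only illustration):

```python
# Average the two published estimates of the final vote, then compute each
# candidate's share of the two-party vote (third-party votes are ignored).
elliott = {"Trump": 50.1, "Harris": 48.3}
silver = {"Trump": 49.9, "Harris": 48.4}

average = {c: (elliott[c] + silver[c]) / 2 for c in elliott}
two_party_total = average["Trump"] + average["Harris"]
two_party_share = {c: round(100 * v / two_party_total, 1) for c, v in average.items()}

print(two_party_share)  # {'Trump': 50.8, 'Harris': 49.2}
```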

The vertical bars show the confidence interval of each poll's estimate. There are 23 polls conducted from October 25 to election day. For tracking polls, we took the estimates only once per tracking period, which means that for a 3-day tracking poll, we would take the estimates on day 4 (covering the first three days), then on days 7, 10, 13, and so on. When pollsters asked two questions, one with all the candidates and one with only the two main candidates, we kept only the one with all the candidates.

The graph shows that, out of 25 polls, only four had estimates outside the margin of error: the two Morning Consult polls, the ABC/Ipsos poll, and the Marist poll, the last one being very close however. The margin of error of the Morning Consult polls is particularly small because these polls have more than 8,000 respondents.
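To see why sample size matters so much here, a quick sketch using the usual margin of error formula for a proportion, assuming simple random sampling and 95% confidence (the 1,000-respondent figure stands for a typical poll, not a specific one):

```python
from math import sqrt

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion, in percentage points."""
    return 100 * z * sqrt(p * (1 - p) / n)

print(round(margin_of_error(1000), 1))  # ~3.1 points for a typical poll
print(round(margin_of_error(8000), 1))  # ~1.1 points for an 8,000-respondent poll
```

With an interval that narrow, even a modest deviation from the final result is enough to put a very large poll outside its margin of error.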


Another important point is that not all the estimates are lower than the Trump vote. In theory, the polls' estimates should spread on both sides of the population parameter -- the vote. Three of them are higher than the Trump vote, all conducted by the same pollster, Atlas Intel. Some would say that there is not much variation, or at least less than we might expect, but this lower variation can be explained by the weighting used, particularly weighting by vote recall or political identification, a practice that is more common in this election than in previous ones. Note that weighting by vote recall is common practice in Europe.
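As a rough illustration of how weighting by vote recall works, here is a minimal one-variable sketch; the sample split is invented, and the target is the approximate 2020 two-party result (about 52.3% Biden, 47.7% Trump). Real pollsters rake on many variables at once; this only shows the mechanism.

```python
# Hypothetical sample counts of recalled 2020 vote (invented numbers in which
# Biden voters are over-represented in the raw sample).
sample_counts = {"Biden": 550, "Trump": 450}
n = sum(sample_counts.values())

# Target: the actual 2020 two-party result.
target = {"Biden": 0.523, "Trump": 0.477}

# Each respondent's weight is the target share divided by the observed share,
# so the weighted sample reproduces the 2020 vote. Current voting intentions
# computed with these weights are pulled toward the past vote, which reduces
# variation between polls that weight this way.
weights = {group: target[group] / (count / n) for group, count in sample_counts.items()}

print({g: round(w, 3) for g, w in weights.items()})
# {'Biden': 0.951, 'Trump': 1.06}
```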

Finally, there is no justification for examining differences by mode since most of the polls are within their margin of error. In addition, my last post before the election showed that there was no difference in estimates between mode combinations. This is important since there was a difference between modes in the last two US presidential elections, a particularly substantial one in 2020 (Durand, 2023; Durand & Johnson, 2021).

Does this mean that the web pollsters -- and the others too -- improved their methodology? We certainly hope so, and we are going to look into it as much as possible in the coming months, provided that pollsters are transparent about their methods.

Best,

Claire Durand,


Note: I want to thank Shawn Leroux and Luis Pena Ibarra for their help in collecting the data for this research.



Monday, November 4, 2024

I think the pollsters will win this election

Hi,


With all the polls in, like everybody else, I come to the conclusion that the polls cannot predict a winner in this election. This means that the only way for the polls to be wrong would be for the difference between Kamala Harris and Donald Trump to be large enough that many polls would fall outside their margin of error. This is unlikely because there is not much difference between pollsters.
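A back-of-the-envelope way to see this, assuming simple random sampling and a typical national sample size (both assumptions are mine, not those of any specific poll):

```python
from math import sqrt

n = 1000   # typical national sample size
p = 0.50   # a poll showing a tied two-party race

# 95% margin of error, in percentage points: about 3.1.
moe = 100 * 1.96 * sqrt(p * (1 - p) / n)

# A 50% estimate falls outside its margin of error only if the candidate's
# final two-party share ends up outside roughly 46.9% to 53.1%.
print(round(50 - moe, 1), round(50 + moe, 1))
```

In other words, with the polls clustered around 50-50, one candidate would need to win the two-party vote by more than about six points for many polls to fall outside their margin of error.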

And even then, such a difference could be due to people changing their minds -- changing preferences, deciding to go vote although they had not planned to, etc. In order to conclude that this happened, it would be necessary to examine whether last-minute shifts explain the discrepancy between the polls and the vote. This means pollsters would have to recontact the respondents of their last poll to ask them whether they voted, for whom, and when they made their decision.

The following graph shows the forecast of the vote. It shows a perfect tie. Although I have seen such a tie in other elections in Canada, I checked previous US presidential elections and found nothing like it. All the polls of the last week vary between 48% and 52% for one candidate or the other. However, voting intentions for Harris are often higher than for Trump.

In addition, there is still a tendency for web opt-in polls to estimate support for Harris higher than the other polls do, but the difference is quite small -- about half a percentage point -- as we can see in the following graph. So there does not seem to be much difference left between pollsters that could be explained by modes of administration and sampling sources. It may be -- and it is to be expected -- that pollsters have improved their methods since 2020 and 2022.



However, it is interesting to note that, in the last week, the proportion of polls conducted using web opt-in is lower than in previous weeks. There are eight web opt-in polls during the last week compared to nine mixed-mode and quasi-random polls, such as live-interviewer polls and probabilistic web polls. Eleven of these polls put Trump ahead of Harris or on par with her. Five of them were conducted using web opt-in and five using quasi-random single-mode polls.


In conclusion

Right now, there is no reason to believe that the polls, or some of the polls, will be wrong, because they all say about the same thing. Some attribute this situation to herding (pollsters aligning their estimates with other pollsters') but it is far more likely that it is explained by the weighting procedures used, particularly those that use vote recall or party identification.

Personally, I am quite surprised that, in such a situation, the sample sizes remained rather small. I guess the polls conducted in the swing states are more important for the pollsters and for the media if they want to predict the final winner.