Hi,
I think it is important to look at the polls not only when they are wrong but also when they are right. This is even more important in the U.S., since the polls were considered wrong in the last two presidential elections. In 2016, however, their prediction of the vote for the two main candidates was quite good. The problem people saw is that they did not predict the winner or, at least, that the aggregators did not predict the winner. In 2020, although the winner was correctly predicted, the polls, particularly the web opt-in polls, seriously overestimated support for Joe Biden.
What is the situation in 2024? To assess poll performance accurately, we need to know the final results, and these will not be known for some time. The results of the vote in California, for example, may take up to three months. Therefore, we need to estimate these results. Two estimates were made public, one by Michael Elliott (50.1% for Trump and 48.3% for Harris) and one by Nate Silver (49.9% for Trump and 48.4% for Harris). I took the average of these two estimates and computed each candidate's share of the two-party vote in order to compare like with like. I therefore compare the two-party share in the polls to 50.8% for Trump and 49.2% for Harris. The following graph shows the estimates from all the polls conducted starting 10 days before the election, on October 25. The polls are ordered by the start of their fieldwork, from the first polls on the left to the last ones on the right.
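The averaging and two-party conversion described above can be sketched as follows (a minimal illustration; the input figures are the Elliott and Silver estimates quoted in the text):

```python
# Two published estimates of the final vote, as quoted above (Elliott, Silver).
trump_estimates = [50.1, 49.9]
harris_estimates = [48.3, 48.4]

# Average the two estimates for each candidate.
avg_trump = sum(trump_estimates) / len(trump_estimates)    # 50.0
avg_harris = sum(harris_estimates) / len(harris_estimates) # 48.35

# Convert to two-party shares so poll estimates are compared like with like.
two_party_total = avg_trump + avg_harris
share_trump = 100 * avg_trump / two_party_total
share_harris = 100 * avg_harris / two_party_total

print(round(share_trump, 1), round(share_harris, 1))  # 50.8 49.2
```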
The vertical bars show the confidence interval of each poll's estimate. There are 23 polls conducted from October 25 to election day. For tracking polls, we took the estimates only once per tracking period: for a 3-day tracking poll, we took the estimate published on day 4 (covering the first three days), then on days 7, 10, 13, and so on. When pollsters asked two questions, one with all the candidates and one with only the two main candidates, we kept only the one with all the candidates.
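The selection rule for tracking polls can be sketched as a small helper (the function name and the 14-day horizon are illustrative, not from the original):

```python
def independent_days(n_days, window):
    """Days on which a rolling-window tracking estimate uses only fresh data.

    For a `window`-day tracking poll, the estimate released on day
    window + 1 covers the first `window` days; subsequent non-overlapping
    estimates come every `window` days after that.
    """
    return list(range(window + 1, n_days + 1, window))

# A 3-day tracking poll over a 14-day period: keep days 4, 7, 10, 13.
print(independent_days(14, 3))  # [4, 7, 10, 13]
```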
The graph shows that only four polls had estimates outside the margin of error: the two Morning Consult polls, the ABC/Ipsos poll and the Marist poll, though the latter is very close to the limit. The margin of error of the Morning Consult polls is particularly small because these polls have more than 8,000 respondents.
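To see why the Morning Consult intervals are so narrow, recall the usual 95% margin of error for a proportion, 1.96 × sqrt(p(1−p)/n). The 1,000-respondent comparison below is illustrative; only the 8,000+ figure comes from the text:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error, in percentage points, for a proportion p
    estimated from a simple random sample of size n."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

# A typical poll of ~1,000 respondents vs. a poll with 8,000 respondents.
print(round(margin_of_error(0.5, 1000), 1))  # 3.1 points
print(round(margin_of_error(0.5, 8000), 1))  # 1.1 points
```

Quadrupling the sample size halves the margin of error, which is why an 8,000-respondent poll has an interval roughly a third the width of a standard 1,000-respondent poll.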
Another important point is that not all the estimates are lower than the Trump vote. In theory, the polls' estimates should spread on both sides of the population parameter, that is, the vote. Three of them are higher than the Trump vote, all conducted by the same pollster, Atlas Intel. Some would say that there is not much variation, or at least less than we might expect, but this lower variation can be explained by the weighting used, particularly weighting by vote recall or political identification, a practice more common in this election than in previous ones. Note that weighting by vote recall is common in Europe.
Finally, there is no justification for examining differences by mode, since most of the polls are within their margin of error. In addition, my last post before the election showed no difference in estimates between mode combinations. This is important because there was a difference between modes in the last two U.S. presidential elections, a particularly substantial one in 2020 (Durand, 2023; Durand & Johnson, 2021).
Does this mean that the web pollsters, and the others too, improved their methodology? We certainly hope so, and we will look into it as much as possible in the coming months, provided pollsters are transparent about their methods.
Best,
Claire Durand
Note: I want to thank Shawn Leroux and Luis Pena Ibarra for their help in collecting the data for this research.