As America’s pollsters and polling aggregators conduct their postmortems on the 2024 presidential election, some are already saying that their pre-election surveys got it right, noting the close results in each of the seven presidential battlegrounds.

But there’s a key point that can’t be missed: The polls once again understated the depth of support for President-elect Donald Trump, despite the many changes pollsters made after their 2020 and 2016 misses.

To be sure, this year’s polls overall seem to have missed the mark by less than they did four years ago, and the outcome in the swing states was close enough to be within the margin of error for a considerable number of polls, according to an NBC News Decision Desk analysis.

[Photo: Donald Trump at a rally on April 2 in Green Bay, Wis.]

But the missing Trump supporters in public polls meant that pre-election polling averages did not show Trump’s sweep of the swing states, which is why the outcome felt like such a surprise to some, though we probably shouldn’t have been surprised.

NBC News compared Trump’s support in “likely voter” polls conducted in October and November to the percentage of votes he received in the election at the state and national levels. The pattern is similar to what we saw in pre-election polls in the previous two presidential elections: The average poll understated Trump’s support almost everywhere, and in the seven swing states, the miss was consistently between 2 and 3 percentage points.

Pollsters can take some comfort in the fact that polling averages in state-level presidential races did slightly better than they did in 2020, perhaps suggesting that the adjustments pollsters made helped limit the overall polling error. In an analysis of all publicly reported polls conducted in the last two weeks before the 2020 election, those pre-election surveys understated Trump’s support by an average of 3.3 percentage points compared with the final results. In 2024, polls conducted over the same two-week window understated his support by an average of 2.4 points.
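As a rough illustration of the arithmetic behind those averages, here is a minimal sketch that computes each poll’s understatement (final result minus poll estimate) and averages it. The poll figures and final vote share below are hypothetical stand-ins, not the NBC News data.

```python
# Minimal sketch of the error-averaging described above.
# All figures are hypothetical stand-ins, not the NBC News dataset.

final_trump_share = 49.8  # assumed final Trump vote share, in points

# Hypothetical "likely voter" polls from the last two weeks (Trump's share)
late_polls = [47.0, 48.5, 46.9, 48.0, 47.5]

# Understatement = final result minus poll estimate, averaged over polls
errors = [final_trump_share - p for p in late_polls]
avg_understatement = sum(errors) / len(errors)

print(f"Average understatement of Trump's support: {avg_understatement:.1f} points")
```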

The chart below shows the gap between the polls and the results nationally, as well as the number of polls conducted in October and November.

This underestimation of Trump’s support ran through the polls in states across the political spectrum, from solid Democratic states (New York, 4.6 points too low), to states supposedly trending “purple” in Democrats’ dreams (Texas, 4.4 points too low), to solid Republican states (Wyoming, 5.8 points too low), to swing states (Nevada, 2.9 points too low). That’s despite pollsters’ attempts to account for the difficulty of getting Trump supporters to respond to their surveys.

Whether the polls suggested the correct “vibe” about the race depends on where you look. Many polls in the critical swing states — Arizona, Georgia, North Carolina, Michigan, Nevada, Pennsylvania and Wisconsin — suggested either a tied race or a race with a 1-point margin. Given the margins of sampling error and other sources of error in polling, those results left ample room for the final results to swing one way or the other. But after such closely split polls, Trump’s sweep of all seven states still came as a surprise to some.

Most polls correctly pointed to Trump winning Arizona, Georgia and North Carolina, but did not show him leading in Michigan, Nevada, Pennsylvania or Wisconsin. And even when swing state polls correctly identified Trump as leading, a sizable number of surveys under-predicted his support by more than the margin of error.

Trump led in 85% of the Arizona state polls, for example, but 36% of the surveys in Arizona understated his lead by more than the margin of error.
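To make those two percentages concrete, the short sketch below tallies, for a set of made-up Arizona-style polls, how many showed Trump ahead and how many missed his final margin by more than their reported margin of error. Every number in it is invented for illustration.

```python
# Hypothetical polls: (Trump's margin in points, reported margin of error).
# These values are illustrative only, not the actual 2024 Arizona surveys.
polls = [
    (2.0, 3.0), (1.5, 3.5), (-0.5, 3.0), (3.0, 4.0),
    (0.5, 3.0), (1.0, 2.8), (2.5, 3.5),
]
actual_margin = 5.5  # assumed final Trump margin, in points

leading = sum(1 for margin, _ in polls if margin > 0)
beyond_moe = sum(1 for margin, moe in polls if actual_margin - margin > moe)

print(f"Polls with Trump leading: {leading / len(polls):.0%}")
print(f"Polls understating his lead beyond the MoE: {beyond_moe / len(polls):.0%}")
```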

Reasons for the divergence

So, what happened? Two culprits seem likely.

First, it’s possible that the polls — once again — failed to capture enough new voters or voters who changed their votes from backing Biden in 2020 to backing Trump in 2024. Trump may have once again mobilized voters who were willing to cast ballots but unlikely to talk to pollsters, just as we saw in 2020.

If the people who respond to polls differ in their views from people who do not, especially in ways or to a degree that pollsters do not anticipate, then it is difficult — if not impossible — to measure public opinion. Voters who feel disrespected or misunderstood when they share their views with journalists or pollsters may choose to simply avoid them altogether.

Second, pollsters trying to estimate what they thought the 2024 electorate would look like may have just made wrong assumptions, and that easily could have caused a polling error like the one we just saw. This illustrates a difficulty unique to pre-election polling: the need for pollsters to adjust their data to match what they think the electorate will be, without knowing whether those adjustments are correct.

But if the 2024 electorate changed in ways that pollsters’ assumptions did not account for, those adjustments would have been inadequate.

In 2024, for example, many pollsters started weighting their samples so that respondents’ self-reported past vote (that is, whether people said they voted for Trump or Biden in the last election) matched the 2020 outcome. This is a statistical correction that pollsters used this cycle to address the previous undercount of Trump voters. In weighting polls to match the popular vote from four years ago, pollsters were assuming that 2020 Biden voters and 2020 Trump voters would turn out at the same rate in 2024. But if Trump voters were slightly more likely to vote and Biden voters slightly less likely, that alone could easily produce a 2-point understatement of Trump’s support.
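To see how a modest turnout gap could produce roughly that 2-point miss, here is a minimal sketch. It assumes, purely for illustration, that every 2020 voter repeats their 2020 choice and that 2020 Trump voters turn out at 90% versus 83% for 2020 Biden voters; none of these rates come from the article, and the 2020 shares are approximate.

```python
# Sketch of how recalled-vote weighting can misfire under differential turnout.
# Assumes, for simplicity, that everyone votes as they did in 2020; the
# turnout rates below are illustrative assumptions, not measured values.

biden_2020, trump_2020 = 51.3, 46.9  # approximate 2020 national vote shares

# A poll weighted so recalled 2020 vote matches the 2020 outcome implicitly
# assumes both groups turn out at the same rate:
poll_trump_share = 100 * trump_2020 / (biden_2020 + trump_2020)

# Now suppose 2020 Trump voters turn out at 90% and 2020 Biden voters at 83%:
trump_votes = trump_2020 * 0.90
biden_votes = biden_2020 * 0.83
actual_trump_share = 100 * trump_votes / (trump_votes + biden_votes)

print(f"Poll estimate of Trump's two-party share: {poll_trump_share:.1f}%")
print(f"Actual share under differential turnout:  {actual_trump_share:.1f}%")
print(f"Understatement: {actual_trump_share - poll_trump_share:.1f} points")
```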

Where do we go from here?

Nate Silver recently opined that “Polling Is Not the Problem.” We agree. The problem is not polling, but how polls are presented and interpreted. In fact, it is remarkable that a pollster can talk to the 800 people who agree to take a poll and get to a result within a few points of the outcome of an election in which nearly 150 million votes were cast.
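For a sense of what “within a few points” means statistically, the textbook 95% margin of error for a simple random sample of 800 works out to about plus or minus 3.5 points, as the quick calculation below shows. Real polls carry additional weighting and design effects that this formula ignores.

```python
import math

# Textbook 95% margin of error for a proportion near 50% from a simple
# random sample; real polls also carry weighting and design effects
# that this formula ignores.
n = 800
p = 0.5
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"95% margin of error for n={n}: +/- {100 * moe:.1f} points")
```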

Problems occur because people want polls to do more than they actually can — such as deciphering who is leading a tight race or identifying small changes in a candidate’s support. People expect polls to do this because polling results are often presented and discussed in ways that suggest more precision than is actually possible, often in graphics showing candidates separated by mere decimal points. Those averages and to-the-tenth-of-a-point graphics paint a false picture of precision.

While journalism about polls was better in 2024 than in the past — more coverage mentioned the margin of error alongside poll results — much of the media discourse still treated polling as a surgically precise instrument for dissecting political events and campaigns. In fact, polling is more akin to a butter knife than a scalpel — you can get close, but even with skill you need some luck to get it right.

Another problem making it difficult to interpret pre-election polling is that most pollsters avoid disclosing how they collect and adjust their data. Without knowing where the data come from — and especially how pollsters have adjusted and weighted their data in an effort to predict what the electorate will look like — it becomes very hard to evaluate or compare results. When reasonable decisions about weighting can move the margin of a poll by as much as 8 points, it is impossible to know how much the results reflect the decisions of voters, pollsters, or both.

Given these concerns and the polling errors we once again saw in 2024, should we dump pre-election polling (as Gallup and Pew Research Center have done)? While tempting, this isn’t the right course. Properly understood, pre-election polls can play an important role in democracy by providing a sense of what outcomes seem possible. The fact that most polls showed a tight race in 2024 highlighted the possibility that either candidate could win and perhaps helped increase public acceptance of the results.

But things do need to change. In addition to pushing for more transparency — as industry organizations such as the American Association for Public Opinion Research have done — it’s important to take a humbler perspective on what we can learn from a poll. Polls can help identify which issues are more or less important to voters, but they will always struggle to identify winners in 1- or 2-point races. And in a highly polarized nation, those are often the kinds of races we have.

It’s also important for pollsters to be more transparent about the choices that affect their reported results. Even if pollsters want to focus attention on what they consider the best estimate, based on their knowledge and skill, it seems prudent to show how other reasonable choices matter. Given the impossibility of knowing which decisions are best until after the fact, it is important to know whether different but reasonable decisions produce dramatically different estimates. Seeing how the results change under various plausible scenarios — e.g., high Republican and low Democratic turnout, or vice versa — could better convey the range of outcomes that could happen.
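One way to present such a range, sketched below with invented numbers: re-estimate the same poll under a few plausible turnout scenarios and report the resulting spread of margins rather than a single point estimate. The shares and turnout rates here are illustrative assumptions only.

```python
# Sketch of a scenario sweep: one hypothetical poll re-estimated under
# different turnout assumptions. All shares and rates here are invented.

dem_2020, rep_2020 = 51.3, 46.9  # past vote shares used as the weighting target

scenarios = {
    "Equal turnout":          (0.87, 0.87),  # (Republican rate, Democratic rate)
    "High R / low D turnout": (0.90, 0.83),
    "High D / low R turnout": (0.83, 0.90),
}

for name, (rep_rate, dem_rate) in scenarios.items():
    rep_votes = rep_2020 * rep_rate
    dem_votes = dem_2020 * dem_rate
    margin = 100 * (dem_votes - rep_votes) / (dem_votes + rep_votes)
    print(f"{name:24s} Democratic margin: {margin:+.1f} points")
```

Under these made-up inputs, the implied margin swings by roughly 8 points across scenarios, which is the kind of spread a single headline number hides.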

Pre-election polling is hard. That’s not an excuse — it’s a reality. Treating pre-election polls as revealing deep, knowable truths without acknowledging the uncertainty inherent in them risks mistaken interpretations, media cycles driven by specious numbers, and the loss of public faith in the judgment and expertise of those involved in polling and analysis.

While the polls did better in 2024 than 2020, and pollsters can credibly say that their surveys were in the same ballpark as the outcome, we are still asking too much from too blunt a tool. Instead, we should consider how we can use pre-election polls in ways that convey the electoral possibilities at play and better describe the uncertainties involved.

This article was originally published on NBCNews.com
