What Looked Like a Toss-Up Turned Into a Red Wave. Did Pollsters Get It Wrong?

(Bloomberg Businessweek) -- To anyone following the presidential election, the polls made the race out to be a real nail-biter that could take days, if not weeks, to decide. Yet here we are on the morning after Election Day, and Donald Trump has won convincingly, besting Kamala Harris in five swing states and leading her in the other two. His win is part of a broad American shift to the right: Republicans have taken control of the Senate and are poised to retain their majority in the House.

Did the polls get it wrong? Well, it certainly doesn’t feel like they got it right. But it’ll take weeks, if not months, for the industry to complete a full autopsy and determine what went wrong, and just how wrong things went.

On Election Day, polls showed an incredibly close race between Trump and Harris in every battleground state. Pennsylvania and Nevada were dead even, according to the polling averages from 538, the poll aggregation and analysis website. Averages in Michigan and Wisconsin—crucial states for Harris to hold—showed her up by only one point.

Trump ended up winning Wisconsin and Pennsylvania, yes, but it was still close. This morning, with votes still being counted, he’s up by less than 1 percentage point in Wisconsin and 2 percentage points in Pennsylvania. He also won Michigan, but leads by only 1.4 points. Those numbers, which may still change, aren’t that far off from the dead heat the polls made the race out to be.

There’s a little more daylight in the other three states. In North Carolina, where Trump was favored by a point, he won by nearly 3.5 points. He’s leading by 5 in Nevada and 4.5 in Arizona, where significant numbers of votes still remain to be counted.

It’s all a reminder that polls are often more accurate than they are precise. They show the general state of the race—toss-up, narrow lead, landslide—better than they predict the final margin. That’s in part because of their margins of error, a phrase you’ve likely heard if you spent much time soothing (or stoking) your anxiety this election season reading the seemingly endless array of political surveys. Polls use random samples of Americans as representatives of the country as a whole. No matter how perfect the samples, they can’t precisely embody the whims of more than 100 million voters. So pollsters issue a margin of error for the poll to give a sense of how likely it is that their small sample is telling an accurate story about the larger public.

What’s that mean in practice? If a poll said Harris was up 1 point in Michigan, but had a 3-point margin of error, the poll was really telling you the state of the race was somewhere between +4 Harris and +2 Trump. Looking at it through that lens, those narrow Trump wins in swing states land largely within the range of what polls showed.
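
To make that arithmetic concrete, here’s a minimal Python sketch of the standard 95% margin-of-error calculation; the sample size and the 1-point lead are illustrative numbers, not figures from any particular poll.

```python
import math

def margin_of_error(n: int, p: float = 0.5, z: float = 1.96) -> float:
    """Return the 95% margin of error, in percentage points, for a
    simple random sample of size n; p = 0.5 is the worst case that
    pollsters typically report."""
    return z * math.sqrt(p * (1 - p) / n) * 100

n = 1000                  # a typical statewide poll
moe = margin_of_error(n)  # roughly 3.1 points
lead = 1.0                # e.g., Harris +1 in Michigan

# The common shorthand reading: the true state of the race sits
# somewhere within lead +/- moe.
print(f"Margin of error: +/-{moe:.1f} points")
print(f"A +{lead:.0f} lead is consistent with anything from "
      f"{lead - moe:+.1f} to {lead + moe:+.1f}")
```

One caveat: that shorthand applies a single-candidate margin of error to the gap between two candidates, and the uncertainty on the gap is actually larger, close to double. Real races are even fuzzier than the ±3 figure suggests.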

Outside the swing states, polls were generally less accurate. For example, the final polling average in Florida showed Trump up 7; he’s set to win the state by 13 points. Likewise, Texas’ final polling average showed Trump up by 8; instead it’s a 14-point margin. And there was a lot of talk about the potential for Harris to steal Iowa after the state’s top pollster came out with a poll showing her up by 3. The vice president lost the state by more than 13 points.

“It will be some time before the final official vote totals are known and we can assess more precisely how the polls did across states and in comparison to 2020. But overall I think they told the right story of races too close to predict, and especially did well capturing the topics of most concern to voters, primarily the economy, abortion and immigration,” says Charles Franklin, a professor of law and public policy at Marquette University Law School in Milwaukee and the director of its poll.

In the meantime, it’s a vibe thing. “When the electorate sort of shifts, and they all shift in the same direction across all these different states, even small misses are going to feel kind of big,” says Christopher Chapp, a professor of political science at St. Olaf College in Northfield, Minnesota. 

But there’s another question to ask: Are polls really supposed to be as all-knowing as journalists and news-addled members of the public (hi!) make them out to be? “We don’t produce these polls to repeat monotonously the score of the game. We produce them to measure the concerns and motivations of voters in a way that we hope informs and even enlightens our understanding of election outcomes,” says Gary Langer, whose firm, Langer Research Associates, leads polling for ABC News.

Even if pollsters this cycle saw a “normal polling error”—as the term of art goes—what’s striking is that the industry by and large erred in the same direction, underestimating Trump’s support for a third consecutive presidential election. “There is clearly something about these elections where Trump is on the ballot, where the polls have a very difficult time reaching Trump voters,” Chapp says. “And that’s something that folks in the polling industry have been scratching their heads on since 2016, and I imagine they will continue to do so.”

In both 2016 and 2020 the polls suggested the Democratic presidential nominee was favored. In 2016, Hillary Clinton was ahead in the final averages in battleground states. And despite winning the popular vote by 2 percentage points, she lost the Electoral College to Trump 227 to 304. In 2020, Joe Biden was far ahead in the polls, but the eventual vote spread in key states was much tighter, and his Electoral College victory was 306 to 232.

This cycle, pollsters tried to avoid those misses. Many tweaked their formulas to ensure their samples were more representative of the eventual electorate. To do so, they weighted some responses more than others. Brian Schaffner, a political scientist at Tufts University, working with his student Caroline Soler, wrote last month that 1 in 6 pollsters in this cycle had added weights for community type, whereas none did in 2016. That was meant to ensure that enough of the rural voters who tend to vote for Trump were counted, he wrote.

Likewise, many pollsters started to include questions about whom a respondent voted for in 2020. Pollsters who weighted based on that question reported a tighter race than those who didn’t, and the polling industry will now need to decide whether to keep asking it in future election cycles.
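
Mechanically, both of those adjustments come down to the same operation: give each respondent a weight equal to their group’s share of the target population divided by that group’s share of the sample. Here’s a minimal sketch, with made-up community-type shares used purely for illustration.

```python
from collections import Counter

# Hypothetical respondents tagged with community type and recalled 2020 vote.
# A real sample would have roughly 1,000 rows; four are enough to show the math.
sample = [
    {"community": "urban",    "vote_2020": "Biden"},
    {"community": "urban",    "vote_2020": "Biden"},
    {"community": "suburban", "vote_2020": "Trump"},
    {"community": "rural",    "vote_2020": "Trump"},
]

# Illustrative population targets for community type (shares sum to 1).
targets = {"urban": 0.31, "suburban": 0.41, "rural": 0.28}

counts = Counter(r["community"] for r in sample)
n = len(sample)

for r in sample:
    share_of_sample = counts[r["community"]] / n
    # Underrepresented groups (here, suburban and rural) get weights above 1;
    # overrepresented ones (urban) get weights below 1.
    r["weight"] = targets[r["community"]] / share_of_sample

for r in sample:
    print(f'{r["community"]:9s} weight = {r["weight"]:.2f}')
```

Weighting on several variables at once, say community type and recalled 2020 vote together, is usually done with iterative proportional fitting, or raking: the same adjustment is applied one variable at a time, then repeated until every target is matched.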

Those conversations will really kick into gear once the final vote tallies are known. “If there were a polling error, the industry will go through the same steps it did in 2020,” said Alexander Podkul, senior director for research science at Morning Consult, before the election. “Were there shy Trump/shy Harris voters? Were people less likely to be honest on the phone? Was it something about late deciders—was there just that shift, and the error wasn’t as bad for polls close to Election Day?” (Morning Consult conducted the Bloomberg News/Morning Consult monthly tracking poll.)

After months spent soliciting others’ opinions, it’s time, once again, for the polling industry to turn inward. 

©2024 Bloomberg L.P.