Our mission to make business better is fueled by readers like you. To enjoy unlimited access to our journalism, subscribe today.
Joe Biden was forecast to win the election comfortably, with wide margins. And while he did pull out a victory—claiming 306 electors—the state-level results were much closer than expected: The Democratic nominee was forecast by Nate Silver’s FiveThirtyEight to win Wisconsin by 8.3 percentage points and Pennsylvania by 4.7 points; in the end, he took those states by just 0.6 points and 1.2 points, respectively.
So just how far off are pollsters and forecasters?
Now that most states have certified their election results, Fortune calculated 2020 model and polling errors. The difference between a battleground state’s projected margin and the final certified result is its “polling error.”
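The calculation is straightforward. As a minimal sketch, here it is applied to the two margins quoted above (Wisconsin and Pennsylvania; positive numbers represent Biden's projected or actual margin in percentage points):

```python
# Forecast margins (FiveThirtyEight) vs. certified results, in points,
# as cited in this article. Positive = Biden's margin.
forecast_margin = {"Wisconsin": 8.3, "Pennsylvania": 4.7}
actual_margin = {"Wisconsin": 0.6, "Pennsylvania": 1.2}

# Polling error = absolute gap between projected margin and final result.
polling_error = {
    state: round(abs(forecast_margin[state] - actual_margin[state]), 1)
    for state in forecast_margin
}
print(polling_error)  # {'Wisconsin': 7.7, 'Pennsylvania': 3.5}
```

Averaging that gap across all 14 battleground states yields the headline error figures discussed below.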
We found that polling errors this cycle were on par with 2016. In the 14 battleground states, FiveThirtyEight was off by an average margin of 4.1 points in 2020.* In those same battleground states, FiveThirtyEight was off by an average margin of 3.4 points in 2016. Talk about déjà vu.
This time around, the backlash against pollsters is likely subdued by the fact that the forecasted 2020 winner (Biden) came out on top. Despite similarly large model errors in both years, FiveThirtyEight got the winner wrong in only two states this year: Florida and North Carolina. In 2016, that happened in five states: Florida, Michigan, North Carolina, Pennsylvania, and Wisconsin.
And it wasn’t just FiveThirtyEight. The Economist model was off by an average margin of 4.5 points in 2020 battleground states, according to Fortune calculations. RealClearPolitics poll averages were off by an average of 2.9 points in 2020, compared to an average miss of 3 points in 2016. The industry goal is to keep those figures under 2 points.
Almost without exception, 2020 model and polling errors were in Trump’s favor. The Republican outperformed FiveThirtyEight’s forecast in all 14 battleground states, with his biggest surprises coming in the Midwest. The largest error came in Wisconsin, where Trump outperformed the model by 7.7 points. That was followed by Ohio, where Trump won and outperformed the model by 7.4 points, and Iowa, where the President pulled out a win following a 6.8-point swing in his favor.
The President’s strong performance in the Badger State might even have been a shock to the Trump campaign: In the race's final weeks, Trump was pulling ads in Wisconsin and moving money to Pennsylvania. In the end, Biden’s Pennsylvania win (1.2 points) was twice the size of his Wisconsin win (0.6 points).
These polling errors—especially in the Midwest—are eerily similar to four years ago. In fact, FiveThirtyEight’s three biggest battleground misses in 2020 are the same three states where it had the largest model errors in 2016: Iowa (6.6 points), Ohio (6.2 points), and Wisconsin (6.0 points).
Following the big battleground misses in 2016, a study commissioned by the American Association for Public Opinion Research concluded that polling—which feeds models like FiveThirtyEight’s—had missed some of Trump’s white working-class vote by not weighting survey populations by education. Heading into 2020, many pollsters revised their methodologies to account for that error. But the similarly large 2020 polling misses show that fix alone wasn’t enough.
So what exactly is driving the errors?
A leading theory: Trump supporters are simply less likely to respond to pollsters. The President has repeatedly called unfavorable polls “fake news.” Supporting this theory is the fact that in recent years telephone polls have seen their response rates decline. Dan Wagner, CEO of Civis Analytics and former chief analytics officer for Barack Obama’s 2012 election campaign, told Fortune that these non-responding Trump supporters are likely contributing to the polling errors. To help account for the issue, Civis Analytics, which does work for Democratic campaigns, increased its investments in outreach methods and statistical controls prior to the election cycle.
The title of most accurate pollster this cycle goes to IBD/TIPP, which had Biden up 4 points nationally. As of Tuesday, Biden leads Trump in the popular vote by 4 points, well under the RealClearPolitics national poll average of 7.2 points. Raghavan Mayur, president of TechnoMetrica (IBD’s polling partner), told Fortune the firm succeeded by not relying on any single polling method, instead using a combination of outreach by mobile phone, landline, and online.
So what does this mean for future election cycles? Expect pollsters to do some serious soul-searching and reevaluation of their methodologies.
*Battleground states as determined by Fortune include Arizona, Florida, Georgia, Iowa, Maine, Michigan, Minnesota, Nevada, New Hampshire, North Carolina, Ohio, Pennsylvania, Texas, and Wisconsin.