Actually, you should look at margins, not vote share
When it comes to polls, 50 percent is just another number
You often hear pollsters say that a candidate’s vote share is more important than the margin between the two candidates in a given poll. That’s not always true. And it’s not at all true in a presidential race like this one, where the candidates are divided by less than 2 points across all the battlegrounds.
The race is a pure tossup, even if one candidate hits 50 percent in a state poll from time to time.
What I’ll call The 50 Percent Myth is mostly a superstition that’s grown up among gut-driven political consultants, something that has no basis in statistical probability. Pollsters will use a focus on vote share to explain away a poll that called the winner or the final margin wrong, but got the vote share for their candidate right (likely by coincidence).
From what I can tell, I’m the rare professional pollster who doesn’t put much stock in vote share—except as a byproduct of the margin and a survey’s share of undecided voters.
Carl Allen and the great Nate Moore have both written pieces about this recently. In many of the most famous polling errors of recent years, say, Wisconsin in 2016, polls got the Democratic candidate’s share of the vote right, but they underestimated Trump’s share. So these polls weren’t completely wrong: undecideds just broke massively for Trump at the end.
And, at first blush, this sounds like it could be a plausible explanation. Things usually break for one candidate in the final weekend, which is why you shouldn’t take early October polls as gospel. In this election, the difference between the final polls and the result will probably be bigger than the movement in the polls from now through election day.
But the assumption above requires one to believe that all the undecideds broke for one candidate at the end, which simply doesn’t happen often—if ever.
The exit polls are far from perfect, but it’s worth looking at what the Wisconsin exit polls said about late deciders in 2016. The 14 percent of the electorate that decided in the last week broke about 2-to-1 for Trump, which if accurate is enough to explain a good chunk of the polling error—maybe half—but not all of it.
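As a sanity check on that “maybe half,” the back-of-the-envelope arithmetic looks like this. The roughly 7-point total miss is my approximation of the gap between Wisconsin’s final 2016 polling averages (Clinton up about 6.5) and the result (Trump by about 0.8); it is not a figure from the exit polls themselves.

```python
# Rough arithmetic behind "a good chunk of the polling error...but not all."
late_deciders = 14.0                  # % of electorate deciding in last week
trump_late = late_deciders * 2 / 3    # exit polls: roughly 2-to-1 for Trump
clinton_late = late_deciders * 1 / 3
net_shift = trump_late - clinton_late # margin points gained vs. an even split

# Assumption: final Wisconsin averages had Clinton up ~6.5 and Trump won
# by ~0.8, for a total miss of roughly 7 points on the margin.
polling_miss = 7.3
share_explained = net_shift / polling_miss

print(f"Late-decider shift: {net_shift:.1f} pts")
print(f"Share of the miss explained: {share_explained:.0%}")
```

Under these assumptions the 2-to-1 break moves the margin about 4.7 points, which accounts for a majority of the miss but leaves a few points unexplained.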
In 2020, late deciders had less explanatory power in Wisconsin. They broke slightly for Biden, and the polls still overestimated his margin by around 8 points.
A simpler explanation is that the pre-election polls were just wrong. That the 2016 polls hit the Democratic candidate’s vote with relative accuracy was pure coincidence. When polls with any amount of undecideds overestimate one candidate, they will automatically hit that candidate’s share of the vote more closely than the one they underestimated.
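A toy example shows why this is arithmetic, not insight. If the undecideds break heavily toward the candidate the poll understated, the other candidate’s polled share barely moves, so the poll “nails” that share automatically even as the margin misses badly. All numbers below are made up for illustration:

```python
# Illustrative only: a poll with undecideds that misses the margin badly
# can still hit one candidate's vote share by arithmetic necessity.
poll_dem, poll_rep = 46.0, 44.0        # polled shares (percent)
undecided = 100 - poll_dem - poll_rep  # 10 points undecided

# Suppose nearly all undecideds break Republican on election day.
actual_dem = poll_dem + 0.5            # Dem gains almost nothing
actual_rep = poll_rep + undecided - 0.5

margin_error = (poll_dem - poll_rep) - (actual_dem - actual_rep)
print(f"Dem share error: {abs(actual_dem - poll_dem):.1f} pts")  # tiny
print(f"Margin error:    {abs(margin_error):.1f} pts")           # huge
```

The poll here misses the margin by 9 points while “correctly” calling the Democrat’s share to within half a point, exactly the pattern seen in Wisconsin.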
Carl Allen, author of a book called The Polls Weren’t Wrong, brings up Maine in 2020 as an example. He says that Nate Silver should not have assigned Trump any probability of winning the state—instead of the 1-in-10 chance he did—not just because Biden led by a healthy margin, but because he was polling at 53 percent. Here’s Allen’s logic:
That's because…if you get 50%, you can't lose. That's why being close to 50% is good. I know this is not a super advanced concept, but I can say with certainty that Nate Silver and other forecasters and analysts don’t understand it.
But what else was happening in Maine that year? A Senate race where no poll had Susan Collins ahead and yet she ended up winning by 8 points. What did the polls at the time say about the candidates’ vote shares?
On average, polls taken in October had Democrat Sara Gideon at 47 percent and Collins at 43 percent. The two polls that tested the ranked-choice final round had Gideon at 51 and 54 percent.
What did Gideon end up getting? 42.4 percent of the vote, losing to Collins by 9 points. Gideon got 5 points less than her polling share in the first round.
Again, the polls were simply wrong. They reflected the wrong electorate or there was massive persuasion involving not just undecided voters, but Gideon voters switching to Collins. Vote share was no more informative as a metric than the margin shown by the deeply flawed polls in that race.
In the end, I’d still rather be at 50 percent than not. And I’d rather be there with a low undecided number, so there’s less that can go against me. But a candidate leading 51-48 is not that much more likely to ultimately win in the final vote count than one polling at 47-44, assuming both polls are taken at the same point before the election. The margin of error applies about equally in both cases, whether or not one candidate hits the “magic” 50 percent number.
Looked at another way, 50 percent is better than 48 percent in the same way that 48 percent is better than 46 percent. Polls are an imprecise measurement, and there’s nothing magical about hitting 50 percent when the real number could plausibly be anywhere from 48 to 52 percent. Generally speaking, it’s a good sign, but probabilistically it’s not a surefire bet that you’re going to win.
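One way to see this claim is a quick simulation. Treat the polled margin as the center of a normal error distribution, and let each point of undecideds add a little extra noise to the final margin. The 4-point standard deviation on the margin and the 0.3 points of noise per undecided point are my assumptions for illustration, not estimates fit to data:

```python
import random

random.seed(0)

def win_prob(margin, undecided, sigma=4.0, trials=100_000):
    """Monte Carlo: polled margin plus normal polling error, plus an
    even-in-expectation but noisy break of the undecideds.
    sigma=4 pts on the margin is an assumption, not an estimate."""
    wins = 0
    for _ in range(trials):
        # Undecideds split 50/50 on average; each point of undecideds
        # adds a little extra variance to the final margin.
        und_break = random.gauss(0, 0.3) * undecided
        final = margin + random.gauss(0, sigma) + und_break
        wins += final > 0
    return wins / trials

p_5148 = win_prob(margin=3, undecided=1)   # 51-48, 1 pt undecided
p_4744 = win_prob(margin=3, undecided=9)   # 47-44, 9 pts undecided
print(f"51-48 win probability: {p_5148:.2f}")
print(f"47-44 win probability: {p_4744:.2f}")
```

Under these assumptions, the 51-48 candidate wins only a few points more often than the 47-44 one: better, but not a different category of safe.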
The two numbers you need: Margin and undecideds
Ultimately, the two numbers you need to look at in the topline of any poll are the margin and the undecided share. Together, these back out to a fixed share of the vote for each candidate, but focusing only on an arbitrary 50 percent number can be a distraction. Polling at 50 percent doesn’t rule out the possibility that you will end up with less than that and lose.
The higher the undecided number goes, the more volatile a race is and the greater the chance of a polling error. That’s why you shouldn’t only look at margin. But in a race with so few undecideds, the margin itself is about the best you can do in terms of analysis.
If there are lots of undecideds, then you need to look at who the undecideds are. And if there’s a decided partisan split in the undecideds, it’s likely the race will move in the direction of the party with the most undecideds. This is arguably what’s happened in many red state Senate races in the last few cycles, where most of the undecideds were Trump voters or Republicans. Ultimately, the results ended up looking more like the state’s underlying partisanship.
A good example of this right now is the Nebraska Senate race, where independent but Democrat-supported Dan Osborn is running close to incumbent Republican Deb Fischer. In this deep red state, Fischer is running just 1.4 points ahead of Osborn, with 43.5 percent in the 538 average versus Osborn’s 42.1. The high undecided number means there’s a lot more potential for polling error than there would be if one candidate were polling at 50 and the other at 48. But, unlike a race within the margin of error with few undecideds, it’s relatively easy to guess which way the results will go relative to the polls: toward the Republican, who benefits from running in a Trump +20 state. Getting much above 45 percent is going to prove a challenge for Osborn, as it did when this gambit was tried in Kansas in 2014 or Utah in 2022.
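To make the undecided-allocation logic concrete, here is a back-of-the-envelope sketch using the 538-average figures above. The 60-40 Republican break among undecideds is an assumed number chosen to reflect the state’s Trump +20 lean, not anything measured:

```python
# Back-of-the-envelope: allocate Nebraska's undecideds by partisanship.
fischer, osborn = 43.5, 42.1          # 538 polling averages (percent)
undecided = 100 - fischer - osborn    # ~14.4 points outstanding

# Assumption: undecideds in a Trump +20 state break 60-40 Republican.
rep_break = 0.60
fischer_final = fischer + undecided * rep_break
osborn_final = osborn + undecided * (1 - rep_break)

print(f"Fischer: {fischer_final:.1f}, Osborn: {osborn_final:.1f}")
```

Even this modest partisan tilt among undecideds turns a 1.4-point polling gap into a comfortable Fischer win, which is why the topline margin understates her position.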
Vote share is not completely uninformative as a metric, but ultimately, it’s just a byproduct of margin, the share of voters who are undecided, and who the undecideds are.
Seems like you could test the hypothesis that 51-48 is not a materially better polling position than 47-44 by looking at past elections. What is the winning percentage of candidates who were polling at 51-48 compared to candidates who were polling at 47-44? If they are approximately the same, then you are correct. But if candidates polling at 51-48 win their races at a substantially higher rate than those who poll at 47-44, your hypothesis is incorrect.
Have you done this work, Patrick? What results did you discover?
Claim: 51-48 is not that much more likely to ultimately win...than one polling 47-44.
Citation needed.
Can you support this claim or did you just make it up?
Because, unlike you, I've researched this, and it's not true.
Maybe read the book you're citing before citing it. You might learn something.
So will you provide evidence to support your claim that 51-48 is basically as good as 47-44, or admit you're wrong?