An Anti-ode to Polling
Polling has achieved immortal powers in our lives, and that is frightening
I think it’s safe to say that we have reached the pinnacle of election polling’s influence.
For one thing, we are saturated with and bombarded by polling everywhere we look. Obviously, the anxieties surrounding the upcoming election have created an over-awareness of and heightened sensitivity to polling and its results.
But beyond even that, polls have now achieved almost immortal powers.
In 2024, polling resulted in a major American political party forcing its incumbent president to withdraw as its nominee a little more than three months before the presidential election. It’s one of the most shocking and monumental events in presidential election history.
Regardless of what you think about polling, or how accurate or effectual you believe it is, it can’t be denied that it matters in the real world. Perceptions are shaped. Behavior distorted. Panic engendered.
It’s become, in a very unsettling way, a ubiquitous, perpetual measure of ourselves, which we use to tend to, improve upon, perfect, groom, frame, and perceive ourselves. One poll comes out, we determine where things stand, why, and what to do about it for the 24-48 hours before the next poll comes out, and then we repeat the process over and over again, ad infinitum.
And now it has done what was once unthinkable: it removed a president.
Polling Removed a President
For several months, polls had driven an ever-creeping narrative that the sitting president, Joe Biden, needed to withdraw from seeking the party’s nomination for another term. The impetus was not just the idea that he was “too old”, but that polls asking “is Joe Biden too old?”, “does Joe Biden have the mental acuity to be president?”, or “should Joe Biden drop out?” were showing that a large majority of people, even within the Democratic Party, thought he was too old, not up to the challenge of being president, and should drop out.
Then the first debate with Trump happened, and we all know how that went. This supercharged these narratives, and the poll numbers regarding these questions got worse for Biden. Although, notably, the straight-up head-to-head numbers with Trump did not change very much at this point.
A couple weeks later, there was an assassination attempt on Trump, and a few days after that, the Republican National Convention, each typically good for a short-term bump in poll numbers. The polls continued to worsen for Biden to some degree, both in the specific questions regarding age and mental competency, and in the head-to-head numbers.
While all this was going on, prominent Democrats progressively went from suggesting that Biden should consider dropping out, to leaking unfavorable stories about his mental acuity and about visits from party leadership encouraging him to drop out, to outright smearing him as dementia-riddled and threatening to call publicly for him to step down. Predictably, his poll numbers dropped further.
Finally, Biden gave in: he saw the worsening poll numbers, apparently believed the narrative that he “couldn’t win”, and stepped down from the nomination. In this way, polls determined his outcome. Not scandal, malfeasance, or incompetence, but polling. Yes, Democratic Party leadership pressured him to step aside, but what were they basing those actions on? Polling.
If the polls had not been suggesting that most people thought he was too old or should drop out, and had not been getting worse for him, he likely would not have been asked to drop out. Whatever you think of the actions taken to force Biden out, the fact is that polling drove the negative narrative to a head, and a major, unprecedented decision was made based on it.
Ozempic time for polling
But should polls be given this much weight? They are very heavy now, indeed. Is it time to slim them down in our collective minds? To answer that, we need to look at the value of polls in our daily lives.
When we consider the value of polls, we have to consider what they are and what they mean. They are a measure of popular sentiment about political candidates, based on a set of assumptions, at a given time and place. They might capture the sentiment at that moment, but they also might not. If all polls showed the same thing, we might be able to base our assessments on them with confidence. But, alas, they often do not, as some pollsters are higher quality than others, some are biased, and some even have nefarious intentions. The accuracy of any given poll is, therefore, hard to trust.
Polls are not “accurate” in the sense that the real-world result they measure rarely matches the poll numbers. Even if a poll correctly predicts a binary outcome, it can miss the magnitude of that outcome badly enough to render the poll practically meaningless. If you don’t believe that statement coming from someone like me, consider the same statement coming from a prominent polling outlet.
A FiveThirtyEight article published soon after the 2022 midterm elections has the deceptive headline “The Polls Were Historically Accurate In 2022”. Despite the tone of the headline, the article itself is largely about how polls are not accurate, have a large margin of error, and should not be used for predictive purposes. This seems like a counterintuitive conclusion from pollsters about their own product, but really it’s not too surprising.
Pollsters tend to balance the analyses of their polls between gloating about being right and caution about possibly being wrong. Despite fairly notorious misses over the last few election cycles, polls are still widely sought after, amplified in the media, and treated as gospel by pundits and high-information consumers of news. They are safe products to peddle in this regard, similar to gambling. You can issue all the warnings and disclaimers you want about relying on them, but no one really cares…they just keep using them and relying on them. And then, when your poll is grossly off, you can point to the disclaimer and say “well, I told you not to rely on us”.
Are polls “accurate”?
The “historically accurate” statement in that article’s headline stems from the error margin in 2021-2022 being lower than it historically has been. Here’s the article’s fairly straightforward definition of “error margin”:
the difference between a poll’s margin and the actual margin of the election (between the top two finishers in the election, not the poll). For example, if a poll gave the Democratic candidate a lead of 2 percentage points, but the Republican won the election by 1 point, that poll had a 3-point error.
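To make that definition concrete, here is a minimal sketch of the arithmetic in Python. The function name is mine, and the numbers are simply the hypothetical example from the quote above, not real poll data:

```python
def poll_error(poll_margin: float, actual_margin: float) -> float:
    """Error as defined above: the gap between the poll's margin and the actual result.

    Margins are signed: positive means the Democrat leads, negative means the
    Republican leads.
    """
    return abs(poll_margin - actual_margin)

# The example from the definition: the poll showed the Democrat up 2,
# but the Republican won by 1 (an actual margin of -1), for a 3-point error.
print(poll_error(poll_margin=2, actual_margin=-1))  # 3
```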
Here’s a chart from that 538 article showing the error margin for all major elections from 1998 through 2022.
You can see that the error margin was indeed on the low end of the spectrum in 2021-2022, hence the somewhat deceptively worded headline of the article. Although, it was only the House races that were the lowest ever in their category, and by far; this pulled the average across all the election categories in that cycle downward, so that the combined error margin was tied for the all-time lowest as a result. The Senate and Governor error margins were only in the historically mid-to-low range. Therefore, the error margins were not particularly consistent across the election types, which resulted in a somewhat skewed combined number.
The range from lowest to highest error margin across these election categories (Senate, House, Gov.) in 2021-2022 was 1.1 (4.0 for House and 5.1 for Gov.). The election cycle that tied 2021-2022 for the lowest-ever combined error margin, 4.8 in 2003-2004, had a range of 0.5 (5.3 for Senate and 5.8 for House). The next-best election cycle was 2017-2018, which had a combined error margin of 4.9 and a range of 1.0 (4.2 for Senate and 5.2 for Gov.). Thus, two out of the three lowest combined error margins had a significantly large range of margins across election types.
This means that their combined error margins tended to be skewed lower due to historically low error margins for just one specific election type (House in 2021-2022 and Senate in 2017-2018). The tied-for-lowest combined error margin of 4.8 in 2003-2004 could be considered the most accurate, since its range was the tightest of the three at 0.5…until you consider that the general presidential election’s error margin of 3.3 that year was the lowest in the chart’s timeframe and therefore skewed that year’s numbers; when that is factored in, the range for that election cycle goes from 0.5 to 2.5 (3.3 Gen. Presidential to 5.8 House), a huge difference.
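If it helps to see why a single outlier category drags the combined number down, here is a toy illustration. The figures are loosely modeled on the values quoted above, with the missing categories filled in as hypothetical placeholders, so treat it as a sketch of the averaging effect rather than a reproduction of the chart:

```python
# Two made-up cycles of category error margins. Cycle A has one all-time-low
# category; Cycle B's categories are tightly clustered. A simple average hides
# how different they are.
cycle_a = {"Senate": 5.4, "House": 4.0, "Governor": 5.1}  # one outlier low
cycle_b = {"Senate": 5.3, "House": 5.8, "Governor": 5.5}  # tightly clustered

for name, cycle in (("A", cycle_a), ("B", cycle_b)):
    combined = sum(cycle.values()) / len(cycle)
    spread = max(cycle.values()) - min(cycle.values())
    print(f"Cycle {name}: combined {combined:.1f}, range {spread:.1f}")
# Cycle A: combined 4.8, range 1.4
# Cycle B: combined 5.5, range 0.5
```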
All this is to say that the label of “Historically Accurate” polling in 2021-2022 is very generous, and rests mostly on the historically low error margins of the House races alone. As with most things in polling, the results are all over the place: haphazard, with no discernible pattern. It is also worth noting that the last two presidential elections had the highest level of error during this time period. You’d think that, as the headline of the article implies, polling would be getting more accurate as time goes on, with technology improving and information becoming more readily accessible…but you’d be wrong.
Also, do you notice how high the margins of error in general are? The most accurate has an error margin of 4 points! In the real world, this is a huge amount, especially in our current closely divided electorate. These days, if someone wins by 4 points, it’s practically a blowout. But if someone is polling 4 points better than their opponent, due to historical error margins, that means the race is essentially tied. And this is the most accurate level. The overall combined error margin over the last quarter century is 6.0!
The House election cycle of 2019-2020, the one prior to the all-time low, had a historically high error margin of 6.5. The 2015-2016 cycle had the second-highest combined error margin at 6.8. What will the accuracy be this upcoming election cycle? Who knows? There’s no discernible pattern, and no compelling reason offered in the article to think it will be either more or less accurate than the last one.
That overall 6.0 combined error margin since 1998 essentially means you could have thrown out any election poll in the last 26 years, including this year, in which a candidate was leading by 6 points or less, as that poll didn’t tell you much of anything.
This also means that any poll that is just beyond the margin of error, and therefore might be somewhat more meaningful, still indicates that the race is very winnable for the candidate polling behind. An 8–10-point polling lead might give the impression a blowout is imminent, but based on average error margins it could really be only a 2–4-point lead. At that level, the trailing candidate is easily within striking distance, and probably shouldn’t give up and walk away. A great news cycle, a new campaign approach, or a newly uncovered scandal for the frontrunner could change the race significantly.
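As a back-of-the-envelope way of reading a lead against that history (my own rough sketch, not a method from the 538 article), you can simply discount the lead by the long-run average error:

```python
AVG_ERROR = 6.0  # the overall combined error margin since 1998 cited above

def worst_case_lead(poll_lead: float, avg_error: float = AVG_ERROR) -> float:
    """The lead left over if the poll is off by the historical average, against the leader."""
    return poll_lead - avg_error

for lead in (4, 8, 10):
    print(f"Poll lead {lead:>2}: worst-plausible real lead {worst_case_lead(lead):+.0f}")
# Poll lead  4: worst-plausible real lead -2
# Poll lead  8: worst-plausible real lead +2
# Poll lead 10: worst-plausible real lead +4
```

By this crude reading, a 4-point poll lead can evaporate entirely, and an 8–10-point lead shrinks to the 2–4-point range described above.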
This all has the following practical meaning: unless you’re looking at a consistent double-digit blowout in the polling numbers, it’s anyone’s race at any given time. In other words, there’s very little practical value to the average consumer in election polls.
How useful is polling in calling elections?
It turns out polls are not very predictive and should not be used to predict elections, according to the author of the 538 article himself.
Consider the following chart, from the same 538 article:
You might say, “Wait, 78% is still pretty good”. Sure, one could say that. One could also say it’s not really that good, and that the fact that our media class and the high-information portion of the electorate are obsessed with and dependent on polling for their news stories and opinions is a problem. One could also look at the trend in the chart and notice polls are getting less predictive over time. And one could also consider what the author himself says about this hit rate, and his subsequent rationalization:
But that low hit rate doesn’t really bother us. Correct calls are a lousy way to measure polling accuracy. (italics are all mine).
Suppose two pollsters released surveys of a race that Democrats eventually won by 1 point. One of the pollsters showed the Republican winning by 1 point; the other showed the Democrat winning by 15 points. The latter pollster may have picked the correct winner, but its poll was wildly off the mark. So we’d be very wary of trusting it in a future election. The other pollster may have picked the wrong winner, but it was well within an acceptable margin of error; essentially, it just got unlucky.
The polling publication itself does not believe that 78% is “good”, labeling it a “low hit rate”. The author even dismisses the hit rate as a measure of predictive accuracy altogether.
I’m not trying to disparage the author or 538 at all. Like many pollsters who try to caution people from reading too much into their polls, he’s speaking the truth, which is that you shouldn’t take polls too seriously or give them too much value. On the other hand, the headline touting the “historical accuracy”, perhaps an editorial decision to promote the article and publication, was definitely bordering on—if not outright—deceptive, and serves to continue the prominence of polling in our political discourse.
Here is another notable quote from the article:
Polls’ true utility isn’t in telling us who will win, but rather in roughly how close a race is — and, therefore, how confident we should be in the outcome. Historically, candidates leading polls by at least 20 points have won 99 percent of the time. But candidates leading polls by less than 3 points have won just 55 percent of the time. In other words, races within 3 points in the polls are little better than toss-ups.
…polls have a worse chance of “calling” the election correctly if they show a close race. In fact, the percentage of correct calls made is simply a function of how close the polls are.
So basically, if the polling is a double-digit blowout, it accurately predicts something, and if polling is close at all, it doesn’t really predict anything. It’s no better than flipping a coin.
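To see why close polls barely “call” anything, here is a toy Monte Carlo simulation. It is my own illustration, assuming poll error is roughly normal with a spread consistent with the ~6-point average error discussed above; 538’s percentages come from real races, not a model like this:

```python
import random

# Assume the actual margin equals the poll's margin minus a normally
# distributed error. SIGMA is chosen so the mean absolute error is ~6 points.
SIGMA = 7.5

def hit_rate(poll_lead: float, trials: int = 100_000) -> float:
    """Fraction of simulated races in which the poll's leader actually wins."""
    hits = 0
    for _ in range(trials):
        actual_margin = poll_lead - random.gauss(0, SIGMA)
        hits += actual_margin > 0
    return hits / trials

random.seed(0)
for lead in (1, 3, 6, 10, 20):
    print(f"{lead:>2}-point poll lead -> winner called ~{hit_rate(lead):.0%} of the time")
# Small leads land not far above coin-flip territory; only blowout-sized leads
# approach certainty, echoing the 99%-vs-55% split quoted above.
```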
Keep in mind that the analyses in this article are based on polls conducted 21 days or less from the elections in question. Accuracy and value also vary as a function of time: the further out you are from the election, the less accurate or valuable a poll is. It makes no sense to consider a poll from 2023, or even from June or July 2024, for the presidential election in November 2024. But that’s what has been done, and continues to be done, this election cycle. And this is what drove one major political party to split and make a historically drastic change to its ticket. Polls resulted in the removal of a president three and a half months out from an election, when they are known to be at their least reliable.
Despite all this evidence, and pollsters’ pleas to take polls less seriously and give them less value, our society takes polls extremely seriously and gives them an inordinate amount of value.
The outsized power of polling
I don’t want to give the impression that the change in the Democratic ticket was unwarranted or in any way the wrong move. For me, the choice at the time was a matter of risk analysis. I, and many others, didn’t like the risk involved in moving to a new candidate, even though I felt that any of the potential Democratic candidates, including Kamala Harris, was highly capable and would handily win the election.
My low risk tolerance happened to be unwarranted, thankfully, but no one foresaw what is actually happening now: a quick, effortless coalescence by Democrats around a candidate who has very little national campaign strategy background, and yet has resonated with what seems to be the perfect messaging for our times.
That said, Harris’ current positive poll results should be taken about as seriously as Biden’s negative poll results. The swing in poll results from one end to the other reflects something, perhaps: a change in mood, and a level of enthusiasm not baked in before. But based on the historical error margins and lack of predictive capability examined in this article, they are practically meaningless, especially this far out from the election.
This does not mean that one should not be excited and feel good about where the Democratic ticket stands now. One can see and hear the enthusiasm in the raucous crowds, note the results of recent elections, and use common sense to expect that the trend of Democratic electoral success will continue. Harris and Walz are positive, likeable people, and a welcome, sharp contrast to the vibes that Trump, Vance, and MAGA have been exuding.
Perhaps the best time to reflect on polling, and on our reliance on it, is after the election and inauguration are over and MAGA is waning in importance and effectiveness.
But it is worth reflecting on and reckoning with at some point. Polling has taken on a life of its own. It’s a multi-headed monster whose power we are progressively succumbing to. Despite the encouraging way that subsequent events have played out for this election, there is cause for concern about how this will impact us and drive our decisions in the future.