Thinking retrospectively about the West Coast countercultural movement of the 1960s, Hunter S. Thompson famously wrote that “you can go up on a steep hill in Las Vegas and look West, and with the right kind of eyes you can almost see the high-water mark — that place where the wave finally broke and rolled back.”
Picture instead D.C.’s Capitol Hill in 2020 and he might have been describing the much-ballyhooed “blue wave.”
Joe Biden will take the oath of office in January. He’s likely to garner more than 300 electoral votes, but the legislative branch he’ll work with will be far less blue than Democrats expected. A 50-50 split is now the best-case scenario for Senate Democrats. In the House, Nancy Pelosi’s majority will be noticeably slimmer.
How has the business of election prognostication been caught off guard again? Mr. Biden’s victory was solid, but taken as a whole the results of the election have been no grand triumph for his party, suggesting that we’ve hardly improved at evaluating election outcomes over the last four years.
Chalk up the blame for Election Day surprises to two major factors: what data we choose to rely on, and how we rely on that data. Both are in need of an update.
I’m not here to bury polling. It is not going away, nor should it. Many results this cycle actually fell within the forecasted margin of error. But the egregious misses were legion. An ABC/Washington Post poll on Oct. 28 overshot Mr. Biden’s final margin in Wisconsin by 16 points. Susan Collins won her Senate seat in Maine by 9 points, when most polls had her healthily behind. Misses like these aren’t just wrong — they mislead the public and undermine the legitimacy of the industry.
The media similarly let the public down this cycle with their misunderstanding of polls and their sensationalized blue wave narrative. That possible outcome should have been presented in a more complete context, with room for error front and center and polls treated as just one part of the picture. Data analysts and journalists — everyone compiling data, presenting it and covering it for the general public — will need to work together in the future to make that happen.
Election stakeholders (all of us) need to re-evaluate our relationship with polling. Various pollsters have argued that their methods and adjustments since 2016 make them superior — the best in the business. But let’s take another step back. Is polling even the best way to understand what’s happening? Maybe it is, but I’m not convinced that given a blank slate, the polling business would be able to justify the outsized role it plays in our national conversation.
Consider the many drawbacks of traditional polling, along with the models and forecasts that rely on it. Polls are strictly quantitative, point-in-time measurements relying on top-down question-and-answer tabulations. They are slow to administer and can be obsolete the moment that big news breaks. Polls don’t speak to whole audiences to gain context; they simply ask questions. Focus groups, by contrast, offer a much deeper context for voters’ attitudes and should be a larger part of the analysis landscape.
And beyond traditional focus groups, there are different and richer sources of data. The emerging field of “social intelligence,” for example, treats publicly available social media posts pertaining to the election as one giant data set. Artificial intelligence can classify posts by their level of favorability toward an issue or candidate, and continuously monitor (anonymized) individual accounts to track changes in sentiment. Monitoring these trend lines in public opinion can be revelatory.
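To make the idea concrete: at its simplest, classifying posts by favorability and tracking sentiment over time can be sketched with a toy keyword lexicon. This is a minimal illustration of the general technique, not the actual systems the social-intelligence firms use (which rely on trained AI models); the word lists and post data here are hypothetical.

```python
from collections import defaultdict

# Hypothetical sentiment lexicons -- real systems use trained classifiers,
# not hand-picked word lists.
POSITIVE = {"great", "support", "win", "love", "strong"}
NEGATIVE = {"scandal", "lose", "weak", "corrupt", "bad"}

def score_post(text):
    """Return +1 (favorable), -1 (unfavorable), or 0 for a single post."""
    words = text.lower().split()
    pos = sum(w in POSITIVE for w in words)
    neg = sum(w in NEGATIVE for w in words)
    return (pos > neg) - (neg > pos)

def sentiment_trend(posts):
    """Average sentiment per day, so shifts after breaking news show up
    immediately rather than waiting on the next poll's field period."""
    by_day = defaultdict(list)
    for day, text in posts:
        by_day[day].append(score_post(text))
    return {day: sum(s) / len(s) for day, s in sorted(by_day.items())}

# Toy data: sentiment dips the day a scandal breaks.
posts = [
    ("2020-10-01", "Great rally, strong support for the candidate"),
    ("2020-10-02", "Another scandal, this looks bad"),
    ("2020-10-02", "I still support him"),
]
print(sentiment_trend(posts))
```

The point of the sketch is the trend line, not any single score: because every new post updates the daily average, the measurement is continuous rather than a point-in-time snapshot.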
In Michigan, when polls showed Sen. Gary Peters running far ahead of John James, social intelligence showed Mr. James running far ahead of Mr. Peters in terms of social media support, suggesting that the race might be closer than polls indicated. It was. In North Carolina, polls didn’t show any dents in challenger Cal Cunningham’s lead after a sexting scandal — but social media listening suggested that voters were very much paying attention. Incumbent Sen. Thom Tillis kept his seat.
This different process yields a very different product from polling. The results are, first of all, both qualitative and quantitative. They organically show what people are talking about, and with what level of support. Additionally, social intelligence is always “on.” Instead of spoiling the product, breaking news gives it an opportunity to shine by quickly reflecting rapid changes in sentiment. It’s also cheap, giving scrappy campaigns a much better shot at leveraging data to their advantage.
Unlike horse race polling, social intelligence doesn’t neatly predict the final score. But given its utility and the repeated letdowns of simplistic polls, we should be integrating the millions of data points on social media and other non-traditional data sources into the overall picture.
The polling business will not, and should not, be destroyed. And the media should not make the mistake of sensationalizing its demise, having themselves sensationalized the data it produced. But polling will fall prey to some creative destruction, and rightly so. In strange political times, traditional polling needs to be relegated to being one piece of a broader picture, and new sources of information should assume a larger role on stage.
• Adam Meldrum is the founder and president of AdVictory, a media-buying and audience intelligence service.