As I write this, Obama has 303 electoral votes and leads by 2.4 percent (50.4 to 48) in the popular vote. Florida seems extremely likely to be called officially for Obama at some point, which will put him at 332.
In the final run of his model that aggregates polling data and makes certain assumptions based on history, Nate Silver at FiveThirtyEight showed Obama the clear favorite in states totaling 303 electoral votes and a tiny favorite to win Florida, the only true tossup. The model predicted the final popular vote margin would be 2.5 (50.8 to 48.3).
So the polls were right. Not all of them, of course. But viewed collectively, the polls (and Silver) pretty much nailed it — just like in 2008. Independent polling companies and analysts like Silver — the best known but certainly not the only statistician aggregating data — have a vested interest in getting it right. Their livelihoods depend on turning out quality products. In spite of the inherent difficulties of heavy cell phone use and low response rates, the pollsters are figuring out how to generate solid data. (FWIW, on Monday on my own blog, I predicted Obama would get 332.)
Given margins of error, any single poll of a moderately close state could be wrong even if the poll was conducted under ideal conditions. But we don’t have to rely on individual polls or pollsters in presidential elections. In the final 33 polls of Ohio, Romney led in two, two others were tied, and Obama led 29 times. In 12 of those polls, Obama’s lead was 4 points or more. As it turned out, Romney did better in Ohio than that mass of polling indicated, but there was almost no chance of a different outcome given Obama’s consistent leads.
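The arithmetic behind "almost no chance" is easy to sketch. Here's a deliberately naive sign test on the Ohio numbers above (this is just an illustration, not anything resembling Silver's actual model): if the race had truly been a coin flip, how often would one candidate lead in 29 of the 31 decisive polls?

```python
# Toy sign test using the Ohio figures from the text: of the final 33 polls,
# Obama led in 29, Romney led in 2, and 2 were ties. Under a truly tied race,
# model each of the 31 decisive polls as a fair coin flip and ask how likely
# 29 or more Obama leads would be. Real polls are NOT independent (shared
# methods and herding correlate their errors), so this badly overstates the
# certainty; the point is only directional.
from math import comb

decisive = 31        # 33 polls minus the 2 ties
obama_leads = 29

p = sum(comb(decisive, k) for k in range(obama_leads, decisive + 1)) / 2 ** decisive
print(f"P(at least {obama_leads} leads out of {decisive} in a tied race) = {p:.2e}")
# on the order of 2e-7, i.e. a few in ten million
```

Even after generously inflating that number to account for correlated polling errors, the conclusion survives: a consistent 29-of-33 lead is not the kind of pattern a genuinely tied race produces.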
As a numbers guy, I’ll confess to being puzzled by the reluctance of so many Romney supporters to accept the polls generally and Silver’s thoughtful, detailed work specifically. He is unusually transparent about how he works. Some people were clearly swayed by the big names Gallup and Rasmussen, which, right up until their tightening polls in the final days, showed leads — sometimes solid ones — for Romney. But, as Silver patiently explained, Gallup’s likely voter polls had a history of implausible volatility, and Rasmussen had a recent history of leaning Republican.
The Gallup likely voter model is especially problematic. An article in Slate almost a year ago detailed a telling phenomenon: about 55 percent of registered voters in 2008 who told pollsters they would not vote actually did vote, and not all of those who said they would vote did. A likely voter screen that misclassifies that many people can easily skew a poll, particularly if the misclassified voters lean toward one party.
It was also interesting over the last few months to hear the exasperated assertions that any poll that showed more Democrats going to the polls than Republicans must be skewed. These arguments missed the simple fact that pollsters weren’t adjusting their results to match a specific preordained party identification ratio. Party identification was a question in the polls, and party ID is fluid. Plus we have a long history of more voters identifying themselves as Democrats than as Republicans, even in strongly Republican years. Silver wrote a long post back in September explaining that over the years the polls have not shown any consistent bias to the right or to the left. In that same post as well as in others, he clearly documented that state polls tend to be more accurate than national ones.
But it seemed like the more patient Silver was in his explanations, the more he was attacked for things he wasn’t doing.
I hope that work like Silver’s will force the punditocracy to be more thorough, responsible, and mathematical when making predictions. For example, consider that George Will predicted Romney would get 321 electoral votes, including Minnesota, a state where Obama had led every single poll by an average of about 8 points. Obama took the state by about 8 points.