My GUESSES (not predictions), published on November 5 and 6, all turned out to be close but not quite right. My guess of 281 to 257 for the final Electoral College vote was off from the actual tally of 332 to 206 because I assigned the swing states of Colorado, Virginia, and Florida incorrectly. My guess for the Senate was 53 to 47 while the actual result was 55 to 45; there my bias leaned in the other direction. My guess for the House of Representatives was 241 to 194 while the actual result seems headed for 234 to 201, which is in the direction of my bias.
My guesses were not great, but all three were at least in the ballpark. While I wish they had been more accurate, they were not bad for little more than cursory looks at some public polling data. And as it turns out, my guesses were actually a lot better than those of many others, including many who should have done better.
Anybody who examined and analyzed the DATA provided by the numerous publicly available polls about the presidential election either nailed the prediction or at least came close. The fact is that for many months the DATA consistently showed the incumbent leading in the Electoral College. Anyone doing data-driven analysis would not have been shocked by the outcome of the presidential election.
It’s okay to be wrong in this parlor game of predicting the results of an election as long as the prediction rests on a reasonable analysis of the data. Polls are not necessarily predictive; they are snapshots of answers as of the day they are taken, and in a close race one could simply end up on the wrong side in guessing where a state will fall. But it is not acceptable at all to claim to “predict” an outcome by ignoring data and simply making an emotional statement in support of a favored candidate.
The recent election revealed those who have credibility and those who have no credibility in making predictions. Those who lack credibility in a particular area should not be used as a source or otherwise relied upon in that area.
Among many, the people listed below who were cited in recent news reports have demonstrated credibility in predicting an election outcome: their data-driven processes led them to results that were either precise or close. Their opinions on future election outcomes might be worth considering.
Nate Silver (NY Times)
Josh Putnam (Davidson College)
Simon Jackman (Stanford University)
Drew Linzer (Emory University)
Sam Wang (Princeton University)
Ezra Klein (The Washington Post)
Philip Klein (The Examiner)
Among many, the people listed below who were cited in recent news reports have demonstrated that they have no credibility in predicting an election outcome, apparently because they offered emotional punditry and little to no actual analysis of relevant data. Their opinions on future election outcomes are NOT worth considering. (For the record, lacking credibility in making predictions is only that. These people may very well be credible in other areas, capable of doing their real jobs, and decent human beings.)
Dick Morris (Fox News)
Michael Barone (The Examiner)
James Pethokoukis (American Enterprise Institute)
Kenneth Bickers (University of Colorado) and Kevin Berry (CU-Denver)
George Will (The Washington Post)
Jim Cramer (CNBC)