Nate Silver is something of an authority on political forecasting. In 2008, his blog FiveThirtyEight correctly predicted the outcome of the presidential race in 49 out of 50 states. (In that same election, he was also right about all 35 senate races.) Bob sits down with Silver to talk about the 2012 election as well as his new book, The Signal and the Noise: Why So Many Predictions Fail—But Some Don't.
BOB GARFIELD: Nate Silver, of the New York Times FiveThirtyEight blog, has a new book out, titled, “The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t.” In it, he cites a landmark study by psychologist Philip Tetlock, who evaluated a sample of political “experts” on their ability to predict events. Every month for a total of two decades, Tetlock asked each expert to predict the likelihood of possible future events, from Quebec seceding from Canada to the fall of the Soviet Union. The results, says Silver, were embarrassing. The experts performed barely any better than random chance, though, interestingly, they didn’t all perform equally badly.
NATE SILVER: One class of people did a little bit better, and that’s what he called “foxes.” [LAUGHS] Ironically, they’re the people who are not looking for a perfect prediction; they’re looking to take different types of information, consider different hypotheses and acknowledge that the world is complicated. The hedgehogs, who are the counterpart to the foxes, tend to be big believers in a grand unified theory, so they see everything through the prism of say, class conflict or everything through the prism of oh, media bias, potentially, or they think they’re right about [LAUGHS] everything, pretty much, through their theoretical conceptions of how things behave.
BOB GARFIELD: The people who are pundits on TV, making political prognostications, I’m going to assume they’re not foxes – they’re hedgehogs.
NATE SILVER: [LAUGHS] So one other component of the study that Tetlock did is he looked at how many mainstream media appearances the people in his survey panel had done. And he found that the more often people did [LAUGHS] media, the worse their forecasts were –
- ‘cause then they have incentive to say outlandish things. And I talk about the example of Dick Morris, who in March of this year said he thought Donald Trump would run and be a very formidable candidate in the Republican primary, who in 2008 said Barack Obama would win like Tennessee and West Virginia, who thought that Hurricane Katrina would help President Bush. You could almost create a hedge fund just betting against whatever he says.
But he still remains in the kind of regular news rotation, and people take him seriously.
BOB GARFIELD: There is no penalty in the world of television punditry for being wrong. There’s a great incentive to be bold.
NATE SILVER: The classic media bias is rooting for the story. Certainly, after the conventions, I think that the press became very interested in the Romney-is-imploding story, but in the long run people sell more papers and they get more listeners and viewers if you have a close down-to-the-wire election. And so, in 2008 on The McLaughlin Group, the long-running show where they have four reporters come on – or four pundits, I guess, depending on how flattering you want to be toward them – the weekend before the 2008 election, this was a time when Barack Obama led by seven points in the national polls, he led in almost literally every single poll in every single swing state. The economy had collapsed. But three of the four panelists of The McLaughlin Group said it was too close to call. Monica Crowley actually said she thought McCain would win by half a point. The next week on the same show she implied that Obama’s win had been inevitable because the economy had collapsed –
- all the obvious things, which she had neglected to consider like literally a week earlier. It’s like the whole group has amnesia.
BOB GARFIELD: When you read the FiveThirtyEight blog, you get like a weather forecast. There’s a 90% chance of rain –
NATE SILVER: Sure –
BOB GARFIELD: - which means that 90% of the time, under identical conditions, it’s gonna rain. That’s how you do it?
NATE SILVER: That’s how we do it because we’re not pretending that we can make an exact forecast of an election. If you’re in early October, there are a lot of contingencies that could move the polls but we have some sense for what the relative probabilities are, so presidential debates, for example, have sometimes moved the polls by about three points. There have been cases where even on Election Day there were late-breaking developments, like in 2000 when George W. Bush was revealed to have had a DUI arrest that moved the numbers. And there’s ambiguity based on the fact that only about 10% of people now respond to surveys, even to the best and most thorough surveys. And so, all pollsters are hopeful that the 10% of people that do answer the polls are representative of the 90% who don’t but who might vote in the actual election. So there is some ambiguity, but we can use history to say how accurate have the polls been at different points in the campaign and give you the betting odds, like a handicapper would.
BOB GARFIELD: In your book you cite that more than 90% of all data collected in human history has been amassed in the last –
NATE SILVER: In the last, in the last two years.
BOB GARFIELD: Which you would think would enable us to crunch the numbers and be extraordinarily accurate in predicting all sorts of things, but?
NATE SILVER: Well, a lot of that data is like cat videos on YouTube –
- or Justin Bieber testimonials sent via text message, so we certainly haven’t produced 90% of all the useful information in the past two years. For example, we’re at the point in the election now where you get maybe 20 or 30 polls every day, between state polls and national polls. If you’re a Romney supporter and you pick the three of those polls you like best, based on random variance and anything else, you’ll always be able to tell a happy story about what happened. If you’re an Obama supporter, of course, you can do the same thing. So if people are going to cherry-pick the evidence when you give them more of a choice, then they can isolate themselves more from what an objective forecaster might say.
BOB GARFIELD: In the introduction to your book, you talk about two historical watersheds that changed our thinking about prognostication. One was the printing press.
NATE SILVER: Before the invention of the printing press in the 1400s, knowledge was really, really expensive. It literally cost the equivalent of 20,000 dollars to copy a book manuscript. And then, all of a sudden, with Gutenberg’s invention, you could mass produce books, and you had, finally, the accumulation of knowledge. But you had 200 years of holy war first. The Protestant Reformation was directly tied to the printing press. People were going to disagree about their interpretations of the Bible or of other things, and now they had evidence to prove their point of view. And you had a very bloody couple of centuries before you finally saw progress made, but it took 200 years.
Now we have a similar step change in information, which is caused by the Internet. I’m 34, right? When I was growing up you actually had to wait for the morning paper to come out to see the baseball box score, you know, which is kind of astounding now. But we haven’t developed our protocols for how we use all that information and I think it’s, just like after the printing press, led to a lot of conflict and partisanship and mistakes, for the time being.
BOB GARFIELD: Well, if your analogy is right then, maybe now that society’s getting used to this growing amount of data, we’re heading towards a new enlightenment?
NATE SILVER: Well, I’m not sure about that either. I mean, people think as information increases you’ll reach some singularity. I do think we have to also acknowledge that as cool as our computers and our technology might be, it’s still humans who have to design them and implement them. If you input garbage into a model, it’ll spit garbage out. We’re gonna make some mistakes at first, as we have. I mean, everything from the financial crisis to the earthquake in Japan was not predicted or, in the case of Japan, was predicted not to happen. The seismologists there thought you couldn’t have an earthquake that large. But I do think as we get better at knowing where our technology can help us and where it might lead us astray, where you still need human judgment and where you can kind of press the “on” switch and turn things over to the machines, then we’ll begin to make progress. But the point of the book, I suppose, is the sooner that we admit that we have a problem [LAUGHS], that we’re not as good at prediction as we think we are, then we can move toward enlightenment, one step at a time.
BOB GARFIELD: Nate, thank you very much.
NATE SILVER: Thank you.
BOB GARFIELD: Nate Silver writes the FiveThirtyEight blog for the New York Times and is author of “The Signal and the Noise: Why So Many Predictions Fail—But Some Don’t.”
[MUSIC UP AND UNDER]