
The Signal and the Noise

The Art and Science of Prediction

Nate Silver

Human beings don't have many natural defences - not particularly fast, strong or clawed. Not well camouflaged, and can't fly. Instead we rely on our wits. We are wired to detect patterns and respond without hesitation. But the downside is that we often see patterns that aren't there - false positives that are really just random noise.

Philip Tetlock, professor at the University of California, Berkeley. In 1987 he began an ambitious project collecting predictions from a broad array of academic and government experts on a wide range of political and economic questions. It turned out that 'experts' were rarely any better than chance, were grossly over-confident about their predictions, and were terrible at assessing probabilities. (About 15% of things they were sure would never happen did in fact occur, and 25% of events they were absolutely certain of didn't happen.)

And the biggest kicker - the more interviews an 'expert' had done with the media, the worse his prediction record was.

From this he developed the idea of foxes and hedgehogs. The hedgehog knows one big thing, and sees everything in terms of that. Hedgehogs see things as physical laws that apply universally - "A country has to live within its means" etc. The fox knows many little things, and takes a multitude of approaches toward a problem. Foxes are more tolerant of uncertainty, complexity, and dissenting opinion.

Hedgehogs tend to have spent their lives working on one area of knowledge. They seek simplicity and order. Stubborn - if their model fails it's because of bad luck, not because anything is wrong with the model. Confident - they rarely hedge predictions and are reluctant to change their minds.

Foxes are happy to pull ideas from any discipline, and adopt a flexible approach - changing course if the original idea isn't holding up. They tolerate complexity and inexact data - the universe is complicated and unpredictable. Cautious - they express predictions in terms of probability rather than certainty.

Too much info is a problem for hedgehogs. When asked about individual Senate races, they were influenced not just by the polls, but by personal knowledge of the candidates - gossip, news, TV appearances etc. They constructed stories that were neater and tidier than the real world, with heroes and villains, and a happy ending for their team.

The big problem that hedgehogs have is that they are too stubborn to learn from their mistakes. They don't want to acknowledge the real-world uncertainty in their forecasts because they would have to acknowledge the imperfections in their theories about how the world should behave.

Started FiveThirtyEight partly out of dissatisfaction with political TV. It all seemed to focus on irrelevant things - an endorsement by some other politician, who had made a clever quip that day, etc - all stuff that 99% of voters simply didn't care about.

So set up a forecasting model that was basically an average of polls, but weighted according to past accuracy. That's how it started out, but it became more intricate. Introduced probability - the likelihood of a range of possible outcomes. Giving someone just a 9% chance of success means that in 9 out of 100 contests she will win (and that happened in one House race).
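That starting point - a poll average weighted by past accuracy - can be sketched in a few lines. The shares and weights below are invented for illustration; they are not FiveThirtyEight's actual inputs or weighting formula:

```python
# Hypothetical polls: (candidate's share of the vote, weight reflecting
# that pollster's historical accuracy). All numbers are made up.
polls = [
    (51.0, 0.9),
    (48.5, 0.6),
    (50.2, 0.8),
]

# Accuracy-weighted average: better pollsters pull the estimate harder.
weighted_avg = sum(share * w for share, w in polls) / sum(w for _, w in polls)
print(round(weighted_avg, 2))  # 50.07
```

A plain average of the three shares would give 49.9; weighting by track record shifts the estimate toward the historically more reliable polls.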

Our brains aren't comfortable with this sort of uncertainty, but if, like Nate Silver, you come from a background of sports and poker betting, you see even the most extreme cases at least once. But in politics, spectators interpret this uncertainty as waffling or trying to hedge your bets. That is partly because people look at the situation as if it were analogous to physics or biology, where it's simply a matter of getting the facts and then everyone can see the reasons. But politics is more like poker, where you can't see your opponent's cards but you can deduce important facts from his behaviour, and the deductions may change as you learn more. The fox is willing to update his forecast on the basis of newer/better data.

(London Times)

American congressmen have a remarkable investment record. Their personal portfolios beat the markets by between 5% and 10% a year, a return with which Bernie Madoff, the Ponzi-scheme fraudster, would have been more than happy.

The truth, of course, is that nobody can consistently make that kind of return unless they are doing something illegal, and it is pretty clear what the distinguished House members are up to - insider trading. Using confidential information acquired in their work, they play the market and break the law.

That satisfying fact is one of many to be found in this fat and fascinating book. 'Fascinating' is perhaps not a word you associate with statistics. Well, get used to it. Statisticians are to our age what engineers were to the Victorians, the makers of the particular forms of truth we value and crave.

Nate Silver, to pursue the analogy, is being tipped to be our age's Brunel. He is a 34-year-old who gave up an accountancy job to make a living playing poker. He moved on to betting on sports, notably baseball, then made a big splash in the 2008 presidential election when he predicted the winner in 49 out of 50 states, and in all 35 Senate contests. In 2009, he was named one of the world's most influential people by Time magazine, and his FiveThirtyEight blog now appears on the New York Times website. His latest forecast for this year's election is that Mitt Romney has a one-third chance of winning.

This book is Nateism in a nutshell as well as being a compendious guide to the world as seen through a statistician's eyes. You should read it, but first, I warn you, it has been appallingly edited. Unnecessarily mangled sentences have not been unmangled, there are misprints, and charts have been mislabelled. Also, like many highly technical writers, when Silver goes off-piste he tends to hit a tree, describing Isaiah Berlin's sinuous, thunderous, subtle prose, for instance, as 'flowery'.

But these are spots on an otherwise handsome face. Silver's book is a useful attempt to explain a complicated and dynamic field. Statisticians have become so important because of that now familiar phenomenon, the information explosion. Information is not inherently useful until it is organised, primarily by statistics. Masses of information require masses of statisticians, but, even then, things can go horribly wrong because, as Silver repeatedly makes clear, statisticians are as fallible as anybody else. The great statistician and heavy smoker R.A. Fisher, for example, all but ruined his reputation by persisting in his belief that there was no correlation between tobacco and lung cancer.

At the heart of this book are its long and detailed analyses of big, complex systems. Silver's baseball and poker analyses are interesting, but those on climate change, banking, earthquakes and politics are vital. His view on the first is one of sceptical but sympathetic belief, though I suspect global-warming deniers will find more comfort than they expect. On banking, he zooms in, with a grin on his face, on the credit-rating agencies, who basically lied and lied and lied and who, from the first, were hopelessly biased by their sources of income. Earthquakes still pretty much defy statistics, though his charting of the territory is gripping. Finally, in politics he remains the world expert on disentangling polling data, but, in reality, there is almost always too much noise in this game to find the signal that matters.

The big message here is all about "the difference between what we know and what we think we know". As Nassim Nicholas Taleb, the author of The Black Swan, never tires of pointing out, being grown up means learning to live with dignity in an uncertain world. Silver does not have Taleb's panoramic ethical and cultural view, but he does share his bracing scepticism.

And he is, crucially, much more optimistic about our ability to get an ever tighter grip on the future by the clever and careful deployment of statistics. The chapter on earthquakes shows the problems involved as well as the astonishing possibilities of solutions. He concludes, however, on a pessimistic note. "There is no reason to conclude that the affairs of men are becoming more predictable. The opposite may well be true." Actually I think this is wisdom, not pessimism, but I can certainly see that it represents an unscratchable itch to a statistician.


We often hear that we live in an increasingly data-rich world. IBM estimates that we are now generating 2.5 quintillion bytes of data each day. That figure - which has 18 zeros - has been rising sharply: more than 90% of the data that exist, from temperature readings to GPS signals and social media posts, were created in the past two years alone.

This exponential growth in information is sometimes seen as a cure-all for our problems, much as computers were in the 1970s. Chris Anderson, the editor of Wired magazine, wrote in 2008 that the sheer volume of data would obviate the need for theory, and even the scientific method. Surely, the argument goes, the vast data sets we have amassed online should help us to forecast, and prepare for, future events - from terrorist attacks to outbreaks of contagious disease and recessions.

Yet it is hard to avoid the impression that we have made scant progress in predicting the future. The first 12 years of the new millennium have been rough, with one supposedly unexpected catastrophe after another, from 9/11 to the global financial crisis to the Fukushima nuclear disaster.

This is not just a problem for the experts. Every time we choose a route to work, decide whether to go on a second date or set money aside for a rainy day, we are making a forecast. Not all of these day-to-day decisions require strenuous thought or expert strategy, but, whether we realise it or not, we are making predictions all the time.

I have spent most of my life working with data and statistics and have developed a track record of successful predictions. In 2003, recently graduated from university and deeply bored at a consulting job in Chicago, I designed a system that sought to predict the statistics of American major league baseball players. It contained a number of innovations - its forecasts were probabilistic, for instance, outlining a range of possible outcomes for each player - and it performed better than the competing systems when I compared their results.

In 2008 I founded the website, named after the number of votes in the American electoral college, which sought to predict the result of the presidential election between Barack Obama and John McCain. The FiveThirtyEight forecasts correctly predicted the winner of the presidential contest in 49 of 50 states as well as the outcome of all 35 US Senate races.

In the course of a few months the website went from obscurity to getting 5m hits a day. I became a frequent guest on American news programmes, where I would be called upon to forecast elections. I now publish my predictions in The New York Times.

Though my speciality is politics, I have written about everything from the most habitable neighbourhoods in New York City to the best time to buy aeroplane tickets to how you can get the most value at the salad bar in a restaurant.

Since 2008 I have made a study of the science of prediction. I have interviewed more than 100 expert forecasters, in fields ranging from the stock market to sports betting to the weather, and looked into the secrets of their success - as well as the reasons they often fail. And I have come to the conclusion that my successes have been fortunate and unusual: prediction in the era of big data is not going well.

The United States did not see the September 11 attacks coming, but as I learnt when I spoke to terrorism experts, the problem was not necessarily a lack of information. As had been the case in the Pearl Harbor attack six decades earlier, the signals were there; we just had not put them together. Without an adequate theory of how terrorists might behave, we were blind to the data and 9/11 became an 'unknown unknown'. The global financial crisis can be viewed as a failure on many levels - it was a regulatory failure, a moral failure, a failure of institutions. Above all, I'm convinced, it was a catastrophic failure of prediction, with everyone from the ratings agencies to central bankers to blame.

"No one saw it coming" is always a popular thing to say after the fact, and not just in the financial markets, but the truth is that many people did foresee the bursting of the housing bubble and the subsequent shocks to the global financial system. Our naive trust in statistical models yielded disastrous results, and the experts failed to look beyond recent economic history in making forecasts about the future.

On a more routine basis, I discovered that economists are unable to predict recessions more than a few months in advance - and not for lack of trying. In fact, often economists have even failed to predict recessions accurately after they have already begun.

Earthquake detection has recently been given renewed attention, as resources have shifted to mathematical and data-driven techniques. But seismologists have predicted earthquakes that never took place and failed to prepare us for those that did. The Fukushima nuclear reactor was designed to handle a magnitude 8.6 earthquake, in part because some seismologists concluded that anything larger was impossible. Then came Japan's magnitude 9.0 earthquake in March last year.

There are entire disciplines in which predictions have been failing, often at great cost to society. Consider biomedical research. In 2005 a Greek medical researcher named John Ioannidis published a controversial paper entitled Why Most Published Research Findings Are False. The paper studied positive findings documented in peer-reviewed journals: lab experiments that supported the predictions of medical hypotheses. It concluded that most of these findings were unlikely to hold up when applied in the real world.

Why, in a world of data, do so many predictions fail?

Biologically, we are not very different from our ancestors. But it turns out that some Stone Age strengths have become digital-age weaknesses.

Human beings do not have many natural defences. We are not all that fast, and we are not all that strong. We do not have claws or fangs or body armour. We cannot spit venom. We cannot camouflage ourselves. And we cannot fly. Instead we survive by means of our wits. Our minds are quick. We are wired to detect patterns and respond to opportunities and threats without much hesitation.

The problem is that our evolutionary instincts sometimes mislead us into detecting patterns where none exists. Our brains process information by means of approximation.

This is a biological necessity: we perceive far more than we can consciously consider, and we handle this problem by breaking our perceptions down into regularities and patterns.

As a result, our biological instincts are not always well adapted to the information-rich modern world. Unless we work actively to become aware of the biases we introduce, the returns to additional information may be minimal - or diminishing.

The problem is that information growth is vastly outpacing our ability to process it. The human brain is quite remarkable and can store perhaps 3 terabytes, or 3,000GB, of information - which is roughly equivalent to half of Wikipedia or 750,000 songs on an MP3 player. But that is only about one-millionth of the information that IBM tells us is now produced in the world each day. So we have to be terribly selective about the information we choose to remember. We perceive it selectively, subjectively, and without much regard for the distortions this causes.
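The "one-millionth" comparison is simple arithmetic, using the two figures given in the text:

```python
daily_data_bytes = 2.5e18    # IBM's estimate: 2.5 quintillion bytes per day
brain_capacity_bytes = 3e12  # roughly 3 terabytes

# The brain's capacity as a fraction of one day's worldwide data output.
ratio = brain_capacity_bytes / daily_data_bytes
print(ratio)  # about 1.2e-06 - on the order of one-millionth
```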

Further, if the volume of data is increasing exponentially, the amount of useful information almost certainly isn't. There are innumerable hypotheses to test and data sets to mine - but a more or less constant amount of objective truth. Most of the data out there are just noise, and the noise is increasing faster than the signal - the truth. We need better ways of distinguishing between the two.

Big data will produce progress - eventually. How quickly that happens, and whether we regress in the meantime, will depend on us.

As I discovered in my investigation over the past few years, there is hope for prediction. Some fields in particular have made tremendous strides in their ability to peer into the crystal ball.

Weather forecasting, which involves a blend of human judgment and computer power, is one of them. Meteorologists have a bad reputation, but they have made remarkable progress and are now able to forecast the landfall position of a hurricane three times more accurately than they could 25 years ago.

I have met poker players and sports gamblers who were beating the casinos of Las Vegas, and computer programmers who built IBM's Deep Blue and took down the world chess champion Garry Kasparov.

As I have studied the habits and techniques of expert forecasters, I have found they tend to share some essential traits - humility, adaptability, a certain tolerance of complexity. But a particular skill kept coming up again and again: they were all masters of a statistical principle called Bayesian reasoning.

Thomas Bayes was an 18th-century English mathematician; Bayes's theorem is nominally a mathematical formula, but it is really a whole philosophy for what we should do with our wealth of information.

Bayes's theorem asks the forecaster to begin with an estimate of the probability of a real-world event. It does not require us to believe that the world is intrinsically uncertain - it was constructed in the days when the regularity of Newton's laws formed the dominant paradigm in science. However, it does require us to accept that our subjective perceptions of the world are approximations of the truth.
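In its simplest form, Bayes's theorem says that the probability of a hypothesis H given evidence E is P(H|E) = P(E|H)P(H)/P(E): you start from a prior estimate and revise it as evidence arrives. A minimal sketch, with numbers invented purely for illustration:

```python
# Illustrative numbers only - updating belief in a hypothesis H
# after observing a piece of evidence E.
prior = 0.30                 # P(H): initial estimate before the evidence
p_evidence_if_true = 0.80    # P(E|H): how likely the evidence is if H holds
p_evidence_if_false = 0.10   # P(E|not H): how likely it is otherwise

# Total probability of seeing the evidence at all.
p_evidence = (p_evidence_if_true * prior
              + p_evidence_if_false * (1 - prior))

# Bayes's theorem: the revised (posterior) probability of H.
posterior = p_evidence_if_true * prior / p_evidence
print(round(posterior, 3))  # 0.774
```

A modest 30 per cent prior becomes a 77 per cent posterior once evidence arrives that is far more likely under the hypothesis than without it; the forecast changes as the information does, which is the fox's habit.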

Unless we have grown up playing cards or other games of chance, we are not encouraged to think in this way. Maths classrooms spend more time on abstract subjects such as geometry and calculus than on probability and statistics. In many areas expressions of uncertainty are routinely mistaken for admissions of weakness. But it is when we are overconfident about our predictions - whether they are about the stock market, a football game or housing prices - that we fail.

In my forecasting work I like to think I have discovered a few of the secrets of prediction. One should never rely on raw data alone - the best forecasters draw on a combination of facts and interpretation, and the more information you are able to gather, the more accurate your predictions will be.

Models are always approximations of the world and can never substitute for trial and error - something that companies such as Google, which is constantly refining its search algorithm, intuitively understand. A dose of humility, an embrace of uncertainty and a better understanding of probability can only improve our chances.

One of the most spectacularly accurate predictions in history was made in 1705, when the English astronomer Edmond Halley predicted that a great comet would return to the Earth in 1758. Halley had used a combination of data and theory - poring through astronomical records, but guiding his understanding of them with Isaac Newton's laws.

At the time, Halley had many doubters, but the comet returned on cue. Comets, which in antiquity were regarded as wholly unpredictable omens from the gods, are now seen as regular and predictable occurrences.

Astronomers predict that Halley's comet will next make its closest approach to the Earth on July 28, 2061. By that time, many problems that now vex our predictive abilities may well have come within the range of our knowledge.


(New Yorker article analysing Nate Silver's 2012 results - 50 states out of 50 correct)

Now that the Florida authorities have finally confirmed that President Obama defeated Mitt Romney in the Sunshine State by a margin of 50.0 per cent to 49.1 per cent, we have all the results and data we need to talk about what happened in the 2012 election, and who got it right. Obviously, Nate Silver did - more about that below. But so did most of the other forecasters, and, more importantly, many of the pollsters on whose work all the prognosticators, Silver and myself included, relied. To remind you, here are the results: President Obama won the popular vote by 50.5 per cent to 47.9 per cent, a margin of 2.6 per cent. In the Electoral College, he got 332 votes and Mitt Romney got 206 votes. Obama carried almost all the battlegrounds, which, for these purposes, I will consider as eleven states: Colorado, Florida, Iowa, Nevada, North Carolina, New Hampshire, Michigan, Ohio, Pennsylvania, Virginia, and Wisconsin.

Let's start with the pollsters and Obama's win in the popular vote, which was a bit bigger than expected. For much of the final month, following the President's poor performance in the first debate, Romney led in the national polls - in some, such as the Gallup tracking poll, by large margins. But, in the final ten days or so, the polls of polls, which combine many different surveys, correctly identified a swing back towards Obama. On the eve of the election, the Real Clear Politics poll of polls and the T.P.M. Polltracker both showed the President ahead by 0.7 per cent.

Before I move on to the individual polls, a word of caution. In almost all of them, the margin of error was three per cent or more, which militates against putting much emphasis on small differences. Moreover, polls are snapshots, not forecasts. Conceivably, some of them were accurate when they were carried out but things changed between then and Tuesday. Even allowing for these factors, though, it's fair to make some comparisons. All pollsters love to be vindicated on Election Day, and the sensible ones compare their numbers with the final outcome to see if they need to make any adjustments in subsequent elections. In the weeks leading up to the election, dozens of national polls were carried out. Interestingly, of those executed in the final days before Tuesday, the three that produced findings most closely resembling the final result were all internet-based surveys. A Google Consumer Survey, which was published on Monday, showed the Obama-Biden ticket leading the Romney-Ryan ticket by 2.3 per cent. On the same day, the Reuters/Ipsos daily tracker had Obama leading by two points. And a so-called 'megapoll' from the Economist/YouGov, which involved the pollster questioning 36,472 likely voters online, also had Obama leading by two points.
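That three per cent figure follows from sample size. For a simple random sample of about 1,000 respondents - a typical national poll - the textbook 95 per cent margin of error works out as follows (this is the standard approximation, not any particular pollster's in-house method):

```python
import math

n = 1000   # respondents in a typical national poll
p = 0.5    # worst-case proportion, which maximises the margin of error

# Conventional 95% margin of error: 1.96 standard errors of a proportion.
moe = 1.96 * math.sqrt(p * (1 - p) / n)
print(round(moe * 100, 1))  # 3.1 - roughly three percentage points
```

Doubling the sample only shrinks the margin by a factor of the square root of two, which is why even large polls cannot resolve differences of a point or two.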

Of the traditional polls, which were based on telephone interviews, the two surveys that came closest to matching the actual outcome were from the Pew Research Center and ABC News/Washington Post, both of which had Obama leading by three points. Other polls that showed Obama ahead as Tuesday approached included the final surveys from NBC News/Wall Street Journal, CBS News/New York Times, the Investors Business Daily/TIPP daily tracking poll, and Newsmax/Zogby, an online tracking poll.

Another survey worth mentioning is the panel survey from Rand, which, rather than sampling new voters every time it took a new poll, followed the same individuals - three thousand and five hundred of them - and tracked their preferences over months. The Rand survey showed Obama consistently ahead, and its final update showed him leading by more than three points.

The surveys that showed Obama losing, and Romney ahead, going into Tuesday included the trackers from Gallup and Rasmussen. Gallup, in particular, has come in for much criticism, which isn't surprising. It's the oldest and best-known poll in the business, and people expect it to do better. (In political circles, most folks expect Rasmussen's results to lean towards the G.O.P.) Interestingly, Gallup's tracking poll of registered voters, rather than likely voters, showed Obama with a three-point lead on the day before the election. It successfully captured the last-week swing to Obama in the wake of Hurricane Sandy. Evidently, Gallup's main problem was in deciding who was likely to vote. In making this judgement, which was based on a number of factors, it appears to have excluded too many Democrats.

Of course, it was the outcome in the battleground states that ultimately determined the result. For months, most of the swing-state polls showed Obama with a steady lead. On the night before the election, the polls of polls from Real Clear Politics and T.P.M. both had Obama leading in eight of the ten battlegrounds, the exceptions being Florida and North Carolina. On average, then, the pollsters called all the swing states correctly, except for Florida, which many of them got wrong. On the day before the election, the Real Clear Politics poll of polls in Florida had Romney leading by 1.5 per cent; the T.P.M. poll tracker had him up by 1.2 per cent.


Since Florida was close all along, it's not particularly notable that many pollsters had Romney ahead. A bigger surprise was that, in many of the swing states, Obama's margin of victory was bigger than the polls had indicated - often considerably bigger. Only in Florida, Ohio, and North Carolina were Obama and Romney's final vote totals within two percentage points of each other. Obama carried Colorado by 4.7 points, Iowa by 5.6 points, New Hampshire by 5.7 points, Nevada by 6.6 points, and Wisconsin by 6.7 points. In these battlegrounds, the race didn't end up particularly close.

Still, if the pollsters' exact numbers didn't match up with the actual vote tallies, they generally called the winner correctly. Of course, not all of the pollsters did equally well. The big reputable polls, which use real interviewers rather than robocalls, and which didn't weight their results in favor of the G.O.P. - that's you again, Scott Rasmussen - were generally pretty reliable. But at the local level there was also a lot of junk polling, which added quite a bit of statistical 'noise' to the picture.

Even in Florida, where most of the polls in the last week showed Romney leading, several polls showed Obama with a lead of a point or two: NBC/WSJ/Marist College, CBS/New York Times/Quinnipiac University, and Public Policy Polling. During the campaign, these pollsters, particularly NBC/WSJ/Marist, received criticism from the right, for allegedly stacking their results in favor of the Democrats. But they had the last laugh. A couple of tracking polls that monitored the state race separately from the national contest also deserve an honorable mention. Both the Reuters-Ipsos poll - the same one that got the national race right - and the Newsmax/Zogby poll, which also called an Obama victory at the national level, showed the race in Florida virtually tied a day or two before the election. And also worth mentioning again: the final YouGov megapoll, which, when broken down to the state level, showed Obama leading by one point in Florida. (His actual margin of victory was 0.9 per cent.)

In Ohio, which was the subject of exhaustive surveying and analysis, the pollsters were also vindicated. Of twenty-nine polls carried out in the final three weeks of the campaign, just one - a Rasmussen survey - showed Romney ahead. Ultimately, unlike in most of the other battleground states, Obama's margin of victory was a bit smaller than the polls had indicated: 1.9 points compared to a 2.9 point margin in the final R.C.P. poll of polls. Two local surveys - the Ohio poll and a poll for the Columbus Dispatch - produced numbers that were within one per cent of the final result.

The partisan dispute about how the pollsters were counting the numbers was particularly bitter in Ohio: many conservative analysts claimed the mainstream pollsters were mistakenly assuming that many more Democrats would turn out than Republicans. This assumption turned out to be perfectly justified. According to the National Exit Poll, thirty-nine per cent of the voters in Ohio identified themselves as Democrats, compared to thirty per cent who identified themselves as Republicans. The conservatives' conspiracy theory was debunked.

Now on to the forecasters, starting with myself. On Monday morning, in making my final update to the New Yorker's electoral map, I predicted that Obama would get 303 votes in the Electoral College and that Romney would get 235. I made that projection primarily on the basis of the state polls, and, as I pointed out in a subsequent post, it was in line with the consensus opinion.

I got forty-nine of the fifty states right, which is pretty good. But Florida went for Obama. Why didn't I foresee that happening? In retrospect, I didn't attach enough weight to the tracking polls, which showed a nationwide swing to Obama over the final days, probably because of his handling of Hurricane Sandy. I wasn't oblivious to what was happening. I wrote a post about it, and I called Colorado and Virginia, which I'd previously had as toss-ups. But I left Florida in the Romney column, citing the fact he still had a narrow lead in most of the local polls. That turned out to be a mistake.

Meanwhile, the more mathematical forecasters were also anguishing over Florida. As the national polls indicated movement in Obama's favor, their statistical models, which combine national and state polling, alerted them to the fact that the President's chances of taking the Sunshine State were approaching fifty per cent. That is the advantage of having a model as opposed to just staring at polls and scratching your head. Still, it was a very close call. In tabulating his final state-by-state projections at FiveThirtyEight, Silver listed Florida as a "Toss-Up," and in his last pre-election post, published early on the morning of November 6, he wrote, "Florida remains too close to call." He put Obama's chances of victory at fifty per cent exactly and projected that the final percentages for the two candidates would be 49.9 and 49.9. (As of this writing, these figures are still on the site in the table showing the projections for Florida.)

But in the final twenty-four hours before the vote, FiveThirtyEight also showed Florida light blue on its electoral map, indicating that the probability of an Obama victory, to one decimal place, was 50.3 per cent. Later on Tuesday, as he live blogged the vote returns, he wrote: "In the final pre-election forecast at FiveThirtyEight, the state of Florida was exceptionally close. Officially, Mr. Obama was projected to win 49.797 percent of the vote there, and Mr. Romney 49.775 percent, a difference of two-hundredths of a percentage point."

It might be said that calling the race too-close-to-call and also coloring Florida light blue amounted to having it both ways. But Silver's model did point (ever so slightly) to an Obama win, and overall he deserves a lot of credit. In a previous post, I queried whether mathematical models of the type that he uses add anything to the polls they rely on, and to simple polls of polls, such as those of R.C.P. and T.P.M. In this case, they did. Silver's final forecast in Florida clearly beat the polls of polls, which had Romney ahead. He also nailed the popular vote, again beating the polls of polls, and he correctly identified the last-minute swing to Obama. For the second election in a row, he left many of the pundits in the dust. As promised, a bottle of champagne is on its way to the offices of FiveThirtyEight - not that its creator needs any more prizes now that his new book is number two on the Amazon bestseller list.

Still, as Silver readily conceded when I spoke to him on Saturday, there was an element of good fortune involved, especially when it came to coloring Florida light blue. "That was just a case of dumb luck basically," he said modestly. At least one other mathematical forecaster wasn't so fortunate. On the eve of the voting, Sam Wang, the man behind the Princeton Election Consortium, looked at his model and saw it indicating that the race in Florida was basically tied. Still, Wang felt obliged to make a prediction. "We are all tossing coins," he wrote in calling the race for Romney. "I am prepared to lose the coin toss." Wang did lose and Silver did win, but he wasn't the only one. Simon Jackman, a Stanford University political scientist who created the Huffington Post's pollster model, and Drew Linzer, an Emory University political scientist who runs the Votamatic website, both called all fifty states correctly, although they, too, hesitated over Florida. On Tuesday morning, Jackman published his final prediction, noting, "We are not particularly confident about the forecast for Florida." About the same time, Linzer, in making his final prediction, described Florida as "a true toss-up" and said he "would not be surprised" if it went for Romney. However, both Linzer and Jackman did ultimately predict an Obama victory, which, according to their models, was just about the most likely outcome.

In this somewhat equivocal manner, Silver, Linzer, and Jackman correctly predicted the 332-206 outcome in the Electoral College. Who, then, was the ultimate winner of the forecasting gold medal? For the sake of argument, I'll use the popular vote as a tiebreaker. As far as I could see, Linzer didn't issue a forecast for the popular vote, but Silver and Jackman did. Silver's final prediction was: Obama 50.8 per cent, Romney 48.3 per cent. Jackman's prediction was: Obama 50.1 per cent, Romney 48.4 per cent. Neither got the final voting figures - 50.5 per cent to 47.9 per cent - exactly right, but Silver was the closest. Jackman's prediction of a 1.7 per cent winning margin for Obama turned out to be a bit low. Silver's prediction of a 2.5 per cent winning margin proved to be almost spot on, and the gold medal goes to him.

In the bigger picture, though, the lessons of the campaign are about more than FiveThirtyEight. First and foremost, reliable polling remains the bedrock of any serious electoral analysis. It isn't easy, and it takes a lot of grunt work, but without it we would all be lost. In this respect, the 2012 election was a hopeful one. Solid unbiased surveying, of the sort that organizations like Pew and the pollsters associated with the major newspapers and television networks engage in, was rewarded. Blatantly skewing the figures was punished. And there was also evidence that online polling, which facilitates the creation of very large samples at relatively low cost, can be informative and reliable.

Second, it's time for election analysts (myself included) to take the mathematical forecasters in general more seriously, and to incorporate their findings into their analysis. "It's not 'a Nate thing,'" Jackman noted after the election. "It's a data-meets-good-model, scientific-method thing." Silver readily agreed with that sentiment. He pointed out that the model he uses is very similar to Jackman's, saying, "the DNA is ninety-five per cent the same." And he also reminded me that Linzer, of Votamatic, predicted as far back as June that Obama would win Florida and all the other swing states, except North Carolina. "It was a big year for data-driven analysis" in general, he said.

Nobody could argue with that. I still suspect that one of these years there's going to be a 'Black Swan' election that confounds the modelers. But looking ahead, the burden of proof is going to be on the skeptics. If the probability models say candidate X has an eighty per cent chance of winning, and you think X is going to lose, you will have to explain what it is the models are missing. That is the Nate legacy.

LT interview with Silver

(And a Slate reprint)

It's been a busy week for Nate Silver. On the heels of correctly predicting the presidential election in all 50 states, Silver flew into Chicago - where he lived following his graduation from the University of Chicago until 2009 - to give a talk for the Humanities Festival while also promoting his bestselling book, The Signal and the Noise. Chicago magazine caught up with Silver Saturday afternoon at his hotel to talk about his post-election life, run-ins with political pundits, and of course, what drunk Nate Silver is really like.

So what has life been like since Tuesday?

It's been really strange. I was going to CVS to get like toothpaste and stuff, and people stopped me in line. It's just a little hard to get used to. I assume that'll wear off to some extent but, yeah, there are multiple "Are you Nate Silver?" sightings every day now.

How does that compare to 2008?

If that's the metric, having someone interrupt you on the street: in 2008 it was happening maybe once per week, right? Now it's like six times per day.

You've been characterized as a wizard or a witch - what's your reaction to that?

I'm trying to maintain some form of detachment from it, almost like it's happening to another person or another character. But it's weird, and goes to show you what can happen in the Internet age, where things can take off really, really fast.

In your book - and on your blog - you try to make the distinction between accuracy and honesty, but one of the major criticisms against you has been that your methodology favors Democrat-leaning polls. How do you separate your personal biases from the data?

If you actually look at our track record, you'll see that we really don't have any bias. When we've missed, we've tended to call races for Republicans mistakenly instead of Democrats. First of all, I think it's odd that people who cover politics wouldn't have any political views. The analogy I make sometimes is the O.J. Simpson or Michael Jackson trial. You know, you're supposed to find the twelve people, for a jury, who have no impression of Michael Jackson? How can you be a normal person and not have some view of Michael Jackson? How can you cover politics and not have any sense for where you think the truth lies in the problem? That disturbs me. A lot of journalism wants to have what they call objectivity without them having a commitment to pursuing the truth, but that doesn't work.

Objectivity requires belief in and a commitment toward pursuing the truth - having an object outside of our personal point of view. But with that said, look, I might have different types of rooting interests here, one being the public policy outcomes I might like. But there are also the kinds of biases in terms of rooting for your forecast. If we had had Romney ahead, then I would be rooting for Romney to win on Tuesday night because that's my much bigger priority in terms of my career. You get some conservative critiques that aren't actually even bothering to look at what we do, saying, 'He voluntarily rates the polls.' No, I'm not going through thousands of polls a day weighting every poll to the fifth decimal place.

You've heard Dean Chambers's comment that you were too 'effeminate' and 'small of stature' to be trustworthy. How do you deal with being put under a quickly increasing amount of personal scrutiny?

It just went to show how deluded people can be - having an emotional reaction to the data they don't like and not really a rational reaction. Also, it kind of went to show how this guy just has fantastical views of, "Oh, it's a conspiracy. They're rigging the polls against Romney." He thought even the Fox News poll was rigged for President Obama. And yet there were some news reporters who were treating him seriously. You know, "He said, she said, here's what some people say." So he demonstrated, I think, with those comments, that he's not someone who deserved a lot of attention, I suppose.

Why has the data-driven approach to predictions received so much attention recently, and what role have you and your team played in that?

I think people like those types of stories because - Moneyball is a part of it, right? And I think we have so much information now, we have so much data. We need better practices, strategies, techniques, to make better use of it. I think people are hungry for it. I think people do - appropriately - not trust the messenger so much. They don't necessarily trust the reporter or the pundit to relay all the facts to them when they have so much information at their disposal for free basically. So it plays into that curiosity for what we do with all that information.

What does the growing popularity of this approach mean for the future of journalism and punditry in particular?

I think punditry serves no purpose. I don't care if it has a future. For journalism though, there are two ways to do it. You can go and take your traditional journalist, and many of them are fantastically good reporters, very good writers, certainly The New York Times, and try to train them more in some math and probability and statistics. Or you can hire people who come from that background, where maybe now some papers are going to hire economics majors and math majors, fields that you wouldn't typically enter if you want to go into journalism.

But I would think - I guess I would predict - you'll see more data-driven analysts or reporters. I think at some places, there are questions about where these journalists fit in and what you call them. Because the term reporter is now in context, but what is it, right? The New York Times, by hiring me, took a step to do that. The Washington Post has done that with Ezra Klein, but at the Times, some of the best journalists are those who make its interactive graphics. And they really do consider themselves journalists, in terms of, "We're trying to present complex information in a way that helps elucidate the truth to people."

Based on the way your model has predicted this year's election, do you think Republicans will - in the future - have an aversion to the statistical approach?

I think people knew that Obama would have a better method-driven operation for voter turnout and so forth, but now you hear some of the stories from volunteers on the Romney campaign. You know, "The election's over now, fuck it. We're going to say it. These systems were badly designed, were not functioning properly on Election Day itself."

And that's surprising. Look, in 2000, Bush and Rove had taken the lead in voter targeting and data-driven stuff. In 2004, they were still very good, and Kerry caught up a little bit. Then in 2008, Obama sort of kicked McCain's butt as far as data-driven stuff goes, and you would think that Romney would have heeded that lesson. There were a lot of other factors, obviously, but why not go with the Bush/Rove/Obama model, right? That's what really won the last three elections.

But Romney seems to have replicated the same errors, and I'm not sure why. There are some theories that because, I guess, the people who populate these offices tend to be younger and more tech-savvy and more urban, and those are Democrat-leaning demographics. It's hard to find good people, who are Republicans, who would be optimal employees in that respect. But it could be a certain amount of stubbornness I think, a certain amount of ideology at play, where you think, "Our message is so powerful that once America hears our message, and how much of a failure Obama has been, then we won't need to turn out extra voters in Roanoke or whatever." It's a variety of things, I think, but I would think that the next Republican candidate would be less likely to make that mistake, but who knows?

What - theoretically, in your mind - would have happened if Romney had won? Would stats have taken a hit?

I think they would have. In part because - and we did say there was a nine percent chance he was going to win. It's tough to flesh that out sometimes. It's too abstract for people. But I'll say, for example, we had a Democrat in North Dakota who only had a nine percent chance of winning her race, and she did. Over enough cases, you're going to get some of those nine percent chances coming up, and had that happened in the presidential race rather than a Senate race in North Dakota, I don't know what would've happened. It would've been bad.

Would your career have been on the line?

I'm not sure it would have totally been on the line, but frankly, I'm not sure I would have kept doing politics after that, just because I don't really like politics very much. If it's something that informs people, and which people really like, then that's fine. But it's not a good career move, over the long run, to be banking on this once-every-four-years bet. If you're doing stuff in baseball or poker, you play a hand badly or you get lucky - it's hard to separate those two sometimes - and you re-buy and make the best decision next hand. But having so much on the line every four years is a little nerve-wracking.

Why do election predictions if you don't like politics to begin with?

I was frustrated with - I guess I don't like the 'politics' part of politics. I think the elections are a fascinating thing, both in terms of how they function in our democracy and in terms of problems you can study with numbers and metrics. I guess I don't like the people in politics very much, to be blunt. But also, I used to work for Baseball Prospectus, and I'd seen how the analytics in that domain had improved so much and the media coverage had improved so much, and I felt like politics was still in the Stone Age, at least in the way it was covered by pundits and by the press.

Do you see yourself staying in journalism, or moving into private consulting or public policy?

I don't know. I have a lot of choices to make right now, and I'm trying not to rush into it too much. There are a lot of different career paths, and I have to, frankly, avoid the tendency to spread myself too thin. There's the weird analogy now between myself and a celebrity chef. You're probably very good at running your one restaurant. You get some notoriety and it's deserved. But someone will put their name on 12 different properties that become really average restaurants in the end and lose what made them special. So I want to be careful about spreading my brand too thin. There are different routes. I could start a consulting company. I could do public policy. I could do Hollywood-related stuff. I could just keep doing what I'm doing. They're all interesting options. I just have to weigh them.

I'm sure you've heard of the Twitter trend, #DrunkNateSilver. What does drunk Nate Silver actually do?

I do what everyone else does, which is argue about stupid things with my friends. I don't become dark and ironically evil.

Walking into maternity wards and predicting the 53rd President of the United States?

No, no, no. But that could be a good TV show. It's a character named Nate Silver, but it's not actually me. So you have a public image that I kind of call your public brand, but there's something that just kind of took hold beyond that, and it's kind of weird. I'm hoping that it calms down a little bit, and I'm sure it will.

Back when you were living in Chicago, one of your first blogs was the Burrito Bracket, in which you attempted - via statistics - to find the best burrito in Wicker Park.

Yes. I would just go and eat tacos and burritos for lunch a lot and compare those, like get the same food item, like a steak burrito, for consecutive days at different locations. And then try to have quasi-scientific criteria for judging those, just thinking about all the different characteristics for a good burrito or taco.

Meat-to-cheese ratio.

Meat-to-cheese ratio. It's funny. I think it's always helpful when you're trying to evaluate something to have a disciplined set of criteria. For a while, I was trying to rate every restaurant that I went to in New York. I would find that I would rate a place four stars and go back a few months later and have a very different view of it. You realize it's because the food quality can change, but your mood changes so much, where you've had a really stressful day at work, and you're out with a friend who's in a bad mood, and the service is slow, and the food kind of tastes worse. Whereas, if you're in a good mood, and you've had a couple cocktails, and you're not feeling stressed, then everything seems wonderful. That exercise taught me that when we go by our gut or our mood - "Oh, our gut will tell us everything we need to know" - believe me, it's very useful for a lot of things, but it can also fool us.

Did you ever figure out the best burrito?

No, I never finished. Even I managed to get a little sick of Mexican food, eating it basically every day for a month in a row. I should finish it. The problem is there are all these places that have opened or closed. I went to this place called Big Star, and it was pretty good, so that might be the winner, maybe.

Before your book was released, you said it was a risk, since book sales follow a non-linear model and only a few become runaway hits. Now you're at number two on Amazon. I guess you weren't expecting this?

Yeah, I thought it would get a little bounce. But now I've gotten an 800 percent sales boost, so it's good. My publisher is very happy, I should say. But yes, books do have this sort of viral quality - even in the most traditional form - where word of mouth matters some, and you start to see them on prominent displays in airports, and people start to write about them, and it sort of snowballs. I think the fact that - I do think it's a serious and substantive book, so the fact that people can pick it up and say, "Hey, this kid's not just a flash in the pan. He's making serious arguments about how we look at information." If people are actually reading the book, then I'm happy. I hope people are able to get something out of it.
