The Signal and the Noise by Nate Silver focuses on the use of statistical models to forecast real-life events such as the weather or presidential elections. Silver is also famous for running FiveThirtyEight, a website dedicated to predicting the outcomes of political races and sporting events, and he created PECOTA, a system for forecasting the performance of baseball players.

The book is split into two sections: the first describes the state of the world in domains where prediction is difficult, such as earthquakes; the second explores ways to improve those forecasts.

For me, the big takeaways are:

Models Should Forecast Results on a Spectrum, Not a Single Yes / No Answer #

Oftentimes the results of statistical models are announced as a single answer: e.g. “there is a 10% chance of a magnitude 4 earthquake occurring tomorrow.” This type of declaration can make a model look less accurate than it really is, because it leaves out the wide range of other outcomes that also have some chance of occurring.
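
As a purely illustrative sketch (the outcome categories and numbers below are invented, not taken from the book), the difference is between announcing one headline number and reporting the whole spectrum of outcomes:

```python
# Invented numbers, for illustration only: a forecast expressed as a
# distribution over outcomes rather than a single headline probability.
forecast = {
    "no notable earthquake": 0.62,
    "magnitude 3.0 - 3.9":   0.25,
    "magnitude 4.0 - 4.9":   0.10,
    "magnitude 5.0+":        0.03,
}

# The single-answer announcement hides most of the information:
print("Headline: 10% chance of a magnitude 4 earthquake tomorrow")

# Reporting the spectrum shows every outcome that has some chance of occurring:
for outcome, probability in forecast.items():
    print(f"{outcome:>22}: {probability:.0%}")
```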

Change the Model as New Data Comes in #

A statistical forecast should be a calculation that consumes all of the available relevant data. Failing to revise the forecast when new data arrives reduces its accuracy. This is a difficult concept to accept intuitively, as there is a common tendency for us as people to stick to a particular belief or philosophy. There is also the concern of being seen as wishy-washy for changing your opinions quickly, as a political candidate might be. It’s not an issue that can be resolved in all situations, but being open to new data is the important takeaway.

Beware of Overfitting #

The title of the book refers to this idea: separating the signal from the noise in the incoming data, where overfitting means mistaking noise for signal. The number of variables one could feed into a forecast is effectively infinite; as a result, one can build extremely complex statistical models that fit all known data well but perform poorly when evaluated against new data.

This is the same problem that occurs for machine learning algorithms: choosing the right features and datasets plays a central role in a successful machine learning model, as introducing too many variables can result in overfitting.
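
A minimal sketch of this effect, using only numpy (my own example, not one from the book): a high-degree polynomial can match the known data almost perfectly while doing worse on new data drawn from the same process.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n):
    """Noisy observations of a simple underlying signal."""
    x = np.linspace(-1, 1, n)
    return x, np.sin(np.pi * x) + rng.normal(scale=0.2, size=n)

x_known, y_known = sample(15)   # the data the model was built and tuned on
x_new, y_new = sample(15)       # new data from the same process

for degree in (3, 10):
    coeffs = np.polyfit(x_known, y_known, degree)
    known_error = np.mean((np.polyval(coeffs, x_known) - y_known) ** 2)
    new_error = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(f"degree {degree:2d}: error on known data {known_error:.3f}, on new data {new_error:.3f}")
```

The more complex model chases the noise in the known data, which is exactly the trap the book warns about.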

Side note: I think there’s a related machine learning problem that could affect forecasts as well. A model can accidentally be trained to detect a different distinction or occurrence than the one intended, such as the story of a military group that tried to train an algorithm to detect tanks but instead trained it to distinguish rainy days from sunny ones, because all of the pictures with tanks in them were taken on rainy days and the pictures without tanks on sunny days.

The correlation != causation caveat applies to statistical models as well: one could end up trusting a completely unrelated statistical model if, coincidentally, its input data happens to match the expected results. Poor data quality is also a problem for models.

Use Bayesian Inference to Help Evaluate Probabilities #

As a more abstract concept, Bayesian inference is the idea of slowly converging a forecast to the correct model by iteratively adjusting the model in light of new data that validates or invalidates an assumption. The book grounds this process in Bayes’ Theorem, the underlying equation used to factor corrections into the probabilities.
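
For reference, this is Bayes’ Theorem in its standard form, where H is a hypothesis and E is newly observed evidence:

$$
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}
$$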

To begin this process, one requires a base model: this could be a qualitative guess, an empirical observation, or a calculated value. With that as a base, the model is tuned with each incoming data point, which either increases or decreases the estimated likelihood of the event in question.

Bayesian inference is widely applicable because the base model can start from a qualitative assumption and still converge on the correct model: it just takes longer if the initial assumption is incorrect.
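
A small sketch of this idea under assumptions of my own (a binary event with a true rate of 0.7 and a Beta prior; none of the specifics come from the book): two very different starting assumptions converge toward the same estimate as data accumulates, and the poorer prior simply takes longer to get there.

```python
import numpy as np

rng = np.random.default_rng(1)
true_rate = 0.7                               # the value we are trying to learn
observations = rng.random(1000) < true_rate   # stream of incoming data points

# Two starting assumptions, expressed as Beta(alpha, beta) priors:
# one roughly right, one badly wrong.
priors = {"reasonable prior": (7.0, 3.0), "poor prior": (1.0, 20.0)}

for name, (alpha, beta) in priors.items():
    print(name)
    for i, event_occurred in enumerate(observations, start=1):
        # Each incoming data point nudges the estimated likelihood up or down.
        alpha += 1 if event_occurred else 0
        beta += 0 if event_occurred else 1
        if i in (10, 100, 1000):
            print(f"  after {i:4d} observations: estimated rate {alpha / (alpha + beta):.3f}")
```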

I draw from this a rational approach to arriving at a great model: taking a model created with as much relevant data as possible, and slowly correcting it as new data comes in.

Nate makes a great point in this section, specifically about the way we validate or invalidate scientific claims. In an ideal world, all science would reach its conclusions through Bayesian inference: we become more confident in a specific model only as it accurately predicts more outcomes.

The Perceived Value of a Forecast #

One interesting section discussed how the Weather Channel derives the forecasts it presents. Its forecasts could be more accurate, but it chooses to skew the values for a better experience and greater trust from the consumer, with changes such as:

  • rounding away the small decimals to make the prediction easier to consume
  • increasing the stated likelihood of rain: being prepared for rain that never arrives is a small issue in convenience and planning, and the sun is a pleasant surprise, whereas rain when sun is expected can significantly disrupt the planning for the day and, as a result, significantly damage trust in the forecasting system (a small, invented sketch of this kind of adjustment follows this list)
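
Purely as an illustration (the thresholds and amounts below are invented and are not the Weather Channel’s actual method), an adjustment of this kind might look like:

```python
# Invented example of skewing a raw model output for presentation:
# nudge borderline probabilities toward rain and round to a friendlier step.
def presented_forecast(model_probability_of_rain: float) -> int:
    """Turn a raw probability into the percentage shown to viewers."""
    nudged = model_probability_of_rain
    if 0.0 < nudged < 0.5:
        nudged += 0.05                     # lean toward predicting rain in borderline cases
    return round(nudged * 100 / 10) * 10   # round to the nearest 10%

print(presented_forecast(0.23))   # a raw 23% might be shown as 30%
```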

This leads me to a thought I come back to again and again: there is the purely scientific and logical solution to a problem, and then there is the more nuanced problem of how humans interpret and understand the results. The Weather Channel’s choice to willfully reduce the accuracy of its forecasts is an example: a decision driven by understanding which situations are most inconvenient for consumers.

Conclusion #

Overall this was a fantastic read (or listen). It’s a pleasure to have the opportunity to better understand Nate Silver’s thought process, his perspective on the statistical community, and the areas where he believes it can improve.