The future is not written, it is probabilistic.

Let’s discuss how to think about probabilities, and how to objectively assess the accuracy of a forecast.

- The only valid forecasts are those expressed as probabilities.
- A forecast is never simply right or wrong.
- The Brier score measures the accuracy of a forecast.

**Full transcript:**

In this lesson, you’ll learn how to express a forecast as a probability, and how to score a forecast’s accuracy.

People make predictions all the time, but they often use vague language such as “victory is *likely*”, “there is *a chance* of rain”, or “the team is *favorite* to win the game”. Such predictions are quite useless unless they are properly quantified.

“Favorite” just means *more* likely than something else, but it doesn’t tell you *how much* more likely.

“Likely” could mean 51% to 99% or anything in between.

“A chance of” could mean 1% or 99%, or anything in between.

So instead of using vague language, a proper forecast is a *quantitative* estimate of the *likelihood* that a future event will occur.

We are all familiar with one type of forecast: the weather. Saying there’s an 80% chance of a sunny day tomorrow is a proper forecast, in the sense that it has a defined time frame and a precise quantitative value.

But forecasts can also be about sports, economics, geopolitics, even epidemics, as we discussed earlier, or anything else that can be observed in the future.

But what do we actually mean when we say an event has an “80% chance” of happening? Does “80% chance of a sunny day” mean that it *will* be sunny? If it rains, was my forecast right or wrong? Many people have a hard time interpreting numerical probabilities such as “80% chance”. Research suggests that it’s helpful to think in terms of *relative* frequencies.

For example, “80% chance of a sunny day” means that it would be sunny 8 days out of 10. But it also means that it wouldn’t be sunny 2 days out of 10.

For some forecasting problems, it’s possible to look at historical data and obtain a good estimate of relative frequency. For example, if I have to predict whether it will rain in Marrakesh tomorrow, I can google past weather data and learn that it rained 59 days last year. That gives me an estimate of the relative frequency of rainy days in this part of Morocco: about 16%.
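The base-rate estimate above is just a relative frequency, which can be sketched in a couple of lines of Python (using the lesson’s figure of 59 rainy days in a year):

```python
# Historical count of rainy days from the lesson's Marrakesh example.
rainy_days = 59
days_in_year = 365

# Base rate = relative frequency of the event in the historical record.
base_rate = rainy_days / days_in_year
print(f"Estimated base rate of rain: {base_rate:.0%}")
```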

Scientists call this historical frequency the *base rate*: the frequency at which something tends to happen, on average. Base rates are an important concept in forecasting, and we’ll have more to say about them later.

But first, a word on how *not* to think about probabilities. By default, most people treat probability as a dial with just three settings: 0% (“not a chance!”); 100% (“a sure thing”); and 50/50 (“a complete toss-up”). But research shows that the best forecasters tend to use a wide range of probabilities when making their forecasts. Good forecasters are able to distinguish between 50% and 45%, or between 69% and 75%, and so on. They distinguish the many “degrees” of probability, and so should you.

So what can we infer from a forecast that there is an “80% chance” that it will be sunny tomorrow?

For starters, it means there’s a 20% chance that it *won’t* be sunny.

That’s the thing about probabilities: for any set of mutually exclusive outcomes (e.g., either it WILL be sunny tomorrow or it WON’T), the probabilities must always sum to 100%.

Now suppose I forecast that there’s an 80% chance that it will be sunny tomorrow, but then it rains. Was I wrong? Well… I could argue that I also predicted a 20% chance that it *wouldn’t* be sunny – so wouldn’t you agree that I was also a little bit right?

Professional forecasters use an accuracy metric called the *Brier score* to measure the error, or the degree of “wrongness”, of a probability forecast.

This chart shows what the forecast error curve looks like depending on the probability you assigned to an event that eventually happened. If you gave it a 100% chance, then your error is 0. However, if you gave it a 0% chance of happening, then your error is 2. That’s the maximum Brier score error.
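A minimal sketch of that error curve, assuming the two-outcome Brier formulation implied by the chart (error ranges from 0 to a maximum of 2):

```python
def brier_score(p_event: float, event_happened: bool) -> float:
    """Two-outcome Brier score: sum of squared errors over both outcomes.

    0 means a perfect forecast; 2 is the maximum possible error.
    """
    outcome = 1.0 if event_happened else 0.0
    return (p_event - outcome) ** 2 + ((1 - p_event) - (1 - outcome)) ** 2

print(brier_score(1.0, True))   # 0.0 -- certain, and right
print(brier_score(0.0, True))   # 2.0 -- certain, and wrong
print(brier_score(0.8, False))  # forecast 80% sunny, but it rained
```

The lesson’s example of forecasting 80% sunny and then seeing rain yields an error of 1.28: far from perfect, but well short of the maximum 2 you’d incur by forecasting sunshine with 100% certainty.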

So the more chance you gave the event of occurring, the smaller your error is. The mirror curve plots your score in case the event doesn’t happen. The error is then highest if you gave a 100% chance to a non-occurring event. Note, however, that the error curve isn’t symmetric around the 50-50 probability forecast. Which means you can lose more on the wrong side of 50-50 than you can win on the right side of 50-50. This is meant to penalize overconfidence while rewarding humility instead.
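That asymmetry around 50-50 can be checked numerically. A sketch, again assuming the two-outcome Brier formulation: compare what you gain by moving from 50% to 60% when you turn out to be right against what you lose when you turn out to be wrong.

```python
def brier_score(p: float, happened: bool) -> float:
    """Two-outcome Brier score (0 = perfect, 2 = maximally wrong)."""
    outcome = 1.0 if happened else 0.0
    return (p - outcome) ** 2 + ((1 - p) - (1 - outcome)) ** 2

baseline = brier_score(0.5, True)           # 0.5 either way at 50-50
gain = baseline - brier_score(0.6, True)    # error saved if you're right
loss = brier_score(0.6, False) - baseline   # extra error if you're wrong
print(round(gain, 2), round(loss, 2))       # the loss exceeds the gain
```

Because the error curve is quadratic, the penalty grows faster than the reward: nudging a forecast away from 50-50 costs more when you’re wrong (0.22 here) than it earns when you’re right (0.18).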

Learn about the 4 habits of champion forecasters in our next video, “The ABC’s of great forecasting”.


Discover the science of collective intelligence with Emile Servan-Schreiber