How crowd forecasting can help anticipate infectious disease outbreaks

Collective intelligence pushes the limits of forecasting. Crowd forecasting shines when too many variables are involved for a single expert to handle, or when there is too little data to feed an artificial intelligence.

How do we know this? Before and during the Covid-19 pandemic, Hypermind teamed up with the Johns Hopkins Center for Health Security in a large-scale research study aiming to test the epidemiological forecasting skills of several hundred public health experts and other medical professionals.

The results were astonishing!

  • Most individual experts forecast about as accurately as a dart-throwing monkey.
  • But the crowd’s forecast outperforms even the best individual forecasters in the crowd.
  • Reality seemingly aligns itself with the collective forecast.


Community of forecasters

Our international community of thousands of minds makes numerical predictions on specific issues.

Prediction market + algorithm

Our prediction markets and proprietary algorithms combine their diverging perspectives according to the science of collective intelligence.

Reliable forecasts

Anticipate strategic issues: business environment, KPIs, geopolitical events, the economy.

Prediction is the essence of intelligence

It is easy to make fun of people trying to predict the future, but in fact all of us do it all the time. Predictions are essential to our ability to navigate a world where uncertainty is everywhere. The decisions you make, in your life, for your business, or for your country, cannot be smart unless they are informed by reliable predictions. That is why human brains are wired to make predictions all the time.

Cognitive scientists such as Yann LeCun, the artificial-intelligence expert who co-invented deep-learning algorithms, often say that prediction is the essence of intelligence itself.

So if every brain is a forecasting machine, what do you think happens when many brains try to make predictions together? That’s right. They become a super forecasting machine. That is the promise of so-called “crowd forecasting”: using the wisdom of crowds to predict the future.

Prediction markets vs prediction polls

Crowd forecasting usually takes place on a prediction market or a prediction poll, each method having its advantages and weaknesses.

The two methods yield similar results in terms of prediction accuracy, but prediction polls are easier for most people to participate in because they don’t require any familiarity with financial markets.


Prediction markets

A prediction market is an online betting platform where people buy and sell predictions from each other. 

It looks and feels like a financial market, but instead of trading company stocks, participants trade predictions that will turn out to be right or wrong. Shares of correct predictions eventually pay out 100 points, while shares of wrong predictions end up worth nothing. A prediction’s “market price” therefore measures its probability of coming true, according to the many diverging opinions of a crowd of forecasters.
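The payoff logic just described can be sketched in a few lines of Python. The `expected_value` helper and the numbers are illustrative assumptions, not Hypermind’s actual contract mechanics:

```python
# Sketch of the winner-take-all payoff logic described above: shares of a
# correct prediction pay 100 points, shares of a wrong one pay 0.
# The helper and numbers are illustrative, not Hypermind's actual contracts.

def expected_value(price: float, prob_true: float, payoff: float = 100.0) -> float:
    """Expected profit (in points) of buying one share at `price`,
    assuming the event truly has probability `prob_true` of coming true."""
    return prob_true * payoff - price

# A market price of 62 points implies a collective probability of about 62%:
# buying is only profitable if you believe the true chance exceeds 0.62.
print(expected_value(price=62, prob_true=0.75))  # 13.0 -> a believer at 75% should buy
print(expected_value(price=62, prob_true=0.50))  # -12.0 -> a skeptic at 50% should sell
```

Because trading at a mispriced probability is profitable in expectation, every trade nudges the price toward the crowd’s consensus probability.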


Prediction polls

A “prediction poll” is a contest where participants are competing to give the most accurate probabilities for future events. 

Each person shares their probability forecasts without a central marketplace. Sometimes it’s useful to show forecasters what the crowd thinks before they make their own estimate.

Then, smart algorithms consolidate and optimize everyone’s guesswork into a reliable collective forecast.
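As a minimal sketch of the principle (Hypermind’s actual aggregation algorithms are proprietary, so this is only an illustration), the simplest way to consolidate individual probabilities is an average or a median:

```python
# Consolidating individual probability forecasts into a collective one.
# A plain average is the simplest aggregator; the median is a common
# robust alternative that resists extreme outliers.
from statistics import mean, median

# Five hypothetical forecasters' probabilities that an outbreak exceeds a threshold:
forecasts = [0.10, 0.25, 0.30, 0.35, 0.80]

crowd_mean = mean(forecasts)      # ~0.36
crowd_median = median(forecasts)  # 0.30 -- the 0.80 outlier barely moves it
print(crowd_mean, crowd_median)
```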

Crowd forecasting infectious disease with Johns Hopkins

To appreciate crowd forecasting’s potential, you need to see it in action. Consider this experiment in disease prediction, run in 2019 and 2020 by the Center for Health Security at Johns Hopkins University in collaboration with Hypermind. The goal was to predict infectious disease outbreaks around the world. We recruited hundreds of public-health experts, other medical professionals, and some champion forecasters from the Hypermind prediction market. In all, 70% of the participants had some professional medical background.

Over 15 months, before and after the COVID-19 outbreak, we asked them to predict the severity of outbreaks of 19 infectious diseases. We asked 61 questions with a total of 217 possible answers, so they had to give probabilities for 217 predictions, only 61 of which would come true.

For example, at the beginning of the COVID-19 pandemic, we asked: “How many WHO member states will report more than 1,000 confirmed cases of COVID-19 before April 2, 2020?” There were 4 possible answers: 15 or fewer, 16 to 30, 31 to 45, or more than 45.


Individual forecasters performed poorly, although most were health experts

We then measured each forecaster’s average prediction error over all the questions. In this graph, every dot is one forecaster, and the vertical axis plots prediction error: the higher a dot sits, the worse that forecaster performed.

The highest dots are the least accurate forecasters, while the lowest dots are the most accurate.

You can see that most people cluster around the level of “blind chance”: the same error a monkey would achieve by throwing darts at the answers.

Except our participants were not monkeys but, for the most part, medical professionals. Yet only very few of them made substantially better forecasts than a monkey, and many did much worse.
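The article doesn’t name the error metric used, so take the Brier score as an assumption: it is the standard accuracy measure for probability forecasts (lower is better, 0 is perfect). It also makes “blind chance” concrete: a dart-throwing monkey amounts to a uniform probability over a question’s answers.

```python
# Brier score: mean squared error between forecast probabilities and the
# 0/1 outcome vector. (That the study used exactly this metric is an
# assumption; it is the standard choice for probability forecasts.)

def brier(probs: list[float], correct: int) -> float:
    """Lower is better; 0 means a perfect forecast."""
    outcome = [1.0 if i == correct else 0.0 for i in range(len(probs))]
    return sum((p - o) ** 2 for p, o in zip(probs, outcome)) / len(probs)

# A 4-answer question like the COVID-19 case-count example; answer 2 came true.
confident_right = brier([0.05, 0.05, 0.85, 0.05], correct=2)  # ~0.0075
dart_monkey     = brier([0.25, 0.25, 0.25, 0.25], correct=2)  # 0.1875 = "blind chance"
confident_wrong = brier([0.85, 0.05, 0.05, 0.05], correct=2)  # ~0.4075
print(confident_right, dart_monkey, confident_wrong)
```

Forecasters clustering at the monkey’s level of error are, in effect, spreading their probabilities as uninformatively as the uniform guess.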

The enhanced crowd forecast outperformed all experts during our infectious disease prediction contest.

So individually, the forecasters were generally pretty bad. That’s because the predictions are very difficult, and no one has all the information or expertise needed to be correct most of the time. However, if you simply average the forecasts from everyone, that crowd forecast is substantially better than most individuals’.

In fact only 6 individuals managed to beat the crowd, so the basic collective forecast beat 99% of the individuals.

Furthermore, when you enhance the crowd aggregation with a few simple statistical transformations, it outperforms all individuals: the crowd is a better forecaster than the best forecaster in the crowd.

"The crowd is a better forecaster than the best individual forecaster in the crowd."
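One widely documented “simple statistical transformation”, shown here purely as an illustration (Hypermind’s exact enhancements are not described in this article), is extremization: pushing the averaged probability away from 50%, because averaging many noisy opinions drags the result toward maximal uncertainty.

```python
# Extremization: sharpen a crowd-averaged probability by exponentiating
# its odds. Averaging pulls probabilities toward 0.5; this pushes back.
# (A standard trick from the forecasting literature; that Hypermind uses
# this exact transform is an assumption.)

def extremize(p: float, a: float = 2.0) -> float:
    """Map probability p to odds, raise the odds to the power a > 1,
    and convert back to a probability."""
    odds = p / (1.0 - p)
    odds_a = odds ** a
    return odds_a / (1.0 + odds_a)

print(extremize(0.70))  # ~0.84: a mildly confident average becomes more decisive
print(extremize(0.50))  # 0.5: maximal uncertainty is left untouched
```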

How accurate is the crowd's prediction? Calculating market calibration

Another way to check the accuracy of the crowd forecasts is to ask: “of the outcomes forecast with probability p, how many actually occurred?”

To test this, we gathered all the outcomes that were given, say, 30% chance of happening, and then we looked at how many of those outcomes did in fact occur. We found that about 30% of them did occur.

We looked at all the outcomes that were forecasted to have 50% chance of happening, and we found that half of them did occur.

We looked at all the outcomes given 80% chance of happening, and found that 80% did happen. You get the idea.
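The binning procedure just described is easy to sketch with made-up data (the forecasts below are illustrative, not the study’s):

```python
# Calibration check: group forecasts by their stated probability, then
# compare each group's probability with how often those outcomes occurred.
from collections import defaultdict

# (forecast probability, did the outcome actually occur?) pairs -- toy data
forecasts = [
    (0.3, False), (0.3, False), (0.3, True),                            # 1 in 3
    (0.5, True), (0.5, False),                                          # 1 in 2
    (0.8, True), (0.8, True), (0.8, True), (0.8, False), (0.8, True),   # 4 in 5
]

bins = defaultdict(list)
for prob, occurred in forecasts:
    bins[prob].append(occurred)

for prob in sorted(bins):
    observed = sum(bins[prob]) / len(bins[prob])
    print(f"forecast {prob:.0%} -> occurred {observed:.0%}")
# A well-calibrated crowd's "occurred" column tracks its "forecast" column.
```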

There was an almost perfect alignment between the predictions and reality at every level of probability. Here is our calibration data on over 500,000 forecasts on geopolitical, economic, and business prediction questions, and for the 61 questions we asked health experts in our experiment with Johns Hopkins.

Hypermind prediction market calibration graph: predicted probabilities vs. observed frequencies for over 500,000 forecasts on geopolitical, economic, and business prediction questions. We see the same alignment between predicted probabilities and observed frequencies in our disease prediction experiment (61 questions, 217 possible answers on 19 infectious diseases).

Predict business KPIs, geopolitical events and disease outbreaks

It’s not just us. This same amazing calibration pattern has been observed in various domains where crowd forecasting has been applied. 

The pattern appeared, for example, when Google asked 1,500 employees to forecast business-relevant outcomes such as new product launches and sales figures, and when Hypermind analyzed half a million predictions made in its prediction market on 400 geopolitical and economic questions over 5 years.

What that means is not that crowd forecasts can tell you with certainty what will or won’t happen in the future. Only divine beings could do that, perhaps.

But it reveals the uncanny ability of collective intelligence to discern the true chances that something will or won’t happen. It’s not quite divine, but it’s powerful nonetheless.

So if predictions are indeed essential to our ability to navigate a world where uncertainty is everywhere, crowd forecasting is a key resource to help make large organizations, businesses and governments as smart as they can be.