How do we know this? Before and during the Covid-19 pandemic, Hypermind teamed up with the Johns Hopkins Center for Health Security in a large-scale research study to test the epidemiological forecasting skills of several hundred public health experts and other medical professionals.
The results were astonishing:
In this joint study, Hypermind and Johns Hopkins set up a large-scale experiment, begun before the pandemic, to forecast infectious-disease outbreaks (read our in-depth peer-reviewed publication).
The goal of the study was to develop an evidence base for the use of crowd-sourced forecasting as a way to confidently provide information to decision makers in order to supplement traditional surveillance efforts and improve response to infectious disease emergencies.
Over the course of 15 months, from January 2019 to March 2020, we pitted 562 forecasters against one another to predict outcomes for 19 different diseases, including Ebola, cholera, influenza, dengue, and eventually Covid-19.
Example forecasting question:
“How many WHO member states will report more than 1000 confirmed cases of COVID-19 before April 2, 2020?”
Less than 15
16 to 30
31 to 45
More than 45
We measured the average prediction error for each forecaster over all the contest’s questions, and this is what it looks like.
In this graph, every dot is one forecaster. The graph plots prediction error, so the higher a dot sits, the worse that forecaster performed. The highest dots are the least accurate forecasters, while the lowest are the most accurate.
Forecasting a complex problem like infectious disease is hard, very hard.
Most participants cluster around the level of error of “blind chance”: it’s the accuracy you would expect from the proverbial dart-throwing monkey picking answers at random.
Although most participants were medical professionals, very few produced substantially better forecasts than our theoretical dart-throwing chimp, and many did much worse.
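To make "prediction error" concrete: a standard way to score probability forecasts is the Brier score, the squared distance between the stated probabilities and what actually happened. The sketch below is illustrative Python, not the study's exact scoring rule (that is detailed in the publication), and the numbers are made up.

```python
# Illustrative Brier score for a multiple-choice forecast: the squared
# distance between the forecast probabilities and the one-hot vector of
# the realized outcome. Lower is better; 0 is a perfect forecast.

def brier_score(probs, outcome_index):
    """Sum of squared errors between forecast probabilities and the
    realized outcome (1 for the option that happened, 0 elsewhere)."""
    return sum((p - (1.0 if i == outcome_index else 0.0)) ** 2
               for i, p in enumerate(probs))

# A confident, correct forecast on a 4-option question (option 3 occurred):
good = brier_score([0.05, 0.10, 0.15, 0.70], outcome_index=3)

# "Blind chance": uniform probabilities over the 4 options.
chance = brier_score([0.25, 0.25, 0.25, 0.25], outcome_index=3)

print(good, chance)  # the confident correct forecast scores far lower
```

Averaging such scores over all of a forecaster's questions gives the per-forecaster error plotted in the graph.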
Simply averaging individual forecasts produced an aggregate crowd forecast (in pink) that outperformed all but 6 participants (99% of forecasters).
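The averaging itself is as simple as it sounds; a minimal sketch with made-up numbers:

```python
# Sketch of the "simple average" crowd forecast: each forecaster submits
# a probability vector over a question's answer options, and the crowd
# forecast is the unweighted mean. All numbers are illustrative.

forecasts = [
    [0.10, 0.20, 0.30, 0.40],  # forecaster A
    [0.05, 0.15, 0.40, 0.40],  # forecaster B
    [0.20, 0.30, 0.30, 0.20],  # forecaster C
]

n_options = len(forecasts[0])
crowd = [sum(f[i] for f in forecasts) / len(forecasts)
         for i in range(n_options)]

print(crowd)  # a valid probability vector: entries sum to 1
```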
When enhanced by a few intuitive statistical transformations, the crowd forecasts (below in red) outperformed even the best forecaster in the crowd.
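One common transformation of this kind is "extremizing": pushing the averaged probabilities away from the uniform baseline to undo the dilution that averaging causes. The sketch below illustrates that general idea, not the specific transformations used in the study (see the publication for those); the exponent is an arbitrary assumption.

```python
# Illustrative extremizing: raise each averaged probability to a power
# d > 1, then renormalize. Large probabilities grow and small ones
# shrink, making the crowd forecast more decisive. The exponent d is a
# tunable assumption, not a value from the study.

def extremize(probs, d=2.0):
    powered = [p ** d for p in probs]
    total = sum(powered)
    return [p / total for p in powered]

crowd = [0.12, 0.22, 0.33, 0.33]
sharp = extremize(crowd)
print(sharp)  # still sums to 1, but more concentrated on the leaders
```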
In other words, the smartest forecaster on disease prediction is not a person, but a crowd. It is also notable that a crowd of skilled forecasters with no particular domain expertise was just as accurate as the crowd of public-health experts.
The outcome probabilities forecasted by the crowd were also well "calibrated", in the sense that they closely tracked actual outcome frequencies in the real world: about 20% of all outcomes forecast with 20% probability did occur, 80% of all outcomes forecast with 80% probability did occur, and so on at every level of probability.
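Calibration can be checked mechanically: group forecasts by their stated probability and compare with how often those outcomes actually occurred. A sketch on synthetic data:

```python
# Sketch of a calibration check: bin forecasts by stated probability and
# compare each bin's stated probability with the realized outcome
# frequency. Records are synthetic (probability, did-it-happen) pairs.

from collections import defaultdict

records = [
    (0.2, False), (0.2, False), (0.2, True), (0.2, False), (0.2, False),
    (0.8, True), (0.8, True), (0.8, False), (0.8, True), (0.8, True),
]

bins = defaultdict(list)
for prob, occurred in records:
    bins[prob].append(occurred)

observed = {prob: sum(flags) / len(flags) for prob, flags in bins.items()}

for prob in sorted(observed):
    print(f"forecast {prob:.0%} -> occurred {observed[prob]:.0%}")
```

Perfect calibration means each bin's observed frequency matches its stated probability, as in this toy data.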
Of course, some of the best forecasters were both domain experts and reliable generalist forecasters, meaning that expertise still matters, but forecasting skill matters just as much.
Do not trust individual experts; only trust crowds of experts
Crowds of skilled forecasters are just as accurate as crowds of experts
Leverage whichever is most easily available and affordable: experts, skilled forecasters, or both!
It is easy to make fun of people trying to predict the future, but in fact all of us do it all the time. Predictions are essential to our ability to navigate a world where uncertainty is everywhere. The decisions you make, in your life, for your business, or for your country, cannot be smart unless they are informed by reliable predictions. That is why human brains are wired to make predictions all the time.
Cognitive scientists such as Yann LeCun, the artificial-intelligence expert who co-invented deep-learning algorithms, often say that prediction is the essence of intelligence itself.
So if every brain is a forecasting machine, what happens when many brains try to make predictions together? They become a super forecasting machine. This is the promise of so-called “crowd forecasting”: using the wisdom of crowds to predict the future.
Crowd forecasting usually takes place on a prediction market or a prediction poll, each method having its advantages and weaknesses.
The two methods yield similar results in terms of prediction accuracy, but prediction polls are easier for most people to participate in because they don’t require you to be familiar with financial markets.
A prediction market is an online betting platform where people buy and sell predictions from each other.
It looks and feels like a financial market, but instead of trading company stocks, participants trade predictions that end up being right or wrong. Shares of correct predictions eventually pay 100 points, while shares of wrong predictions become worthless. A prediction's "market price" measures its probability of coming true, according to the many diverging opinions of a crowd of forecasters.
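This is why the price reads as a probability: a share pays 100 points if the prediction comes true and 0 otherwise, so buying at price q only breaks even when the event's probability is q/100. A toy sketch (illustrative numbers, not Hypermind's actual market mechanics):

```python
# Toy illustration of why a prediction share's price acts as a
# probability. A share pays `payoff` points if the prediction comes
# true and 0 otherwise, so the expected profit of buying at `price`
# is zero exactly when the true probability equals price / payoff.

def expected_profit(price, probability, payoff=100):
    return probability * payoff - price

# A share trading at 60 points implies a 60% crowd probability:
print(expected_profit(60, 0.60))  # break-even for a 60% believer
print(expected_profit(60, 0.75))  # positive for someone who believes 75%
```

Traders who think the market price understates the true probability buy, and those who think it overstates it sell, which is what pushes the price toward the crowd's consensus probability.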
A “prediction poll” is a contest where participants are competing to give the most accurate probabilities for future events.
Each person shares their probability forecasts without a central marketplace. Sometimes it’s useful to show forecasters what the crowd thinks before they make their own estimate.
Then, smart algorithms consolidate and optimize everyone's guesswork into a reliable collective forecast.