No Wisdom in Crowds? One Head May Be Better Than Two or 22

Is a better decision made by the crowd or a few experts, like the people on the U.S. Supreme Court? Nikki Kahn/The Washington Post via Getty Images

More than a century ago, English polymath Sir Francis Galton set out to demonstrate the ignorance of the masses and accidentally proved the wisdom of crowds. As writer James Surowiecki recounted in his 2005 book, appropriately titled "The Wisdom of Crowds," Galton attended a livestock fair where locals were asked to guess the weight of an ox. Galton collected their guesses intending to show that not one of the 800 submissions was correct, which was true. But when Galton graphed the distribution of the wrong answers, he made an unsettling discovery: the mean (or average) of the 800 answers was 1,197 pounds (543 kilograms), just one pound shy of the ox's actual weight.

According to the "wisdom of crowds" theory, the more individual data points you collect, the more accurate your final answer becomes. So if two heads are better than one, then three or four (or 2,451,897) should be much, much better. By that logic, we should scrap the Supreme Court and the Federal Reserve and submit every important decision to a mass email survey.
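
The statistical intuition behind the theory is simple: if each guess equals the true value plus some random, unbiased error, those errors cancel out as you average more guesses together. Here's a minimal Python sketch of that idea (the 75-pound error spread is a made-up assumption, not data from Galton's fair):

```python
# Toy model of the "wisdom of crowds": each guess is the true weight
# plus random, unbiased error, so averaging makes the errors cancel.
# The 75-lb error spread is a hypothetical assumption, not Galton's data.
import random

TRUE_WEIGHT = 1198  # pounds; the ox's weight in Surowiecki's account

def crowd_estimate(n_guessers: int) -> float:
    """Average n noisy individual guesses into one crowd estimate."""
    guesses = [random.gauss(TRUE_WEIGHT, 75) for _ in range(n_guessers)]
    return sum(guesses) / n_guessers

random.seed(1907)  # the year Galton published his ox analysis
for n in (1, 10, 100, 800):
    print(f"{n:>3} guessers -> crowd estimate ~{crowd_estimate(n):.0f} lb")
```

A lone guesser can miss by 100 pounds or more, while the average of 800 typically lands within a couple of pounds, which is the pattern Galton stumbled onto.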

Then again, maybe there’s a limit to our collective wisdom.

Mirta Galesic is a professor of human social dynamics at the Santa Fe Institute in New Mexico, where she studies how people make decisions, particularly within groups and social networks. In a fascinating 2016 paper published in the journal Decision, Galesic not only shows that large crowds can often get it wrong, but also that sometimes one randomly selected head is better than a hundred.

Why Bigger Isn’t Always Better

Galesic and her colleagues at Berlin’s Max Planck Institute for Human Development first questioned the "wisdom of the crowd" theory when they noticed that many of the world’s most important decisions are made by moderately sized groups.

"A jury in most countries is six to 15 people and central bank boards are five to 12 people," Galesic says. "If the wisdom of crowds is so important, why don’t we make these groups much larger? Why don’t we use the internet and online teleconferencing to have juries with 150 people?"

Because, as Galesic discovered through her research, not all questions are equal. First, the wisdom of crowds can only be tested using questions with a single verifiable right answer: How many gumballs are in the jar? Which of these two candidates will win the election? It doesn’t work, for example, with juries, because you can never be sure whether a verdict of innocent or guilty was ultimately right or wrong. Some guilty people really do get away with murder.

The wisdom of crowds also doesn’t apply to popular referenda like the "Brexit" vote or state ballot measures to legalize gay marriage, Galesic explains. "Those kinds of questions are a matter of personal preference. There are people who, even after the fact, will never agree that a popular-vote decision was wrong or right."

Second, the wisdom of crowds falls short when the question is really, really hard. One of the most surprising conclusions of Galesic’s paper is that for certain highly difficult decisions, you’d be better off asking one or two random experts than polling a hundred of them. But why?

Galesic used statistical modeling and computer simulations to analyze the results of experiments where groups of experts weighed in on quantitative questions. A panel of physicians, for example, was asked to diagnose a hypothetical patient exhibiting a set of symptoms. Economists were asked to forecast the unemployment rate for the next year. And political scientists were asked to predict the outcome of an election. 

When the task was easy — a common ailment for the doctors, or a landslide election for the political scientists — a larger sample set resulted in more accurate predictions, so the "wisdom of crowds" theory held true. But when the task proved difficult — as in the 2000 U.S. presidential election between George W. Bush and Al Gore — the opposite effect took hold.

"In the 2000 election, most of the experts got it wrong," says Galesic. "If you take the majority of a group that’s mostly wrong, you have a 100 percent chance of getting the wrong answer. In that case, you’d be better off choosing one expert randomly out of a hundred and maybe, by chance, you’d pick one who got it right."

The Right Number of Experts

Of course, in real life it’s impossible to know if the next task is going to be easy or hard. And the Fed can’t reasonably add 1,000 board members when the economy is running strong and cut back to two or three when we’re in a recession. That’s why Galesic’s computer models spit out a solution that would make the Three Bears proud. How big should most committees be to accurately handle a variety of easy and hard tasks? Not too big and not too small — just right.

For example, how many political scientists should you consult for the most accurate election predictions in any contest? Five. How many doctors do you need to get the most accurate diagnosis? Eleven. And how many economists do you need to most accurately predict macroeconomic shifts? Seven, which is precisely the number of seats on the Federal Reserve’s Board of Governors.
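
The Goldilocks effect shows up even in the toy model above. In the sketch below, committees of different sizes face a hypothetical mix of tasks: two easy ones, where each expert is right 90 percent of the time, and one hard one, where experts are right only 45 percent of the time. The mix is an assumption for illustration, not a figure from Galesic's paper:

```python
# Toy committee-size experiment (not Galesic's actual simulation):
# average majority-vote accuracy over a mix of easy and hard tasks.
import random

TASKS = [0.9, 0.9, 0.45]  # hypothetical mix: two easy tasks, one hard

def avg_accuracy(n_members: int, trials: int = 20_000) -> float:
    """Mean majority-vote accuracy of an n-member committee over the mix."""
    wins = 0
    for _ in range(trials):
        p = random.choice(TASKS)  # draw a task difficulty at random
        votes = sum(random.random() < p for _ in range(n_members))
        wins += votes > n_members / 2
    return wins / trials

for n in (1, 5, 11, 101):
    print(f"committee of {n:>3}: accuracy ~{avg_accuracy(n):.3f}")
```

Lone experts fumble the easy tasks, 101-member committees lock in the wrong answer on the hard one, and small-to-moderate committees post the best overall average.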

"Maybe that reflects some intuition about the optimum number of people in such committees," says Galesic. So, maybe the crowd isn’t so dumb after all. 

Now That’s Interesting

Sir Francis Galton’s fame and influence extend far beyond the ox experiment. In his meteorological studies, he created the first weather map. During his research into human intelligence and twins, he coined the term "nature and nurture." He also established the first fingerprint classification system. Sadly, Galton’s greatest passion was eugenics, and he worked tirelessly to prove a connection between race and moral character.
