In 2010, two famous economists, Carmen Reinhart and Kenneth Rogoff, released a paper confirming what many fiscally conservative politicians had long suspected: that a country’s economic growth tanks if public debt rises above a certain percentage of GDP. The paper fell on the receptive ears of the UK’s soon-to-be chancellor, George Osborne, who cited it multiple times in a speech setting out what would become the political playbook of the austerity era: slash public services in order to pay down the national debt.
There was just one problem with Reinhart and Rogoff’s paper. They’d inadvertently left five countries out of their analysis, running the numbers on just 15 countries instead of the 20 they thought they’d selected in their spreadsheet. When some lesser-known economists adjusted for this error, and a few other irregularities, the most attention-grabbing part of the results disappeared. The relationship between debt and GDP was still there, but the effects of high debt were more subtle than the drastic cliff-edge alluded to in Osborne’s speech.
Scientists—like the rest of us—are not immune to errors. “It’s clear that errors are everywhere, and a small portion of these errors will change the conclusions of papers,” says Malte Elson, a professor at the University of Bern in Switzerland who studies, among other things, research methods. The issue is that there aren’t many people who are looking for these errors. Reinhart and Rogoff’s mistakes were only discovered in 2013 by an economics student whose professors had asked his class to try to replicate the findings in prominent economics papers.
With his fellow meta-science researchers Ruben Arslan and Ian Hussey, Elson has set up a way to systematically find errors in scientific research. The project—called ERROR—is modeled on bug bounties in the software industry, where hackers are rewarded for finding errors in code. In Elson’s project, researchers are paid to trawl papers for possible errors and awarded bonuses for every verified mistake they discover.
The idea came from a discussion between Elson and Arslan, who encourages scientists to find errors in his own work by offering to buy them a beer for each typo they identify (capped at three per paper) and to pay €400 ($430) for an error that changes the paper’s main conclusion. “We were both aware of papers in our respective fields that were totally flawed because of provable errors, but it was extremely difficult to correct the record,” says Elson. All these published errors could pose a big problem, Elson reasoned. If a PhD researcher spent her degree pursuing a result that turned out to be an error, that could amount to tens of thousands of wasted dollars.
Error-checking isn’t a standard part of publishing scientific papers, says Hussey, a meta-science researcher at Elson’s lab in Bern. When a paper is submitted to a scientific journal—such as Nature or Science—it is sent to a few experts in the field who offer their opinions on whether the paper is high-quality, logically sound, and makes a valuable contribution to the field. These peer reviewers, however, typically don’t check for errors, and in most cases they won’t have access to the raw data or code that they’d need to root out mistakes.
The end result is that published science is littered with all kinds of very human errors—like copying the wrong value into a form, failing to squash a coding bug, or missing rows in a spreadsheet. The ERROR project pairs authors of influential scientific papers with reviewers who go through their work looking for errors. Reviewers get paid up to 1,000 Swiss francs ($1,131) to review a paper and earn bonuses for identifying minor, moderate, and major errors. The original authors are also paid for submitting their paper. ERROR has 250,000 Swiss francs from the University of Bern to pay out over four years, which should be enough for about 100 papers.
Jan Wessel, a cognitive neuroscientist at the University of Iowa, was the first scientist to have his work checked by ERROR. Elson already knew Wessel and asked him whether he’d like to take part in the project. Wessel agreed, on the proviso that he submit a paper on which he was the sole author. If a major error turned up, he wanted it to be clear that the mistake was his alone, rather than risk jeopardizing the career of a colleague or former student.
Wessel offered a paper he’d published in 2018 and was paired with Stanford neuroscientist Russ Poldrack, who checked the paper for errors. Wessel’s paper was about a common test used in neuroscience to measure impulsivity and inhibition, and it involved taking data from lots of other published studies to see how different versions of that test compare. In his review of Wessel’s study, Poldrack found that the neuroscientist had occasionally made mistakes when extracting the data from those preexisting studies. The errors weren’t enough to skew the results of the paper, but Wessel was still surprised at just how many there were.
“I was shocked at the amount of errors that Russ found,” says Wessel. Poldrack had sampled only a small subset of the 241 papers covered in Wessel’s paper, so the neuroscientist decided to go back and find the true error rate. Wessel asked his lab colleagues to work through all the remaining papers and check for instances where a figure in an original paper didn’t match the one Wessel had put in his work. They found that, for one variable, Wessel had recorded the incorrect value about 9 percent of the time.
What was even more interesting were the mistakes that Wessel’s error-hunting researchers made. Even though they knew they were looking for errors, Wessel’s colleagues made errors at an even greater rate—nearly 13 percent of the time. Like Wessel, they copied down the wrong number or misread a value in a paper. Wessel was so intrigued by this that he ran an analysis to see how likely it was that two people would make the exact same mistake on any one of the papers they examined. He discovered there was a more than 50 percent chance of that happening at some point across the 241 papers.
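That figure follows from straightforward probability. As a rough sketch—not Wessel’s actual analysis—suppose each paper carries the same small, independent chance of two people recording the identical wrong value; the odds of at least one such coincidence across 241 papers then climb quickly. The 0.3 percent per-paper rate below is purely illustrative, since the article doesn’t give the number Wessel used.

```python
# Rough sketch (not Wessel's actual analysis): if each of the n papers has the
# same small, independent chance p of two people recording the *identical*
# wrong value, the chance of at least one such coincidence is 1 - (1 - p)^n.
n = 241  # papers in Wessel's dataset

def chance_of_shared_mistake(p_per_paper: float, n_papers: int = n) -> float:
    """Probability of at least one identical duplicated error, assuming independence."""
    return 1 - (1 - p_per_paper) ** n_papers

# An illustrative per-paper coincidence rate of 0.3 percent is already
# enough to push the overall chance past 50 percent.
print(round(chance_of_shared_mistake(0.003), 3))  # -> 0.515
```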
For meta-science researchers, none of this is surprising. If you look hard enough, you’ll start to find errors everywhere. As more researchers have started to work with very large datasets and complex code, the potential for errors has increased, says Poldrack. One potential issue is that if a bug in some code leads to a particularly interesting scientific result—the kind that could turn into a great research paper—there is no real incentive for scientists to squash that bug. The only reward for their diligence would be scrapping the research. No bug, no paper.
Changing the culture around scientific error could make it less likely for mistakes to end up in published work. Surgeons dissect their mistakes in morbidity and mortality meetings, which are supposed to be a judgment-free space where doctors figure out how to stop the same situation happening again. The same is true of plane crash investigations. The Convention on International Civil Aviation states that the purpose of investigations is not to apportion blame, but to figure out how to prevent similar accidents in the future.
Error-checking needs to be rewarded, says Brian Nosek, executive director of the nonprofit Center for Open Science (COS) and a member of the ERROR advisory board. “It could be something that is a career-enhancing prospect.” The COS is currently partnering with ERROR as part of a trial called Lifecycle Journals that is aimed at making scientific publishing more transparent and rigorous.
Even the most assiduous error-hunters are limited by their access to data. The errors in the Reinhart and Rogoff paper were discovered only after the economists shared their working spreadsheet with curious researchers. Elson is targeting authors of influential psychology papers, but they’re not obliged to take part. Only between a quarter and a third of researchers he emails actually reply, estimates Elson. It could be that scientists who suspect their work is full of errors simply choose not to expose themselves to that kind of scrutiny, skewing the results of the project.
Elson and Hussey know their project is open to these kinds of biases. They see it as more of a work in progress, an example of how error-correction might be incorporated into the scientific process. Elson says the approach might be of interest to journals, universities, and particularly funders, who are keen to see that the work they’re paying for has the impact they hope it will.
“My hope for this is that it goes from something that was unimaginable until it happened, and then it was unthinkable not to do it,” says Hussey. “You have to give people the permission, and incentive, to think in the first place that errors might exist.”
Updated 6-21-2024 3:00 pm ET: The spelling of Ruben Arslan’s name was corrected.