An overwhelming amount of research is published each year. The estimate for 2014 was 2.5 million articles, and that number is sure to be higher today. Still, the publication record is only a tiny slice of all the research data in existence around the world. Results that are inconclusive or challenge our assumptions are frequently hidden in lab notebooks, never to be shared. These data represent the "dark matter" of our research universe - the overwhelming majority of knowledge binding together the bright, shining, published articles.
But is the current system really best for science?
There are several consequences of keeping negative results hidden. For one, sharing a failed experiment may spare other research groups from wasting time and money on the same idea. Even if another lab wanted to try a similar experiment, it could make informed changes based on the previous attempt.
Second, the current tendency to focus on splashy results may be leading to false conclusions. In a famous 2005 paper in PLOS Medicine, John Ioannidis even argued that most published findings are false. Imagine that the same experiment is repeated by 20 labs, but only one lab finds a significant result (say, p < 0.05). Then, that positive result is published. The other nineteen labs likely assume that they did something wrong, and ignore their negative findings. Looking at this from a different perspective, though, we might expect one significant result out of twenty by random chance alone, and the published result may in fact be spurious.
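To see why one "significant" lab out of twenty is unsurprising, consider the arithmetic: if a treatment truly has no effect, each lab still has a 5% chance of crossing the p < 0.05 threshold, and p-values under the null are uniformly distributed. A minimal sketch (an illustrative calculation, not from the original post) in Python:

```python
import random

random.seed(0)
alpha, n_labs = 0.05, 20

# Analytic: probability that at least one of 20 null experiments
# clears p < 0.05 by chance alone is 1 - (1 - 0.05)^20
analytic = 1 - (1 - alpha) ** n_labs

# Monte Carlo check: under the null hypothesis, p-values are
# uniform on [0, 1], so a "hit" is just random() < alpha
trials = 100_000
hits = sum(
    any(random.random() < alpha for _ in range(n_labs))
    for _ in range(trials)
)
print(f"analytic: {analytic:.2f}, simulated: {hits / trials:.2f}")
```

Both come out around 0.64: in nearly two out of three such twenty-lab scenarios, at least one lab will see a spurious positive result even when there is nothing to find.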
From the perspective of basic science, hiding negative results creates waste. But when considering clinical trials, lives may be on the line. The need to see the results of clinical trials, whether positive or negative, has led to the insistence that clinical trials be registered publicly. Unfortunately, while many clinical trials are eventually published, negative results (where the treatment being tested had no effect) are statistically less likely to be published. And negative trial results that are published come out about one year later than positive studies.
Despite the tens of thousands of available journals, places to send negative results are exceedingly scarce.
- The journal F1000Research, famous for posting articles prior to an open peer review, welcomes negative results.
- BMC Psychology was created largely in response to the chronic underreporting of negative results in that field*.
- Publishing giant PLOS ONE, the original megajournal, also accepts negative, null, or inconclusive results.
- A few scattered journals explicitly welcome negative results, including the Journal of Negative Results in Biomedicine and the fledgling All Results journals.
However, these journals are not highly utilized, and lower citation rates for negative studies [subscription required] likely make other journals wary of specifically requesting similar articles. A central publicly curated database of negative results, patterned after Wikipedia, might be a simple solution. If researchers do not receive credit toward tenure or job positions, however, there is little incentive to participate.
What do you think? Would you take the time to write up negative results if there were a simple template and some credit for your efforts? Are there better ways to make sure that research effort, time, and funds aren't wasted? Given the power of the internet and the recent focus on making research dollars count, it seems like there should be some solution for hosting negative data.
E-mail us with your thoughts - we’d love to hear them!
* Special thanks to Jon Brock (@DrBrocktagon) for mentioning PsychFileDrawer, a quick and easy way to report replication attempts in psychology. PsychFileDrawer is another great example of psychology researchers exploring new ways to ensure that all results are shared.