The National Institute of Replicating Discoveries, Y’all (NIRDY)

Sometimes when you say something on Twitter people respond. People don’t respond that much to what I have to say but, now and then, there’s enough of a reaction to help me realize that an idea was meaningful beyond the moment it popped into my mind and made its way onto the keyboard. So, thanks to those people for starting the conversation.

The idea I had today is that some scientific disciplines could benefit from more replication. And what better way to do it than to have Big Brother audit your science and see if they can replicate in their lab what you did in yours? The idea stemmed from my own feelings about my field. I’ve had serious thoughts lately about trying to replicate a couple of findings that have had a lot of influence in the field. They’re important findings. They reveal key functions of new neurons that could be relevant for human health. For this reason the whole field is aware of them, cites them, and uses them as justification for additional research. Sooo, then why haven’t they been replicated?

Pretty much since day one (of my science career) it’s been clear that fascinating studies appear and steer the entire field, usually in a good way. Sometimes, however, they’re such a perfect story that they go unquestioned or untested. And so, in thinking about contemporary examples, I’ve realized that I would really like to replicate them myself. Just to see, to know. Not because I don’t believe them, not to build on them (necessarily), but because I think it’s important to know whether they really hold, whether the outcome confirms them or not. I think science is ripe for this, given journals like PLoS ONE, whose aim is not to provide “interesting” science, but valid and useful science. And even if not in a traditional publication, there are new tools like Figshare that allow datasets of all sizes to be archived, shared, and cited with persistent identifiers.

But can you make a living by replicating studies? I thought about how lucky I am that I could pursue a project (mini-project?) whose aim is simply to replicate others’ work. This might not be as easy when I get my own lab. Do such grants exist?

That’s when NIRDY came to mind: the National Institute of Replicating Discoveries, Y’all. This is a non-existent governmental research institution (NEGRI) that would replicate a sampling of important scientific studies. Instead of only funding research, shouldn’t the government go back and double-check? Wouldn’t it be funny if the government couldn’t replicate any of the research they fund? Wouldn’t it be great if they could?

Factors to be considered when creating NIRDY

  • How would they pick studies to replicate? Well, they’d be smart, first off. Maybe they could poll scientific communities to see which studies are cited the most, discussed the most on social networks, or immediately important for human health (a toy sketch of what such a scoring scheme might look like follows this list). Or just go after everything published in Nature or Science.
  • What effect would this have on future publications? Would scientists be more careful when publishing exciting results, to ensure validity?
  • If NIRDY is ever born, can I get a job? Seriously, some of us get a kick out of systematically manipulating dozens of variables to figure out the exact conditions that are required to observe a scientific phenomenon. This relates to the question of how much replication must occur. One answer might be: until the original finding is replicated. Replication could happen on the first try. If not, maybe additional experiments are worthwhile. This was a problem with the role of neurogenesis in fear conditioning – very confusing until it was recently totally solved (though maybe I should replicate the entire study just to make sure).
  • It can be very hard to replicate findings even with relatively standard techniques. Then what do we do about experiments where the technology is so advanced that very few labs are capable of replicating them? Sounds like a job for NIRDY.
  • All of this could also be good for reagent sharing. I can think of a number of classic papers that used fancy mice, and in the years, nay decades, since, I can’t recall ever seeing those mice used by another group or in another paper.
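
To make the first bullet concrete, here’s a back-of-the-envelope sketch, in Python, of how NIRDY might triage candidate studies. Everything in it is invented for illustration: the field names, the weights, and the example “studies” are all hypothetical.

```python
# Hypothetical NIRDY triage: rank candidate studies for replication.
# All field names, weights, and data below are invented for illustration.

def replication_priority(study, w_citations=1.0, w_buzz=0.5, w_health=2.0):
    """Score a study: more citations, more social-media buzz, and direct
    relevance to human health all push it up the replication queue."""
    return (w_citations * study["citations"]
            + w_buzz * study["social_mentions"]
            + w_health * (100 if study["health_relevant"] else 0))

candidates = [
    {"title": "Perfect story in a glam mag", "citations": 850,
     "social_mentions": 400, "health_relevant": True},
    {"title": "Solid but unglamorous methods paper", "citations": 120,
     "social_mentions": 15, "health_relevant": False},
]

# Highest-priority studies first.
for study in sorted(candidates, key=replication_priority, reverse=True):
    print(f"{replication_priority(study):8.1f}  {study['title']}")
```

Of course, the real fights would be over the weights.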

And what happens if a study simply cannot be replicated?

Jail, obviously.

—————————————

Update: Alex Wiltschko had an interesting point, that replication could be a useful method for training scientists (or driving them batty):

In terms of career logistics, this makes a lot of sense for techs and postdocs, who get to thoroughly learn several state-of-the-art techniques in the course of replicating an experiment.

6 Comments:

  1. I love this idea. I’ve always been frustrated with the lack of replication too. Plus, the acronym is just too awesome; how could any scientist NOT be on board with NIRDY?

  2. It’s not all grunt work either – NIRDY (love the acronym) could be responsible for both replication AND alternative directions/explanations. There are a lot of glam-mag publications with data accompanied by a great story that becomes dogma, but sometimes there are plausible, testable alternative explanations…

  3. Hilarious and insightful. Love the comment that to truly replicate some studies the mice should be re-used.
    I sent this to the top 80+ consciousness researchers in the world.
    Melvin L Morse MD

  4. This is an interesting idea. I do not know if it would slow down the progress of science or speed it up. It may take longer for science to progress but, in the long run, it may yield more accurate predictions and distill the more reliable findings… but is it worth the time? If the government did this, how would it impact clinical trials of novel experimental procedures/drugs, or even basic mechanistic research? The NIH (collectively) has a responsibility of, yes, correcting science where needed, but also of producing novel science (maybe dedicate a branch of the NIH to NIRDY)

    Of course, some reproduction of older experiments already occurs when new work builds atop them. It’s the experiments whose methodology is confusing, whether in technique or in the number of experiments needed to make a point, that are unlikely to be reproduced. The simple yet elegant experiments, by contrast, become more reliable, either because they are simple to do in the lab or because they cut straight to the point of the research question at hand, and new work needs to show “this” in order to build atop it.

    Jason, is this a more filling comment as compared to my others?

  5. The idea is good. However, a huge obstacle is ahead: the way science is done, and the way it is “rewarded”, are flawed. Why? Because for scientists, the only way to “survive” is by publishing so-called “original science”. But who decided that? The journals and their editors, who in turn are worried about one thing: impact factor. And who invented the impact factor? The ISI (Institute for Scientific Information). Do you know who owns ISI? Thomson Reuters! Yes, that infamous ISI (and Thomson Reuters by extension) is the diabolical influence that has corrupted the whole field of science, by tagging and ranking everything it wants with numbers. So here we are, all scientists struggling to produce papers describing new “discoveries”, because that is what gets published! No journal is really interested in publishing replication work. Period. Or when that happens, it is because the authors have been clever or because the journal in question has nothing better (read: a “discovery”) to publish! This frantic pace has obviously led to the publication of dubious results and flawed experiments with wrong conclusions, as Ioannidis pointed out:
    http://www.theatlantic.com/magazine/print/2010/11/lies-damned-lies-and-medical-science/8269/

    We need a revolution. We need change, but the system is corrupted to the core.

    • Yeah, I appreciate your frustration, but I don’t think it’s so dire. The situation is bad, but enough people like you have identified that the problems exist, and now, in fact, there are journals that will publish findings that are “merely” replications and that don’t decide the publishability of a paper based on arbitrary judgements of interestingness (e.g. PLoS ONE). There are also alternative means for publishing data online in a non-peer-reviewed manner, and tools are slowly being developed that allow these types of contributions to be tracked (thus, incentives aside from impact factor are emerging).
