[Illustration: a repeated pattern of identical yellow file folders on a green background. Credit: Carol Yepes/Getty]

Studies that try to replicate the findings of published research are hard to come by: it can be difficult to find funders to support them and journals to publish them. And when these papers do get published, it’s not easy to locate them, because they are rarely linked to the original studies.

A database described in a preprint posted in April^1 aims to address these issues by hosting replication studies from the social sciences and making them more traceable and discoverable. It was launched as part of the Framework for Open and Reproducible Research Training (FORRT), a community-driven initiative that teaches the principles of open science and reproducibility to researchers.

The initiative follows other efforts to improve the accessibility of replication work in science, such as the Institute for Replication, which hosts a database listing studies published in selected economics and political science journals that academics can choose to replicate.

The team behind the FORRT database hopes that it will draw more attention to replication studies, which it argues are a fundamental part of science. The database can currently be accessed through a Shiny web application, and will soon be available on the FORRT website.

Nature Index spoke to one of the project’s leaders, Lukas Röseler, a metascience researcher and director of the University of Münster’s Center for Open Science in Germany.

Why did you create this database?

We’re trying to make it easier for researchers to make their replication attempts public, because it’s often difficult to publish them, regardless of their outcome.

We also wanted to make it easier to track replication studies. If you’re building on previous research and want to check whether replication studies have already been done, it’s often difficult to find them, partly because journals tend not to link them to the original work.

We started out with psychology, which has been hit hard by the replication crisis, and have branched out to studies in judgement and decision-making, marketing and medicine. We are now looking into other fields to understand how their researchers conduct replication studies and what replication means in those contexts.

Who might want to use the database?

A mentor of mine wrote a textbook on social psychology and said that he had no easy way of screening his 50 pages of references for replication attempts. Now, he can enter his references into our database and check which studies have been replicated.
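In practice, that kind of screening is a straightforward membership lookup. The following Python sketch is purely illustrative; the column names (original_doi, outcome) are assumptions rather than the database’s actual schema:

```python
import pandas as pd

# Hypothetical extract of the replication database; these column
# names are assumptions for illustration, not the actual schema.
db = pd.DataFrame({
    "original_doi":    ["10.1000/a", "10.1000/a", "10.1000/b"],
    "replication_doi": ["10.2000/x", "10.2000/y", "10.2000/z"],
    "outcome":         ["success", "failure", "success"],
})

# DOIs cited in a textbook's reference list.
references = ["10.1000/a", "10.1000/c"]

# Which references have at least one logged replication attempt?
screened = db[db["original_doi"].isin(references)]
print(screened.groupby("original_doi")["outcome"].value_counts())
```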

The database can also be used to determine the effectiveness of certain procedures by tracking the replication history of studies. Nowadays, for instance, academics are expected to pre-register their studies — publishing their research design, hypotheses and analysis plans before conducting the study — and to make their data freely available online. We would like to test empirically whether interventions such as these affect how likely a study’s findings are to replicate.
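One way to make such a check concrete is a simple cross-tabulation of replication outcomes by practice. The sketch below uses invented column names (preregistered, replicated) and toy data; a real analysis would also need to adjust for confounders such as field and publication year:

```python
import pandas as pd

# Toy per-finding records; 'preregistered' and 'replicated' are
# assumed columns, not the database's actual fields.
findings = pd.DataFrame({
    "preregistered": [True, True, False, False, True, False],
    "replicated":    [True, False, False, True, True, False],
})

# Replication rate by whether the original study was pre-registered.
print(findings.groupby("preregistered")["replicated"].mean())
```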

How is the database updated?

It is currently an online spreadsheet, which we created by manually adding the original findings, their replication studies and their outcomes. So far, we have more than 3,300 entries — or replication findings — of just under 1,100 original studies. There are often multiple findings in one study; a replication study might include attempts to replicate four different findings, constituting four entries.
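That one-to-many relationship between original studies and findings can be pictured with a small data model. The field names below are invented for illustration and do not reflect the spreadsheet’s actual columns:

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    # One database entry: a single original finding paired with
    # one replication attempt and its outcome.
    description: str
    outcome: str  # e.g. "success" or "failure"

@dataclass
class ReplicationStudy:
    doi: str
    original_doi: str
    findings: list[Finding] = field(default_factory=list)

# One replication study can contribute several entries:
study = ReplicationStudy(
    doi="10.2000/replication",
    original_doi="10.1000/original",
    findings=[Finding(f"finding {i}", "success") for i in range(1, 5)],
)
assert len(study.findings) == 4  # four findings, four database entries
```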

Hundreds of volunteers are collecting replications and logging studies in the spreadsheet. You can either just enter a study so that it’s findable, or include both the original study and the replication findings.

We are in contact with teams that conduct a lot of replication research, and we regularly issue calls for people to add their studies. This is a crowdsourced effort, and a large proportion of it is based on the FORRT Replications and Reversals project, which is also crowdsourced. That project collates replications and ‘reversal effects’ in the social sciences: replication attempts whose results point in the opposite direction to the original finding.

Do you plan to automate this process?

We are absolutely looking into ways to automate this. For instance, we are working on a machine-readable manuscript template: people write their manuscript in the template, and its details can then be read automatically into the database.
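The template is still in development, so the following is only a guess at the general idea: a manuscript could carry its key results as structured metadata, for example a YAML block, that a script parses straight into database fields. Every field name here is hypothetical:

```python
import yaml  # PyYAML; the metadata layout below is purely hypothetical

manuscript_header = """
original_doi: 10.1000/original
replication_doi: 10.2000/replication
findings:
  - description: effect of X on Y
    original_effect_size: 0.45
    replication_effect_size: 0.12
"""

record = yaml.safe_load(manuscript_header)
for finding in record["findings"]:
    print(record["original_doi"],
          finding["description"],
          finding["replication_effect_size"])
```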

We have code that automatically recognizes DOIs and cross-checks them against all the original studies in the database to find matches. We are working on turning this into a search engine, but that is beyond our capabilities and resources at the moment.
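DOI recognition of this sort is typically regex-based. Below is a minimal sketch, with the pattern modelled on Crossref’s commonly cited recommendation and the database index mocked as a plain set:

```python
import re

# A widely used pattern for modern DOIs (modelled on Crossref's
# published recommendation); production matching needs more care.
DOI_PATTERN = re.compile(r"10\.\d{4,9}/[-._;()/:A-Za-z0-9]+")

known_originals = {"10.1000/a", "10.1000/b"}  # mocked database index

def match_dois(text: str) -> list[str]:
    """Extract DOIs from free text and keep those already in the database."""
    return [doi for doi in DOI_PATTERN.findall(text) if doi in known_originals]

print(match_dois("See 10.1000/a and 10.1000/zzz for details."))
# -> ['10.1000/a']
```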

Does your database provide any data on the replications it hosts?

If you go to our website, there is a replication tracker, where you can see the percentage of replication attempts that reproduced the original findings and the percentage that failed to do so.

In a version of the database that we will launch in the coming months, users will be able to choose the criteria by which they judge whether a study successfully replicated the original findings. Right now, it’s all based on how strong the effect sizes — a measure of the relationship between two variables — were in both the original study and the replication attempts, but there are many other criteria and metrics of replication success that we are considering.
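Some of these criteria are easy to state compactly. The functions below are a simplified illustration rather than the project’s actual scoring code; the thresholds are assumptions:

```python
def same_direction(orig_es: float, rep_es: float) -> bool:
    # Weakest criterion: the replication effect points the same way.
    return orig_es * rep_es > 0

def comparable_size(orig_es: float, rep_es: float, ratio: float = 0.5) -> bool:
    # Effect-size criterion: the replication effect is at least `ratio`
    # of the original (the threshold is an assumption for illustration).
    return same_direction(orig_es, rep_es) and abs(rep_es) >= ratio * abs(orig_es)

def significant(rep_p: float, alpha: float = 0.05) -> bool:
    # Significance criterion: the replication is significant on its own.
    return rep_p < alpha

# Example: original r = 0.45; replication r = 0.12 with p = 0.21.
print(same_direction(0.45, 0.12))   # True
print(comparable_size(0.45, 0.12))  # False
print(significant(0.21))            # False
```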

We’re also planning to launch a peer-reviewed, open-access journal at FORRT to publish replication studies from various disciplines.

This interview has been edited for length and clarity.

Nature Index’s news and supplement content is editorially independent of its publisher, Springer Nature. For more information about Nature Index, see the homepage.


