Efforts to enhance reproducibility in a human performance research project [version 1; peer review: 2 approved with reservations]
Published in | F1000Research, Vol. 12, p. 1430 |
---|---|
Format | Journal Article |
Language | English |
Published | England: F1000 Research Ltd, 2023 |
ISSN | 2046-1402 |
DOI | 10.12688/f1000research.140735.1 |
Summary:
Background: Ensuring the validity of results from funded programs is a critical concern for agencies that sponsor biological research. In recent years, the open science movement has sought to promote reproducibility by encouraging sharing not only of finished manuscripts but also of the data and code supporting their findings. While these innovations have lent support to third-party efforts to replicate calculations underlying key results in the scientific literature, fields of inquiry where privacy considerations or other sensitivities preclude the broad distribution of raw data or analysis may require a more targeted approach to promote the quality of research output.
Methods: We describe efforts oriented toward this goal that were implemented in one human performance research program, Measuring Biological Aptitude, organized by the Defense Advanced Research Projects Agency's Biological Technologies Office. Our team implemented a four-pronged independent verification and validation (IV&V) strategy comprising 1) a centralized data storage and exchange platform, 2) quality assurance and quality control (QA/QC) of data collection, 3) test and evaluation of performer models, and 4) an archival software and data repository.
Results: Our IV&V plan was carried out with assistance from both the funding agency and participating teams of researchers. QA/QC of data acquisition aided in process improvement and the flagging of experimental errors. Holdout validation set tests provided an independent gauge of model performance.
Conclusions: In circumstances that do not support a fully open approach to scientific criticism, standing up independent teams to cross-check and validate the results generated by primary investigators can be an important tool to promote reproducibility of results.
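The holdout validation set tests mentioned under Results can be illustrated with a minimal sketch. None of the details below come from the article itself: the record format, split fraction, and mean-absolute-error metric are assumptions chosen only to show how an independent evaluation team might withhold labels and score a performer model's predictions.

```python
import random

# Minimal sketch of a holdout validation set test: an independent test-and-
# evaluation (T&E) team withholds a fraction of labeled records, collects
# predictions from a performer model on those records, and scores the
# predictions against the withheld labels.

def split_holdout(records, holdout_fraction=0.2, seed=0):
    """Partition records into a development set (shared with the performer
    team) and a holdout set (retained by the evaluation team)."""
    rng = random.Random(seed)
    shuffled = records[:]
    rng.shuffle(shuffled)
    n_holdout = int(len(shuffled) * holdout_fraction)
    return shuffled[n_holdout:], shuffled[:n_holdout]

def score_predictions(holdout, predictions):
    """Mean absolute error between withheld labels and performer predictions,
    matched by record id."""
    errors = [abs(rec["label"] - predictions[rec["id"]]) for rec in holdout]
    return sum(errors) / len(errors)

if __name__ == "__main__":
    # Synthetic records standing in for de-identified study data.
    rng = random.Random(1)
    records = [{"id": i, "label": rng.gauss(50.0, 10.0)} for i in range(100)]
    development, holdout = split_holdout(records)

    # A performer model would normally supply these predictions; here they are
    # faked as the true label plus noise so the script runs end to end.
    predictions = {rec["id"]: rec["label"] + rng.gauss(0.0, 5.0) for rec in holdout}

    print(f"Development records shared with performer: {len(development)}")
    print(f"Holdout records withheld by T&E team:      {len(holdout)}")
    print(f"Holdout mean absolute error: {score_predictions(holdout, predictions):.2f}")
```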
Bibliography: LLNL-JRNL-849393. Funding: Defense Advanced Research Projects Agency (DARPA); USDOE National Nuclear Security Administration (NNSA); award numbers AC52-07NA27344 and HR001119S0021-MBA-FP-004. No competing interests were disclosed.