We aim to investigate how statistical tools can help detect potential data fabrication in the social and medical sciences. In this proposal we outline three projects to assess the value of such statistical tools and to take the first steps toward applying them automatically to detect data anomalies that are potentially due to data fabrication. In Project 1, we examine the performance of statistical methods to detect data fabrication in a mixture of genuine and fabricated data sets, where the fabricated data sets are generated by actual researchers who participate in our study. We also interview these researchers in order to investigate, in Project 2, different characteristics of data fabrication and whether data generated with certain characteristics are better detected with current statistical tools than data generated with others. In Project 3, we use software to semi-automatically screen research articles for data anomalies that are potentially due to fabrication, and we develop and test new software that forms the basis for fully automated screening of research articles in the future.
Problem statement

There is a clear need to develop and validate statistical tools to detect (potential) data fabrication in the social and medical sciences. Around 2% of researchers admit to having falsified or fabricated research data at least once in their professional career (Fanelli 2009), yet only about a dozen cases per year are discovered in the U.S. and the Netherlands combined, which together employ roughly 300,000 researchers. Arguably, then, the cases that are discovered are only the tip of the iceberg. Considering that data fabrication undermines the epistemological pursuit of science and negatively affects the validity of published findings, fairness in the scientific reward system, and trust in science, it is important to improve its detection.

In the last decade, journals have started using automated tools as screening devices to detect image manipulation and plagiarism in submitted or accepted articles. Numerous case studies (e.g., in the Journal of Cell Biology) suggest that these screening tools are effective and useful for detecting various forms of research misconduct. In the social and medical sciences, however, image detection tools are of little use because data are primarily quantitative, based on observations of behavior, questionnaires, (cognitive) tests, etc. Despite their potential use as a screening device, there are currently no well-established tools to detect data anomalies (potentially) due to fabrication of quantitative data.

Statistical tools to detect data fabrication have been successful in several ad hoc investigations in the social and medical sciences, of which the Diederik Stapel case is perhaps the most well known. As in the Fujii case in anesthesiology (Carlisle 2012), the statistical results reported in Stapel's articles allowed for statistical tests indicating that his results were too good to be true (Levelt Committee et al. 2012), a type of check sketched below. Similarly, the raw data...
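To make concrete the kind of check such "too good to be true" tests perform, the following is a minimal sketch in Python. It rests on the assumption, used in Carlisle-style screening, that p-values from independent baseline comparisons in a genuinely randomized study are approximately uniform on [0, 1]; a strong departure from uniformity (e.g., p-values clustering tightly around 0.5) can flag a study for closer inspection. The function name, cutoff, and example values are hypothetical illustrations, not the actual procedure used in the cited investigations.

from scipy import stats

def screen_baseline_pvalues(pvals, alpha=0.05):
    """Flag a set of baseline-comparison p-values whose distribution
    departs from the uniformity expected under genuine randomization."""
    # Kolmogorov-Smirnov test of the reported p-values against Uniform(0, 1).
    ks_statistic, ks_pvalue = stats.kstest(pvals, "uniform")
    return {"ks_statistic": ks_statistic,
            "ks_pvalue": ks_pvalue,
            "anomalous": ks_pvalue < alpha}

# Hypothetical example: baseline p-values clustered tightly around 0.5 are
# "too good"; genuine randomized data would show far more spread.
reported = [0.48, 0.51, 0.49, 0.52, 0.50, 0.47, 0.53, 0.50, 0.49, 0.51]
print(screen_baseline_pvalues(reported))

Note that a single uniformity test is only one ingredient: fabricated data can deviate in either direction (p-values too uniform or too extreme), so a practical screening tool would combine many such checks across variables and studies.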