2019
DOI: 10.1027/2151-2604/a000356

Scientific Misconduct in Psychology

Abstract: Spectacular cases of scientific misconduct have contributed to concerns about the validity of published results in psychology. In our systematic review, we identified 16 studies reporting prevalence estimates of scientific misconduct and questionable research practices (QRPs) in psychological research. Estimates from these studies varied due to differences in methods and scope. Unlike other disciplines, there was no reliable lower bound prevalence estimate of scientific misconduct based on identified…

Cited by 21 publications (9 citation statements)
References 46 publications

“…Rubbo (2017) found that the majority of retractions (~65%) were issued by editors in the field of engineering. Stricker and Günther (2019) found that 0.82 papers were retracted per 10,000 journal articles in the field of psychology due to scientific misconduct. According to Wang et al. (2019), time to retraction was longest for fraud/suspected fraud and shortest for authorship disputes.…”
Section: Previous Studies
confidence: 99%
“…Thus, plain language summaries facilitate access to research outputs for the general public, which has been discussed as a promising avenue for sustaining trust in science (Grand et al., 2012; Pittinsky, 2015). We argue that sustaining trust in science is especially relevant in (social) psychology, a discipline which investigates topics of high relevance to the public, but which also struggles with a replicability crisis (Klein et al., 2018; Open Science Collaboration, 2015; Świątkowski & Dompnier, 2017) and has been stricken by several misconduct cases lately (Callaway, 2011; Stricker & Günther, 2019; Świątkowski & Dompnier, 2017). Without this trust, scientific findings are likely at risk of being marginalized, which may even lead to a proliferation of conspiracy theories.…”
Section: Introduction
confidence: 97%
“…After all, even if we could classify all fabricated data correctly and falsely regard genuine data as fabricated in 5% of the cases, then with a prevalence of 2% (Fanelli, 2009) the positive predictive value would only be 29%. This is a best-case scenario (see also Stricker & Günther, 2019) in which only approximately 1 out of 3 cases of 'detected data fabrication' would be an actual fabrication. Hence, we do not recommend attempting to detect data fabrication based on statistical methods alone.…”
Section: Discussion
confidence: 99%
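
As a worked illustration of the positive-predictive-value arithmetic in the statement above, the following minimal Python sketch reproduces the quoted best-case scenario: perfect detection of fabricated data, a 5% false-positive rate for genuine data, and a 2% prevalence of fabrication (Fanelli, 2009). The variable names are illustrative and not taken from the cited papers.

sensitivity = 1.00          # best case: every fabricated dataset is flagged
false_positive_rate = 0.05  # 5% of genuine datasets are wrongly flagged
prevalence = 0.02           # assumed share of fabricated datasets (Fanelli, 2009)

true_positives = sensitivity * prevalence
false_positives = false_positive_rate * (1 - prevalence)

ppv = true_positives / (true_positives + false_positives)
print(f"PPV = {ppv:.2f}")   # ~0.29: only about 1 in 3 flagged cases is truly fabricated

Under these assumptions the positive predictive value is roughly 0.29, which is why the quoted statement cautions against relying on statistical methods alone to detect data fabrication.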