2020
DOI: 10.3389/fpsyg.2019.02893

Enhancing Statistical Inference in Psychological Research via Prospective and Retrospective Design Analysis

Abstract: In the past two decades, psychological science has experienced an unprecedented replicability crisis which uncovered several problematic issues. Among others, the use and misuse of statistical inference plays a key role in this crisis. Indeed, statistical inference is too often viewed as an isolated procedure limited to the analysis of data that have already been collected. Instead, statistical reasoning is necessary both at the planning stage and when interpreting the results of a research project. Based on t…

Cited by 25 publications (57 citation statements)
References 47 publications
“…The latter not only makes it difficult to distinguish true results from false positive results, but also inflates the risk of overestimating effect sizes. This risk can be defined a priori as the "exaggeration ratio" that indicates how much an effect size will be overestimated on average in comparison with a plausible true effect size given that statistical significance is reached (e.g., Gelman & Carlin, 2014; see also Altoè et al., 2020). Unfortunately, researchers who test treatment efficacy in learning disorders frequently encounter this problem.…”
Section: Introduction (mentioning, confidence: 99%)
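The "exaggeration ratio" (Type M error) quoted above can be estimated by simulation. A minimal sketch, not the cited authors' code: the plausible effect size d = 0.35 and the group size of 20 are illustrative assumptions, chosen to show how an underpowered design inflates significant estimates.

```python
import numpy as np
from scipy import stats

def design_analysis_ttest(true_d, n_per_group, alpha=0.05,
                          n_sims=50_000, seed=1):
    """Simulate two-group experiments to estimate power, Type S error,
    and the exaggeration ratio (Type M error) for an independent-samples
    t-test, given a plausible true standardized effect `true_d`."""
    rng = np.random.default_rng(seed)
    g1 = rng.normal(true_d, 1.0, size=(n_sims, n_per_group))
    g2 = rng.normal(0.0, 1.0, size=(n_sims, n_per_group))
    t, p = stats.ttest_ind(g1, g2, axis=1)
    # observed Cohen's d with pooled SD, one value per simulated study
    sp = np.sqrt((g1.var(axis=1, ddof=1) + g2.var(axis=1, ddof=1)) / 2)
    d_obs = (g1.mean(axis=1) - g2.mean(axis=1)) / sp
    sig = p < alpha
    power = sig.mean()
    type_s = np.mean(d_obs[sig] < 0)               # wrong-sign significant results
    type_m = np.mean(np.abs(d_obs[sig])) / true_d  # exaggeration ratio
    return power, type_s, type_m

# illustrative: a plausible d of 0.35 with only 20 participants per group
power, type_s, type_m = design_analysis_ttest(0.35, 20)
```

With such a small sample, power is low and the average significant estimate overshoots the plausible effect by a factor of roughly two, which is exactly the risk the quoted passage describes.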
“…This might not accurately reflect researchers' expectations, which can instead be better captured by an interval of values with an associated probability distribution. This suggestion is currently under development in the work of Altoè et al. (2019). Finally, in the current work, we chose Cohen's d as an effect size measure to illustrate a design analysis because it is widely used in psychology.…”
Section: Discussion (mentioning, confidence: 99%)
“…Power, Type M error and Type S error as a function of sample size in an independent-samples t-test, assuming a plausible effect size equal to a Cohen's d of 0.35 and α of 0.05. Reprinted from 'Enhancing statistical inference in psychological research via prospective and retrospective design analysis,' by G. Altoè et al., 2019.…”
(mentioning, confidence: 98%)
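The figure described in that caption (power, Type S, and Type M as functions of sample size for d = 0.35, α = 0.05) can be reproduced in outline with a Gelman & Carlin (2014)-style "retrodesign" computation. A hedged sketch: it uses the normal approximation rather than the exact t distribution, and the sample sizes in the loop are illustrative.

```python
import numpy as np
from scipy import stats

def retrodesign(d, se, alpha=0.05, n_sims=100_000, seed=2):
    """Given a plausible true effect `d` and the standard error of its
    estimate, return power and Type S analytically (normal approximation)
    and the Type M exaggeration ratio by simulating the sampling
    distribution of the estimate."""
    rng = np.random.default_rng(seed)
    z = stats.norm.ppf(1 - alpha / 2)
    power = stats.norm.cdf(d / se - z) + stats.norm.cdf(-d / se - z)
    type_s = stats.norm.cdf(-d / se - z) / power
    est = rng.normal(d, se, n_sims)        # sampling distribution of estimates
    sig = np.abs(est) > z * se
    type_m = np.mean(np.abs(est[sig])) / d  # exaggeration ratio
    return power, type_s, type_m

# the curves from the figure setup: d = 0.35, alpha = 0.05, n per group varying
for n in (20, 50, 100, 200):
    se = np.sqrt(2 / n)                    # approx. SE of Cohen's d
    print(n, retrodesign(0.35, se))
```

As n grows, power rises toward 1 while the exaggeration ratio shrinks toward 1, which is the qualitative pattern the figure shows.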
“…The course lasts 42 h (11 weeks, 21 lessons, with two weekly sessions lasting 90 min each). Since we could not plan the number of participants in advance, we ran a retrospective power analysis via simulation on the number of participants collected to test the reliability of our results (Altoè et al., 2020). Hypothesizing a plausible correlation of 0.30 between exam results and the use of quizzes and out-of-class activities (Castillo-Manzano et al., 2016; Hunsu et al., 2016; Sung et al., 2016; Lei et al., 2018), 294 participants yielded a power of 0.84, with a plausible magnitude error of 0.72 and no sign error.…”
Section: Methods (mentioning, confidence: 99%)