2015
DOI: 10.1515/spp-2015-0001

Substantive Importance and the Veil of Statistical Significance

Abstract: Political science is gradually moving away from an exclusive focus on statistical significance and toward an emphasis on the magnitude and importance of effects. While we welcome this change, we argue that the current practice of “magnitude-and-significance,” in which researchers only interpret the magnitude of a statistically significant point estimate, barely improves the much-maligned “sign-and-significance” approach, in which researchers focus only on the statistical significance of an estimate. This exclu…

Cited by 13 publications (7 citation statements)
References 15 publications
“…This is statistically different from the influence of business interests at the 0.1 level. Because samples as large as the one in this study deflate p-values (McCaskey and Rainey, 2015), this level of significance is weak evidence for an actual difference in influence. We therefore assess the substantive meaning of the range of predicted effects included in the 90% confidence intervals, as suggested by McCaskey and Rainey (2015).…”
Section: Results (contrasting)
confidence: 57%
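The practice described in the statement above, reading substantive meaning off the full range of a 90% confidence interval rather than off a bare p-value, can be illustrated with a minimal sketch. The data, variable names, and the threshold for a "substantively meaningful" effect below are all hypothetical; the code mirrors the general recommendation, not the citing study's actual model.

```python
# Minimal sketch: judge substantive importance over the whole 90% CI,
# in the spirit of McCaskey and Rainey (2015). All names and values below
# are hypothetical, including the "smallest meaningful effect" m.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000                                   # a large sample deflates p-values
business = rng.normal(size=n)              # hypothetical predictor
y = 0.05 * business + rng.normal(size=n)   # small true effect

X = sm.add_constant(business)
fit = sm.OLS(y, X).fit()

lower, upper = fit.conf_int(alpha=0.10)[1]  # 90% CI for the slope
m = 0.10  # hypothetical smallest effect that would matter substantively

print(f"90% CI for the effect: [{lower:.3f}, {upper:.3f}]")
if abs(lower) >= m and abs(upper) >= m and np.sign(lower) == np.sign(upper):
    print("All plausible effects are substantively meaningful.")
elif max(abs(lower), abs(upper)) < m:
    print("Even the largest plausible effect is substantively negligible.")
else:
    print("The interval mixes meaningful and negligible effects; inconclusive.")
```

The point of the three-way check is that a "significant" estimate can still sit entirely below the threshold that matters, and an interval that straddles the threshold supports no firm substantive conclusion either way.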
“…In so doing, we pay attention to our 95% confidence intervals. This allows us to discuss which effect sizes we are able to rule out, an approach intuitively similar to the equivalence testing suggested by Hartman and Hidalgo (2018) and others in the literature on null effects and statistical/substantive significance (Gross 2015; McCaskey and Rainey 2015; Rainey 2014). In our models below, we use Hartman and Hidalgo’s default value for equivalence testing (36% of a standard deviation) and test whether our effects are distinct from that benchmark.…”
Section: Methods (mentioning)
confidence: 99%
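For readers unfamiliar with the equivalence-testing logic invoked here, the following is a generic two one-sided tests (TOST) sketch against the 0.36-standard-deviation bound mentioned above. The estimate, standard error, and degrees of freedom are assumed for illustration; this is not Hartman and Hidalgo's (2018) own procedure or code.

```python
# Minimal TOST sketch: test H0: |effect| >= bound against H1: |effect| < bound,
# with bound = 0.36 SD (the default mentioned above). Inputs are hypothetical.
from scipy import stats

def tost_equivalence(estimate, std_error, df, bound):
    """Return the TOST p-value for equivalence within +/- bound."""
    t_lower = (estimate + bound) / std_error  # tests effect > -bound
    t_upper = (estimate - bound) / std_error  # tests effect < +bound
    p_lower = 1 - stats.t.cdf(t_lower, df)    # one-sided p-value, lower test
    p_upper = stats.t.cdf(t_upper, df)        # one-sided p-value, upper test
    return max(p_lower, p_upper)              # reject H0 only if both reject

# Hypothetical estimate expressed in standard-deviation units of the outcome.
p_equiv = tost_equivalence(estimate=0.10, std_error=0.08, df=500, bound=0.36)
print(f"TOST p-value: {p_equiv:.3f}")  # small p -> effect confined to +/- 0.36 SD
```

A small TOST p-value is evidence that the effect lies inside the equivalence bound, which is the mirror image of the usual test: here the burden of proof is on showing the effect is negligible, not on showing it is nonzero.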
“…Therefore, we follow their suggestion to accept uncertainty and to report the original two-tailed p-values in separate columns to avoid the use of arbitrary statistical thresholds. Note that, statistically, p-values cannot play the role of t-statistics because they can only tell us ‘the maximum probability of obtaining hypothetical data at least as extreme as the observed data if the null hypothesis were true’ (McCaskey and Rainey 2015).…”
Section: Empirical Analyses and Results (mentioning)
confidence: 99%
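The definition quoted above can be made concrete with a small computation: a two-tailed p-value is the probability, under the null hypothesis, of observing a test statistic at least as extreme as the one actually obtained. The t-statistic and degrees of freedom below are hypothetical.

```python
# Minimal sketch of the quoted definition of a two-tailed p-value.
from scipy import stats

t_stat, df = 2.1, 150                            # hypothetical t-statistic and df
p_two_tailed = 2 * stats.t.sf(abs(t_stat), df)   # P(|T| >= |t_stat|) under H0
print(f"two-tailed p-value: {p_two_tailed:.3f}")
```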