2017
DOI: 10.7287/peerj.preprints.3411v1
Preprint
Manipulating the alpha level cannot cure significance testing – comments on "Redefine statistical significance"

Abstract: We argue that depending on p-values to reject null hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious for the finding of new discoveries and the progress of science. Given that blanket and variable criterion levels both are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and determining sample sizes much more directly than significance testing does…
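To make concrete the practical cost the abstract attributes to a stricter threshold, here is a minimal sketch (not from the preprint) of how the required per-group sample size changes when alpha moves from .05 to .005, using the standard two-sample normal-approximation formula n ≈ 2((z₁₋α/₂ + z_power)/d)². The effect size d = 0.5 and 80% power are illustrative assumptions, not values taken from the paper.

```python
# Illustrative sketch (not from the preprint): approximate per-group sample
# size for a two-sided, two-sample comparison under the normal approximation
#   n ≈ 2 * ((z_{1 - alpha/2} + z_{power}) / d) ** 2
# Effect size d = 0.5 and power = 0.80 are assumed purely for illustration.
from math import ceil

from scipy.stats import norm


def n_per_group(alpha: float, power: float = 0.80, d: float = 0.5) -> int:
    """Approximate per-group n for a two-sided two-sample z/t test."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_power = norm.ppf(power)          # quantile matching the desired power
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)


n_05 = n_per_group(alpha=0.05)    # about 63 participants per group
n_005 = n_per_group(alpha=0.005)  # about 107 participants per group
print(n_05, n_005, round(n_005 / n_05 - 1, 2))  # roughly a 70% increase
```

Under these assumptions, holding power fixed at 80% while tightening alpha to .005 raises the required sample size by roughly 70%, which is the kind of cost to discovery the authors point to.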

Cited by 4 publications (10 citation statements)
References 23 publications
“…Statistics is often more misleading than useful (17,18). Comparing the randomly generated results with the empirically obtained frequency distribution produces probability values of rejecting the null hypothesis close to absolute 0.…”
Section: Methods
confidence: 97%
“…“I find it especially troubling”—she continues—“to spoze an error statistician…ought to use a Bayes Factor as the future gold standard for measuring his error statistical tool…even though Bayes Factors don’t control or measure error probabilities” (2017b). Furthermore, Mayo pinpoints the (old) fallacy of transposing the conditional, whereby the (error) probability of a test is confused with the (posterior) probability of a belief (also Trafimow et al., 2017). And despite “60 years (sic) old…demonstrations [showing] that with reasonable tests and reasonable prior probabilities, the disparity vanishes…they still mean different things” (2017c).…”
Section: Counter-arguments
confidence: 99%
“…Lakens et al. (2017) add lack of experimental redundancy, logical traps, research opacity, and poor accounting of sources of error, as well as the risks of reduced generalisability and research breadth were Benjamin et al.’s proposal to succeed. Methodological concerns were also raised by Amrhein & Greenland (2017); Black (2017); Byrd (2017); Chapman (2017); Crane (2017); Ferreira & Henderson (2017); Greenland (2017); Hamlin (2017); Kong (2017); Lew (2017); Llewelyn (2017); Martin (2017); McShane et al. (2017); Passin (2017); Steltenpohl (2017); Trafimow et al. (2017); Young (2017); Zollman (2017); and Morey (2017). Some researchers even propose the use of preregistration as a way of minimizing the above problems (Hamlin, 2017; Llewelyn, 2017; van der Zee, 2017)…”
Section: Counter-arguments
confidence: 99%
“…Addressing replication directly, Chapman (2017) and Trafimow et al. (2017) point out that the problem with replication is not too many false positives but insufficient power. Krueger (2017; also McShane et al., 2017, Trafimow et al., 2017) chides Benjamin et al. for the incoherence of considering replication as order-dependent and inverting the exploratory-confirmatory nature of replication by proposing to make the former more difficult to achieve and the latter more liberal.…”
Section: Counter-arguments
confidence: 99%