“…Dichotomization, in conjunction with misleading terminology, propagates cognitive biases that seduce researchers into making logically inconsistent and overconfident inferences, both when p is below and when it is above the "significance" threshold. The following errors seem to be particularly widespread:1

1) use of p-values when there is neither random sampling nor randomization
2) confusion of statistical and practical significance, or complete neglect of effect size
3) unwarranted binary statements of there being an effect as opposed to no effect, coming along with
   - misinterpretation of p-values below 0.05 as posterior probabilities of the null hypothesis
   - mixing up of estimation and testing, and misinterpretation of "significant" results as evidence confirming the coefficients/effect sizes estimated from a single sample
   - treatment of "statistically non-significant" effects as being zero (confirmation of the null)
4) inflation of evidence caused by unconsidered multiple comparisons and p-hacking
5) inflation of effect sizes caused by considering "significant" results only

1 See, for example, McCloskey and Ziliak (1996), Sellke et al. (2001), Ioannidis (2005), Ziliak and McCloskey (2008), Krämer (2011), Ioannidis and Doucouliagos (2013), Kline (2013), Colquhoun (2014), Gelman and Loken (2014), Motulsky (2014), Vogt et al. (2014), Gigerenzer and Marewski (2015), Greenland et al. (2016), Hirschauer et al. (2016; 2018), Wasserstein and Lazar (2016), Ziliak (2016), Amrhein et al. (2017), and Trafimow et al. (2018). This list contains but a small selection of the literature on p-value misconceptions from the last 20 years.…”
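
The effect-size inflation described in point 5 can be illustrated with a short simulation. The following sketch is not from the quoted source; all parameters (a true standardized effect of 0.2, samples of size 30, a crude two-sided z-test at 0.05) are illustrative assumptions. It repeatedly "runs" a study, then compares the average estimated effect across all studies with the average across only those that cleared the significance threshold.

```python
import random
import statistics

random.seed(1)

# Illustrative assumptions (not from the source): many studies estimate a
# small true effect from noisy samples of modest size.
TRUE_EFFECT = 0.2   # true standardized mean difference (assumed)
N = 30              # sample size per study (assumed)
STUDIES = 5000      # number of simulated studies

all_estimates = []
significant_estimates = []

for _ in range(STUDIES):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N)]
    mean = statistics.fmean(sample)
    se = statistics.stdev(sample) / N ** 0.5
    all_estimates.append(mean)
    # crude two-sided z-test at the conventional 0.05 threshold
    if abs(mean / se) > 1.96:
        significant_estimates.append(mean)

print("mean estimate, all studies:        %.3f"
      % statistics.fmean(all_estimates))
print("mean estimate, 'significant' only: %.3f"
      % statistics.fmean(significant_estimates))
```

Under these assumptions the average over all studies recovers the true effect (about 0.2), while the average over "significant" studies alone is substantially larger, because only samples that happened to overestimate the effect clear the threshold. This is the selection mechanism behind the inflation the authors describe.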