Abstract: The "New Statistics" emphasizes effect sizes, confidence intervals, meta-analysis, and the use of Open Science practices. We present three specific ways in which a New Statistics approach can help improve scientific practice: by reducing overconfidence in small samples, by reducing confirmation bias, and by fostering more cautious judgments of consistency. We illustrate these points through consideration of the literature on oxytocin and human trust, a research area that typifies some of the endemic problems t…
“…And if all the values within the confidence interval are biologically unimportant, then a statement that your results indicate no important effect can also be made [11]. (This is an example of where focusing on effect size and uncertainty also allows clear yes/no interpretations if desired; see also [31].)…”
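The interpretive rule quoted above can be expressed as a simple check. The sketch below is illustrative only: the function name, the magnitude-based comparison, and the three-way verdict labels are our choices, and the smallest biologically important effect is a threshold the analyst must supply.

```python
def interpret_ci(ci_low, ci_high, smallest_important):
    """Interpret a confidence interval for an effect against a
    smallest-biologically-important effect size (a hypothetical
    analyst-chosen threshold, not something this snippet can supply).

    Rule sketched from the quoted passage: if every value in the
    interval is smaller in magnitude than the threshold, the data
    indicate no important effect; if every value is at least as large
    (and the interval excludes zero), the effect is important;
    otherwise the result is inconclusive.
    """
    # Whole interval lies inside the "trivially small" zone
    if max(abs(ci_low), abs(ci_high)) < smallest_important:
        return "no important effect"
    # Whole interval lies beyond the threshold, on one side of zero
    if min(abs(ci_low), abs(ci_high)) >= smallest_important and ci_low * ci_high > 0:
        return "important effect"
    return "inconclusive"
```

For example, an interval of (-0.1, 0.2) against a threshold of 0.3 supports a clear "no important effect" statement, while (0.1, 0.8) does not license a yes/no verdict either way.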
Section: Effect Size and Confidence Interval: How Much And How Accurate?
The p-value has long been the figurehead of statistical analysis in biology, but its position is under threat. p is now widely recognized as providing quite limited information about our data, and as being easily misinterpreted. Many biologists are aware of p's frailties, but less clear about how they might change the way they analyse their data in response. This article highlights and summarizes four broad statistical approaches that augment or replace the p-value, and that are relatively straightforward to apply. First, you can augment your p-value with information about how confident you are in it, how likely it is that you will get a similar p-value in a replicate study, or the probability that a statistically significant finding is in fact a false positive. Second, you can enhance the information provided by frequentist statistics with a focus on effect sizes and a quantified confidence that those effect sizes are accurate. Third, you can augment or substitute p-values with the Bayes factor to inform on the relative levels of evidence for the null and alternative hypotheses; this approach is particularly appropriate for studies where you wish to keep collecting data until clear evidence for or against your hypothesis has accrued. Finally, specifically where you are using multiple variables to predict an outcome through model building, Akaike information criteria can take the place of the p-value, providing quantified information on what model is best. Hopefully, this quick-and-easy guide to some simple yet powerful statistical options will support biologists in adopting new approaches where they feel that the p-value alone is not doing their data justice.
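The second approach in the abstract above — reporting an effect size together with a quantified confidence interval — can be sketched in a few lines of plain Python. This is a normal-approximation sketch under our own assumptions (the function name and the z = 1.96 multiplier are illustrative choices, not from the source); for small samples a t-based interval would be more appropriate.

```python
import math
import statistics

def effect_size_and_ci(group_a, group_b, z=1.96):
    """Cohen's d and an approximate 95% CI for the mean difference.

    Illustrative sketch: uses a normal approximation (z = 1.96) for
    the interval, which is only reasonable for larger samples.
    """
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)

    # Pooled standard deviation, used as the standardizer for Cohen's d
    pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
    d = (mean_a - mean_b) / pooled_sd

    # Standard error of the difference in means (Welch-style, unpooled)
    se = math.sqrt(var_a / n_a + var_b / n_b)
    diff = mean_a - mean_b
    return d, (diff - z * se, diff + z * se)
```

Reporting the pair — "how much" (d) and "how accurate" (the interval) — carries strictly more information than a bare p-value for the same comparison.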
“…The cause of the replication crisis is multifaceted, and inadequate reporting practices are just a single factor among many contributing to the failure of self-correction in psychological science. A growing number of scholars are also raising concerns that a key theme in this crisis is an overreliance on the null hypothesis significance testing (NHST) approach when conducting research and interpreting results (e.g., Calin-Jageman & Cumming, 2019; Cumming, 2014; Peters & Crutzen, 2017). That is, researchers have traditionally prioritized all-or-none decisions (i.e., a finding is either statistically significant or non-significant) to the exclusion of information that describes the magnitude and precision of a finding, or whether that finding is likely to replicate.…”
Section: Criteria II: Appropriateness Of Statistical Inferences
confidence: 99%
“…One of the major criticisms of this approach is that it simply does not provide researchers with the full information they need to describe the relationship between an independent and dependent variable (Calin-Jageman & Cumming, 2019;Cumming, 2014;Cohen, 1990). NHST and p values only provide evidence of whether an effect is statistically significant, and of the direction of an effect.…”
Section: Criteria II: Appropriateness Of Statistical Inferences
confidence: 99%
“…Scholars also cite concerns that NHST and its associated p values are too often misconstrued or misused by its practitioners, thereby leading to claims that are not substantiated by the data (e.g., Gelman & Stern, 2006;Nickerson, 2000;Nieuwenhuis, Forstmann, & Wagenmakers, 2011;McShane, Gal, Gelman, Robert, & Tackett, 2019). As an alternative (or adjunct) to NHST, proponents of what has been called parameter estimation (Kelley & Rausch, 2006;Maxwell, Kelley, & Rausch, 2008;Woodson, 1969) or the New Statistics (Calin-Jageman & Cumming, 2019;Cumming, 2014) have argued that inference should focus on: (1) the magnitude of a finding through reporting of effect size, (2) the accuracy and precision of a finding through reporting of confidence intervals on an effect size, and (3) an explicit focus on aggregate evidence through meta-analysis of multiple studies.…”
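Point (3) above — an explicit focus on aggregate evidence across studies — is commonly operationalized by inverse-variance weighting. The fixed-effect sketch below is our illustrative assumption (function name, normal-approximation interval), not a procedure prescribed by the quoted authors:

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance (fixed-effect) pooling of study effect sizes.

    `effects` are per-study effect estimates and `ses` their standard
    errors. Returns the pooled effect and an approximate 95% CI.
    Illustrative sketch only: real meta-analyses must also consider
    between-study heterogeneity (random-effects models).
    """
    # Precision weights: more precise studies (smaller SE) count more
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

Because the pooled standard error shrinks as studies accumulate, the aggregate interval is narrower than any single study's — the core argument for meta-analytic thinking over one-off significance verdicts.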
Section: Criteria II: Appropriateness Of Statistical Inferences
confidence: 99%
“…Some scholars have advocated completely abandoning NHST and p values in favor of a parameter estimation approach to statistical inference (e.g., Calin-Jageman & Cumming, 2019;Cumming, 2014). We don't go so far.…”
Section: Criteria II: Appropriateness Of Statistical Inferences
While considerable progress has been made in organizational neuroscience over the past decade, we argue that critical evaluations of published empirical works are not being conducted carefully and consistently. In this extended commentary we take as an example Waldman and colleagues (2017): a major review work that evaluates the state-of-the-art of organizational neuroscience. In what should be an evaluation of the field’s empirical work, the authors uncritically summarize a series of studies that: (1) provide insufficient transparency to be clearly understood, evaluated, or replicated, and/or (2) misuse inferential tests in ways that lead to misleading conclusions, among other concerns. These concerns have been ignored across multiple major reviews and citing articles. We therefore provide a post-publication review (in two parts) of one-third of all studies evaluated in Waldman and colleagues’ major review work. In Part I, we systematically evaluate the field’s two seminal works with respect to their methods, analytic strategy, results, and interpretation of findings. In Part II, we provide focused reviews of secondary works that each center on a specific concern we suggest should be a point of discussion as the field moves forward. In doing so, we identify a series of practices that we believe will improve the state of the literature. These include: (1) evaluating the transparency and completeness of an empirical article before accepting its claims, (2) becoming familiar with common misuses or misconceptions of statistical testing, and (3) interpreting results with an explicit reference to effect size magnitude, precision, and accuracy, among other recommendations. We suggest that adopting these practices will motivate the development of a more replicable, reliable, and trustworthy field of organizational neuroscience moving forward.
Objective
A significant number of epileptic patients fail to respond to available anticonvulsive medications. To find new anticonvulsive medications, we evaluated FDA‐approved drugs not known to be anticonvulsants. Using zebrafish larvae as an initial model system, we found that the opioid antagonist naltrexone exhibited an anticonvulsant effect. We validated this effect in three other epilepsy models and present naltrexone as a promising anticonvulsive candidate.
Methods
Candidate anticonvulsant drugs, determined by our prior transcriptomics analysis of hippocampal tissue, were evaluated in a larval zebrafish model of human Dravet syndrome (scn1Lab mutants), in wild‐type zebrafish larvae treated with the pro‐convulsant drug pentylenetetrazole (PTZ), in wild‐type C57BL/6J acute brain slices exposed to PTZ, and in wild‐type mice treated with PTZ in vivo. Abnormal locomotion was determined behaviorally in zebrafish and mice and by field potential in neocortex layer IV/V and CA1 stratum pyramidale in the hippocampus.
Results
The opioid antagonist naltrexone decreased abnormal locomotion in the larval zebrafish model of human Dravet syndrome (scn1Lab mutants) and wild‐type larvae treated with the pro‐convulsant drug PTZ. Naltrexone also decreased seizure‐like events in acute brain slices of wild‐type mice, and the duration and number of seizures in adult mice injected with PTZ.
Significance
Our data reveal that naltrexone has anticonvulsive properties and is a candidate drug for seizure treatment.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations–citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.