Misinterpretation and abuse of statistical tests, confidence intervals, and statistical power have been decried for decades, yet remain rampant. A key problem is that there are no interpretations of these concepts that are at once simple, intuitive, correct, and foolproof. Instead, correct use and interpretation of these statistics requires an attention to detail which seems to tax the patience of working scientists. This high cognitive demand has led to an epidemic of shortcut definitions and interpretations that are simply wrong, sometimes disastrously so—and yet these misinterpretations dominate much of the scientific literature. In light of this problem, we provide definitions and a discussion of basic statistics that are more general and critical than typically found in traditional introductory expositions. Our goal is to provide a resource for instructors, researchers, and consumers of statistics whose knowledge of statistical theory and technique may be limited but who wish to avoid and spot misinterpretations. We emphasize how violation of often unstated analysis protocols (such as selecting analyses for presentation based on the P values they produce) can lead to small P values even if the declared test hypothesis is correct, and can lead to large P values even if that hypothesis is incorrect. We then provide an explanatory list of 25 misinterpretations of P values, confidence intervals, and power. We conclude with guidelines for improving statistical interpretation and reporting.
You have just finished running an experiment. You analyze the results, and you find a significant effect. Success! But wait: how much information does your study really give you? How much should you trust your results? In this article, we show that when researchers use small samples and noisy measurements to study small effects, as they often do in psychology as well as other disciplines, a significant result is often surprisingly likely to be in the wrong direction and to greatly overestimate an effect.

In this article, we examine some critical issues related to power analysis and the interpretation of findings arising from studies with small sample sizes. We highlight the use of external information to inform estimates of true effect size and propose what we call a design analysis, a set of statistical calculations about what could happen under hypothetical replications of a study, that focuses on estimates and uncertainties rather than on statistical significance.

As a reminder, the power of a statistical test is the probability that it correctly rejects the null hypothesis. For any experimental design, the power of a study depends on sample size, measurement variance, the number of comparisons being performed, and the size of the effects being studied. In general, the larger the effect, the higher the power; thus, power calculations are performed conditionally on some assumption of the size of the effect. Power calculations also depend on other assumptions, most notably the size of measurement error, but these are typically not so difficult to assess with available data.

It is of course not at all new to recommend the use of statistical calculations on the basis of prior guesses of effect sizes to inform the design of studies. What is new about the present article is as follows:

1. We suggest that design calculations be performed after as well as before data collection and analysis.
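The dependence of power on effect size and sample size described above can be sketched numerically. The following is a minimal illustration, not from the paper itself: it approximates the power of a two-sided two-sample z-test under an assumed true mean difference, using only the normal CDF. The function name `power_two_sample` and its parameters are illustrative choices.

```python
from math import erf, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def power_two_sample(effect, sd, n_per_group, z_crit=1.96):
    """Approximate power of a two-sided two-sample z-test at alpha = 0.05.

    effect: assumed true difference in group means (the prior guess
            of effect size the abstract refers to).
    sd:     per-observation standard deviation (measurement noise).
    """
    se = sd * sqrt(2 / n_per_group)       # standard error of the difference
    d = effect / se                       # standardized effect
    # Probability the test statistic falls beyond either critical value.
    return norm_cdf(d - z_crit) + norm_cdf(-d - z_crit)
```

As expected, power rises with sample size and with effect size: for example, `power_two_sample(0.5, 1, 64)` is roughly 0.8, while halving the sample size drops it well below that.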
Statistical power analysis provides the conventional approach to assessing error rates when designing a research study. However, power analysis is flawed in that it places a narrow emphasis on statistical significance as the primary focus of study design. In noisy, small-sample settings, statistically significant results can often be misleading. To help researchers address this problem in the context of their own studies, we recommend design calculations in which (a) the probability of an estimate being in the wrong direction (Type S [sign] error) and (b) the factor by which the magnitude of an effect might be overestimated (Type M [magnitude] error, or exaggeration ratio) are estimated. We illustrate with examples from recent published research and discuss the largest challenge in a design calculation: coming up with reasonable estimates of plausible effect sizes based on external information.
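The Type S and Type M quantities defined in this abstract can be estimated by simulation. Below is a minimal sketch, assuming the estimate in a hypothetical replication is normally distributed around the assumed true effect with a known standard error; the function name `design_analysis` and its defaults are illustrative, not from the paper.

```python
import numpy as np

def design_analysis(true_effect, se, n_sims=100_000, seed=0):
    """Monte Carlo sketch of a design analysis (Type S / Type M errors).

    Simulates replicated estimates ~ Normal(true_effect, se) and returns:
      power        - probability of a statistically significant result
      type_s       - P(estimate has the wrong sign | significant)
      exaggeration - mean of |estimate| / |true_effect| among significant
                     results (the Type M error, or exaggeration ratio)
    """
    rng = np.random.default_rng(seed)
    z_crit = 1.96  # two-sided test at alpha = 0.05
    est = rng.normal(true_effect, se, n_sims)
    sig = np.abs(est / se) > z_crit                      # significant replications
    power = sig.mean()
    type_s = (np.sign(est[sig]) != np.sign(true_effect)).mean()
    exaggeration = (np.abs(est[sig]) / abs(true_effect)).mean()
    return power, type_s, exaggeration
```

Running this with a small assumed effect relative to the standard error (say `design_analysis(0.1, 1.0)`) reproduces the abstract's warning: power is low, a substantial fraction of significant estimates have the wrong sign, and the significant estimates overstate the true effect many times over.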
Airway inflammation is an important component of cystic fibrosis (CF) lung disease. To determine whether this begins early in the illness, before the onset of infection, we examined bronchoalveolar lavage (BAL) fluid from 46 newly diagnosed infants with CF under the age of 6 mo identified by a neonatal screening program. These infants were divided into three groups: 10 had not experienced respiratory symptoms or received antibiotics, and pathogens were absent in their BAL fluid; 18 had clear evidence of lower respiratory viral or bacterial (≥ 10⁵ CFU/ml) infection; and the remaining 18 had either experienced respiratory symptoms, received antibiotics, or had < 10⁵ CFU/ml of respiratory pathogens. Their BAL cytology, interleukin-8, and elastolytic activity were compared with those from 13 control subjects. In a longitudinal study to assess whether inflammation develops or persists in the absence of infection, the results of 56 paired annual BAL specimens from 44 CF infants were grouped according to whether they showed absence, development, clearance, or persistence of infection. In newly diagnosed infants with CF, those without infection had BAL profiles comparable with control subjects, while those with a lower respiratory infection had evidence of airway inflammation. In older children, the development and persistence of infection was accompanied by increased inflammatory markers, whereas these were decreased in the absence, or with the clearance, of infection. We conclude that airway inflammation follows respiratory infection and, in young children, improves when pathogens are eradicated from the airways.
Surfactant delivery via a narrow-bore tracheal catheter is feasible and potentially effective, and deserves further investigation in clinical trials.