We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level from 0.05 to 0.005, is deleterious to the discovery of new findings and the progress of science. Given that blanket and variable alpha levels are both problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of 0.05, 0.01, 0.005, or any other value is not acceptable.
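The abstract's point about threshold-dependent decisions can be illustrated with a minimal sketch. The numbers below are hypothetical, not from the paper: the same evidence (a standardized effect of z = 2.2 under a normal approximation) counts as a "discovery" under one alpha convention and not under the other.

```python
import math

# Hypothetical study result: an observed standardized effect of z = 2.2
z = 2.2

# Two-sided p-value under a normal approximation: p = erfc(z / sqrt(2))
p = math.erfc(z / math.sqrt(2))  # roughly 0.028

print(p < 0.05)   # True  -> "significant" at alpha = .05
print(p < 0.005)  # False -> "not significant" at alpha = .005
```

Nothing about the evidence changes between the two lines; only the binary verdict does, which is precisely the arbitrariness the authors object to.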
Objective: To determine the concentration of stool short-chain fatty acids (SCFAs) in critically ill patients with sepsis and to compare the results between the critically ill patients and the control group. Methods: This descriptive, multicenter, observational study was conducted in five health institutions. Over a 6-month study period, critically ill patients with sepsis who were admitted to the intensive care unit (ICU) and met the inclusion criteria were enrolled, and a control, matched by age and sex, was recruited for each patient. A spontaneous stool sample was collected from each participant, and a gas chromatograph coupled to a mass spectrometer (Agilent 7890/MSD 5975C) was used to measure the concentrations of SCFAs. Results: The final sample included 44 patients and 45 controls. There were no differences in the age and sex distributions between the groups (p > 0.05). According to body mass index (BMI), undernutrition was more prevalent among critically ill patients, whereas BMI in control subjects was most frequently classified as overweight (p = 0.024). Propionic acid, acetic acid, butyric acid, and isobutyric acid concentrations were significantly lower in the critically ill patient group than in the control group (p < 0.001). No association with outcome variables (complications, ICU stay, and discharge condition) was found in the patients, and patients diagnosed with infection on ICU admission showed significantly lower butyric and isobutyric acid concentrations than those admitted under other diagnostic criteria (p < 0.05). Conclusions: The results confirm significantly lower concentrations of stool SCFAs in critically ill patients with sepsis than in control subjects. Given their role in intestinal integrity, barrier function, and anti-inflammatory effects, maintaining SCFA concentrations may be an important consideration in ICU care protocols for critically ill patients.
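The group comparison reported in this abstract is, in outline, a two-sample test on concentration measurements. A minimal sketch with purely illustrative numbers (the values and the choice of a Mann-Whitney U test are assumptions for demonstration; the abstract does not specify which test the study used):

```python
from scipy.stats import mannwhitneyu

# Hypothetical stool SCFA concentrations (illustrative values only)
patients = [10.1, 12.3, 11.0, 9.4, 13.2]   # critically ill group
controls = [20.5, 22.1, 19.8, 21.0, 23.4]  # matched control group

# Two-sided nonparametric comparison of the two groups
stat, p = mannwhitneyu(patients, controls, alternative="two-sided")
print(p < 0.05)  # True for these well-separated illustrative samples
```

A nonparametric test is a common choice for concentration data, which is often skewed; with matched controls, a paired test on the matched pairs would be another reasonable design.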
Elicitation methods aim to construct participants' subjective distributions over a parameter of interest. In most elicitation studies this parameter is not known in advance, which hinders objective comparison between elicitation methods. In two experiments, participants were first presented with a fixed random sequence of images and numbers, and their subjective distributions of the percentage of one of those numbers were subsequently elicited. Importantly, the true percentage was set in advance. The first experiment tested whether receiving instructions about the elicitation method helps participants estimate the true value more accurately than receiving no instructions, and whether accuracy depends on participants' numerical skills. The second experiment compared the elicitation method used in the first experiment with a variant of a graphical elicitation method. The results indicate that (i) receiving instructions about the elicitation method does help produce estimates closer to the true percentage value, (ii) the level of numerical skill does not affect the accuracy of the estimates (Experiment 1), and (iii) although the average estimates of the betting and graphical methods are not significantly different, the betting method yields more precise estimates than the graphical method (Experiment 2). Both studies employed statistical procedures (functional data analysis and a novel clustering technique) not previously considered in research on the elicitation of subjective distributions. The implications of these results are discussed in relation to a recent key study.
<p>Birnbaum and Saunders (1969b) introduced a probability distribution to model lifetime data for materials subjected to stress. Based on this distribution, we propose a generalization of the Birnbaum-Saunders distribution, referred to as the proportional hazard Birnbaum-Saunders distribution, which includes an additional parameter that provides more flexibility in skewness and kurtosis than existing models. We derive the main properties of the model and discuss maximum likelihood estimation of its parameters. As a natural next step, we define the log-linear proportional hazard Birnbaum-Saunders regression model. An empirical application to a real data set illustrates the usefulness of the proposed model. The results show that the proportional hazard Birnbaum-Saunders model can be used quite effectively to analyze survival data, reliability problems, and fatigue life studies.</p>
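A sketch of the construction this abstract describes, under one standard reading: the baseline Birnbaum-Saunders CDF is F(t) = Phi((sqrt(t/beta) - sqrt(beta/t)) / alpha), and a "proportional hazard" family is commonly built by scaling the baseline hazard by a factor lam, which gives survival S(t)^lam, i.e. CDF 1 - (1 - F(t))^lam. The factor lam and this particular PH construction are assumptions here; the paper's exact parameterization may differ.

```python
import math

def bs_cdf(t, alpha, beta):
    """Baseline Birnbaum-Saunders CDF: Phi((sqrt(t/b) - sqrt(b/t)) / a)."""
    z = (math.sqrt(t / beta) - math.sqrt(beta / t)) / alpha
    # Standard normal CDF via the complementary error function
    return 0.5 * math.erfc(-z / math.sqrt(2))

def phbs_cdf(t, alpha, beta, lam):
    """Proportional hazard BS CDF: 1 - (1 - F(t))**lam (assumed form)."""
    return 1.0 - (1.0 - bs_cdf(t, alpha, beta)) ** lam

# lam = 1 recovers the baseline Birnbaum-Saunders distribution
print(phbs_cdf(2.0, 0.5, 1.0, 1.0) == bs_cdf(2.0, 0.5, 1.0))  # True

# The median of the baseline distribution is at t = beta
print(bs_cdf(1.0, 0.5, 1.0))  # 0.5
```

With lam > 1 the hazard is uniformly higher, so the PH variant puts more mass at early lifetimes; lam < 1 does the opposite, which is the extra flexibility in shape the abstract refers to.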