We argue that making accept/reject decisions on scientific hypotheses, including a recent call for changing the canonical alpha level for statistical significance from .05 to .005, is deleterious to the discovery of new findings and the progress of science. Given that both blanket and variable alpha levels are problematic, it is sensible to dispense with significance testing altogether. There are alternatives that address study design and sample size much more directly than significance testing does, but none of these statistical tools should be taken as a new magic method giving clear-cut mechanical answers. Inference should not be based on single studies at all, but on cumulative evidence from multiple independent studies. When evaluating the strength of the evidence, we should consider, for example, auxiliary assumptions, the strength of the experimental design, and implications for applications. To boil all this down to a binary decision based on a p-value threshold of .05, .01, .005, or anything else, is not acceptable.
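To illustrate the contrast the abstract draws between thresholding individual p-values and accumulating evidence across independent studies, the following is a minimal sketch using Stouffer's z method on simulated data. All numbers (the effect size, sample sizes, and number of studies) are hypothetical and chosen only for illustration; this is not a method proposed by the article.

```python
import math
import random

def p_from_z(z):
    # Two-sided p-value for a z statistic (known-variance setting)
    return math.erfc(abs(z) / math.sqrt(2))

random.seed(1)
TRUE_EFFECT = 0.3   # hypothetical standardized effect
N_PER_STUDY = 40    # hypothetical per-study sample size

# Simulate five small, independent studies of the same effect.
z_stats = []
for _ in range(5):
    sample = [random.gauss(TRUE_EFFECT, 1.0) for _ in range(N_PER_STUDY)]
    mean = sum(sample) / N_PER_STUDY
    se = 1.0 / math.sqrt(N_PER_STUDY)  # sigma = 1 assumed known, for simplicity
    z_stats.append(mean / se)

# Judged study by study, p-values scatter around any fixed threshold.
individual_ps = [p_from_z(z) for z in z_stats]

# Stouffer's method: pool the z statistics across independent studies.
z_combined = sum(z_stats) / math.sqrt(len(z_stats))
p_combined = p_from_z(z_combined)
```

The point of the sketch is not the particular combining rule: individual studies of a modest effect will land on either side of .05 (or .005) more or less by chance, whereas the pooled evidence is far more stable, which is why inference should rest on cumulative evidence rather than on a per-study binary decision.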