A generally robust approach for testing hypotheses and setting confidence intervals for effect sizes (2008)
DOI: 10.1037/1082-989x.13.2.110

Abstract: Standard least squares analysis of variance methods suffer from poor power under arbitrarily small departures from normality and fail to control the probability of a Type I error when standard assumptions are violated. This article describes a framework for robust estimation and testing that uses trimmed means with an approximate degrees of freedom heteroscedastic statistic for independent and correlated groups designs in order to achieve robustness to the biasing effects of nonnormality and variance heterogeneity. …
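
The core procedure the abstract describes, a test on trimmed means paired with a Welch-type approximate degrees of freedom statistic, can be sketched for the simplest case of two independent groups. The sketch below is an illustrative Python implementation of a Yuen-type trimmed-mean test, not the authors' software; the 20% trim proportion and the function names are assumptions chosen for the example.

```python
# Illustrative sketch (assumptions: 20% symmetric trimming, two independent
# groups). Follows the Yuen-type construction: trimmed means, Winsorized
# variances, and Welch-Satterthwaite-style approximate degrees of freedom.
import numpy as np
from scipy import stats


def trimmed_mean_test(x, y, trim=0.20):
    """Test equality of two population trimmed means (Yuen-type)."""
    def pieces(a):
        a = np.sort(np.asarray(a, dtype=float))
        n = a.size
        g = int(np.floor(trim * n))          # observations cut from each tail
        h = n - 2 * g                        # effective (trimmed) sample size
        tmean = a[g:n - g].mean()            # trimmed mean
        w = a.copy()                         # Winsorize: pull tails inward
        w[:g] = a[g]
        w[n - g:] = a[n - g - 1]
        swin2 = w.var(ddof=1)                # Winsorized variance
        d = (n - 1) * swin2 / (h * (h - 1))  # squared standard-error piece
        return tmean, d, h

    t1, d1, h1 = pieces(x)
    t2, d2, h2 = pieces(y)
    t_stat = (t1 - t2) / np.sqrt(d1 + d2)
    # Approximate (heteroscedastic) degrees of freedom
    df = (d1 + d2) ** 2 / (d1 ** 2 / (h1 - 1) + d2 ** 2 / (h2 - 1))
    p = 2 * stats.t.sf(abs(t_stat), df)
    return t_stat, df, p


# Example: heavy-tailed samples with unequal spread and unequal sizes
rng = np.random.default_rng(0)
x = rng.standard_t(df=3, size=30)
y = rng.standard_t(df=3, size=45) * 2 + 0.5
print(trimmed_mean_test(x, y))
```

With symmetric trimming the null hypothesis concerns population trimmed means rather than ordinary means, which is the trade-off several of the citing excerpts below discuss.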

Cited by 103 publications (88 citation statements); references 77 publications.
“…Unlike the two previous procedures, the standardizer depends not only on the population variances, but also on the group size allocation ratios. It is noted in Keselman et al. (2008) that the particular formulation of Kulinskaya and Staudte (2007) raises a practical problem about its general use as an effect size measure. Specifically, the concern of Keselman et al. is about its dependence upon sample sizes.…”
“…Second, if the distribution from which the data are sampled is heavy-tailed, the adverse effects of nonnormality can probably be overcome by substituting robust measures of location (e.g., trimmed mean) and scale (e.g., Winsorized covariance matrices) for the usual mean and covariance matrices. According to Keselman, Algina, Lix, Wilcox, and Deering (2008), one argument for the trimmed mean is that it can have a substantial advantage in terms of accuracy of estimation when sampling from heavy-tailed symmetric distributions without altering the hypothesis tested, because it represents the center of the data. In particular, Wilcox (2005) has shown that with modest sample sizes, the 20% trimmed mean performs well in many situations because it is able to handle a high proportion of outliers.…”
Section: Discussion and Recommendations
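
The accuracy claim in this excerpt can be illustrated with a small simulation. The sketch below compares the sampling variability of the ordinary mean and the 20% trimmed mean under a symmetric heavy-tailed population; the t(2) distribution, sample size, and replication count are arbitrary choices for illustration, not a reproduction of any analysis in the cited papers.

```python
# Illustrative simulation: under a symmetric heavy-tailed distribution the
# 20% trimmed mean typically varies far less across samples than the mean.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n, reps, trim = 25, 5000, 0.20

means, tmeans = [], []
for _ in range(reps):
    sample = rng.standard_t(df=2, size=n)      # symmetric, heavy-tailed
    means.append(sample.mean())
    tmeans.append(stats.trim_mean(sample, trim))

print("SD of sample means:      ", np.std(means))
print("SD of 20% trimmed means: ", np.std(tmeans))
```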
“…A number of choices are available (45,46). Option two has been shown to be very effective in withstanding the effects of non-normality (and variance heterogeneity) when testing hypotheses regarding treatment group equality (6,7,8,11,12,13,46). Researchers, however, must be comfortable in testing the equality of population trimmed means; many consider this a reasonable approach because the trimmed mean is most representative of the typical score when data are non-normal.…”
“…If the results of the preliminary test indicate that the empirical data in each treatment group conform to a theoretical normal distribution, researchers can go on to test for mean equality with the t- or F-test (assuming that the other assumptions are examined and believed to be true as well). However, if the result of the test for normality indicates the empirical data are not normally distributed within each treatment group, researchers must take remedial action [e.g., see Keselman, Algina, Lix, et al. (11,12); Wilcox & Keselman (13); McCullagh & Nelder (14)]. Researchers can attempt to assess whether the data from their experiments conform to the validity requirements associated with classical test statistics (e.g., normality); see, for example, Muller and Fetterman (15).…”
Section: Introduction
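
The screen-then-decide workflow this excerpt describes can be sketched as below. The Shapiro-Wilk screen, the 5% alpha level, and the fallback to SciPy's trimmed (Yuen-type) ttest_ind (the trim argument requires SciPy 1.7 or later) are illustrative choices, not a prescription from the cited sources.

```python
# Illustrative workflow: screen each group for nonnormality; if the screen
# rejects, fall back to a Welch-type test on 20% trimmed means rather than
# the ordinary t-test on means.
import numpy as np
from scipy import stats


def compare_groups(x, y, alpha=0.05, trim=0.20):
    normal = all(stats.shapiro(g).pvalue > alpha for g in (x, y))
    if normal:
        # classical Welch t-test on means
        res = stats.ttest_ind(x, y, equal_var=False)
        label = "Welch t-test on means"
    else:
        # remedial action: Welch-type test on trimmed means (Yuen-type)
        res = stats.ttest_ind(x, y, equal_var=False, trim=trim)
        label = f"Welch-type test on {int(trim * 100)}% trimmed means"
    return label, res.statistic, res.pvalue


rng = np.random.default_rng(1)
x = rng.lognormal(size=40)        # skewed data: the screen should reject
y = rng.lognormal(size=40) * 1.3
print(compare_groups(x, y))
```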