2017
DOI: 10.3758/s13428-017-0918-2
Effect of variance ratio on ANOVA robustness: Might 1.5 be the limit?

Abstract: Inconsistencies in the research findings on F-test robustness to variance heterogeneity could be related to the lack of a standard criterion to assess robustness or to the different measures used to quantify heterogeneity. In the present paper we use Monte Carlo simulation to systematically examine the Type I error rate of the F-test under heterogeneity. One-way, balanced, and unbalanced designs with monotonic patterns of variance were considered. Variance ratio (VR) was used as a measure of heterogeneity (1.5, 1.…
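The simulation approach described in the abstract can be sketched in a few lines. This is an illustrative Monte Carlo example, not the authors' code: the number of groups, the per-group sample size, the replication count, and the specific variance pattern are assumptions made for the sketch.

```python
# Illustrative Monte Carlo check of F-test Type I error under variance
# heterogeneity. Parameters (3 balanced groups, n = 30 per group,
# alpha = .05, 2000 replications) are assumed for this sketch.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(42)

def type1_rate(sds, n=30, alpha=0.05, reps=2000):
    """Empirical rejection rate when all population means are equal
    (so every rejection is a Type I error) and group SDs follow `sds`."""
    rejections = 0
    for _ in range(reps):
        groups = [rng.normal(0.0, sd, n) for sd in sds]
        _, p = f_oneway(*groups)
        rejections += p < alpha
    return rejections / reps

# Monotonic variance pattern with variance ratio VR = 1.5
# (variances 1.0, 1.25, 1.5 -> largest / smallest = 1.5).
rate = type1_rate([1.0, np.sqrt(1.25), np.sqrt(1.5)])
print(f"empirical Type I error at VR = 1.5: {rate:.3f}")
```

In a balanced design like this one, the empirical rate should stay close to the nominal .05; the paper's question is how far VR can grow, and how unequal the group sizes can be, before that stops holding.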

Cited by 145 publications (100 citation statements)
References 69 publications
“…Moreover, sample size was always equal. Extreme heterogeneity is more problematic in the case of unequal group sizes, especially when the smallest group exhibits the largest variance [38]. In such cases, a variance-stabilizing transformation, such as a log-transformation of the response variable, is advisable.…”
Section: Discussion
confidence: 99%
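The log-transformation recommendation above can be illustrated with a small sketch (the data here are hypothetical): when the response is roughly lognormal, group variances that differ strongly on the raw scale become nearly equal after a log-transformation.

```python
# Hypothetical lognormal data: equal spread on the log scale,
# very different spread on the raw scale.
import numpy as np

rng = np.random.default_rng(0)
g1 = rng.lognormal(mean=1.0, sigma=0.5, size=1000)
g2 = rng.lognormal(mean=3.0, sigma=0.5, size=1000)

def variance_ratio(a, b):
    """Largest group variance divided by the smallest."""
    va, vb = np.var(a, ddof=1), np.var(b, ddof=1)
    return max(va, vb) / min(va, vb)

print(f"raw-scale VR: {variance_ratio(g1, g2):.1f}")  # far above 1.5
print(f"log-scale VR: {variance_ratio(np.log(g1), np.log(g2)):.2f}")  # close to 1
```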
“…In cases where the data were not normally distributed, we additionally tested our hypotheses with Mann-Whitney U-tests as a nonparametric alternative and reported the median as the measure of central tendency. Since simulation studies have shown that ANOVA is robust to violations of the normality assumption (Schmider et al., 2010; Blanca et al., 2017), we also report contrast analyses for our specific hypotheses, avoiding multiple comparisons with lower power (Furr, 2008).…”
Section: Discussion
confidence: 99%
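As a minimal sketch of that nonparametric fallback (the samples and group sizes here are invented for illustration):

```python
# Hypothetical skewed (exponential) samples standing in for
# non-normally distributed outcome data.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(1)
a = rng.exponential(scale=1.0, size=40)
b = rng.exponential(scale=2.0, size=40)

# Nonparametric comparison; report medians as the central-tendency measure.
u_stat, p_value = mannwhitneyu(a, b, alternative="two-sided")
print(f"median(a) = {np.median(a):.2f}, median(b) = {np.median(b):.2f}")
print(f"Mann-Whitney U = {u_stat:.0f}, p = {p_value:.3f}")
```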
“…The general purpose of text classification is to automatically assign text documents to one or more predefined categories. Frequently used examples are the support vector machine (SVM), naïve Bayes, k-nearest neighbor (KNN), or boosting trees (Bishop 2006;Kowsari et al 2019). Furthermore, recent studies increasingly focus on neural networks that consist of multiple, hierarchically organized processing layers, which is often referred to as deep learning (DL) (LeCun et al 2015).…”
Section: Text Classification and Regression
confidence: 99%
“…Such algorithms can support matching problems by extracting regularities between a target variable with predefined categories and high-dimensional input data, which is typically the case with natural language data (Aggarwal and Zhai 2012). The advantage is that the model building happens automatically by iteratively learning from labelled observations, which allows the TbIAS to detect complex patterns and relationships without being explicitly programmed (Bishop 2006). As generally a broad range of text classifiers is available with different learning capabilities (cf.…”
Section: Df1
confidence: 99%