2002
DOI: 10.1109/72.977271
μARTMAP: use of mutual information for category reduction in Fuzzy ARTMAP

Abstract: A new architecture called μARTMAP is proposed to address the category proliferation problem present in Fuzzy ARTMAP. Under a probabilistic setting, it seeks a partition of the input space that optimizes its mutual information with the output space while allowing some training error, thus avoiding overfitting. It implements an inter-ART reset mechanism that handles exceptions correctly, thus using few categories, especially in high-dimensionality problems. It compares favorably to Fuzzy ARTMAP and Boosted…
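The mutual-information criterion mentioned in the abstract can be illustrated with a minimal sketch. This is not the paper's implementation; the function name and the count tables below are invented for illustration. The idea is that, given a joint count table of (input category, output class) pairs, a partition whose categories align with the output classes carries high mutual information, while a partition independent of the classes carries none.

```python
import numpy as np

def mutual_information(joint_counts):
    """Mutual information I(X; Y) in bits from a joint count table.

    Rows index input-space categories, columns index output classes.
    """
    p = joint_counts / joint_counts.sum()     # joint distribution p(x, y)
    px = p.sum(axis=1, keepdims=True)         # marginal p(x), column vector
    py = p.sum(axis=0, keepdims=True)         # marginal p(y), row vector
    mask = p > 0                              # skip zero cells to avoid log(0)
    return float(np.sum(p[mask] * np.log2(p[mask] / (px @ py)[mask])))

# Two categories that each capture one class: maximally informative (1 bit).
aligned = np.array([[50, 0], [0, 50]], dtype=float)
# Two categories split evenly across both classes: zero information.
independent = np.array([[25, 25], [25, 25]], dtype=float)
print(mutual_information(aligned))      # 1.0
print(mutual_information(independent))  # 0.0
```

Under this criterion, merging or pruning categories that do not reduce mutual information with the labels is what keeps the category count low.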

Cited by 57 publications (24 citation statements) · References 20 publications
“…Data sets T3-T7 were used in [20] as benchmarks for FasArt, FAM, and DAM. Chess is another benchmark data set frequently used in the literature [25], [31]. The number of patterns for each prediction is proportional to its area for data sets CIS, T5, and T6 (Table II), while predictions in the data sets chess, T3, and T4 have the same number of patterns.…”
Section: A. The 2-D Data Sets
confidence: 99%
“…A number of authors have tried to address the category proliferation/over-training problem in Fuzzy ARTMAP. Marriott and Harrison (Marriott & Harrison, 1995) eliminate the match-tracking mechanism of Fuzzy ARTMAP when dealing with noisy data. Charalampidis et al. (Charalampidis, Kasparis, & Georgiopoulos, 2001) modify the Fuzzy ARTMAP equations to compensate for noisy data. Verzi et al. (Verzi, Heileman, Georgiopoulos, & Healy, 2001), Anagnostopoulos et al. (Anagnostopoulos, Bharadwaj, Georgiopoulos, Verzi, & Heileman, 2003), and Gomez-Sanchez et al. (Gomez-Sanchez, Dimitriadis, Cano-Izquierdo, & Lopez-Coronado, 2002) introduce different ways of allowing Fuzzy ARTMAP categories to encode patterns that are not necessarily mapped to the same label, provided that the percentage of patterns corresponding to the majority label exceeds a certain threshold. Koufakou et al. (Koufakou, Georgiopoulos, Anagnostopoulos, & Kasparis, 2001) employ cross-validation to avoid the overtraining/category proliferation problem. Finally, Carpenter and Milenova (Carpenter & Milenova, 1998), Williamson (Williamson, 1997), and Parrado-Hernandez et al. (Parrado-Hernandez, Gomez-Sanchez, & Dimitriadis, 2003) change the ART structure from a winner-take-all to a distributed version and simultaneously employ slow learning, with the intent of creating fewer ART categories and reducing the detrimental effects of noisy patterns.…”
Section: Introduction
confidence: 99%
“…The parameter h_max controls the impurity of each node (category), defined in (4), according to (3). A node may be both very large and very pure (meaning that most of the patterns that select it have the same class label).…”
Section: B. Parameter h_max
confidence: 99%
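Equations (3) and (4) are not reproduced in this excerpt, so the following is only a hedged sketch of the idea it describes: treating a category's impurity as the entropy of the class labels of the patterns it encodes, and comparing that against a threshold h_max. The function name and the value 0.2 are hypothetical choices for illustration, not the paper's.

```python
import numpy as np

def category_impurity(class_counts):
    """Entropy (bits) of the class-label distribution within one category.

    A pure category (all patterns share one label) has impurity 0;
    a 50/50 two-class mix has the maximum impurity of 1 bit.
    """
    counts = np.asarray(class_counts, dtype=float)
    p = counts / counts.sum()
    p = p[p > 0]                    # skip zero-count labels to avoid log(0)
    return float(-np.sum(p * np.log2(p)))

h_max = 0.2  # hypothetical impurity threshold

# A large but nearly pure category (98 vs 2 patterns) passes the test:
print(category_impurity([98, 2]) <= h_max)   # True  (entropy ≈ 0.141)
# A small but mixed category (6 vs 4 patterns) fails it:
print(category_impurity([6, 4]) <= h_max)    # False (entropy ≈ 0.971)
```

This matches the excerpt's point that size and purity are independent: the impurity test constrains only the label mix, so a very large category can still be acceptable as long as it remains nearly pure.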
“…In this paper we focus our attention on one Fuzzy ARTMAP modification, called Safe µARTMAP [3,4], which addresses this category proliferation problem. We analyze each parameter of Safe µARTMAP and provide representative values or a reasonable range for each parameter (see Chapter III).…”
Section: Introduction
confidence: 99%