Neural Networks
DOI: 10.1007/3-540-28847-3_7
Self-Organizing Maps and Unsupervised Classification

Cited by 24 publications (21 citation statements)
References 12 publications
“…The referent vectors Rvs are determined from a learning data set L, statistically representative of the analyzed data set, through an iterative learning process. The referent vectors are first initialized to evenly distributed values of Ra(k) over the range of the learning set and then computed by minimizing a nonlinear cost function, as in the k-means algorithm [Badran et al, 2005]. Then, for each R i of the learning set presented to the SOM, the Euclidean distances between R i and the referent vectors Rvs are computed and the closest referent vector Rv j is selected.…”
Section: The Classification Methodology
confidence: 99%
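The selection step this excerpt describes — computing the Euclidean distance from an input R i to every referent vector and keeping the closest one, Rv j — can be sketched in a few lines of numpy. The function and variable names below are illustrative, not taken from the cited work:

```python
import numpy as np

def best_matching_unit(r, referent_vectors):
    """Return the index of the referent vector closest (Euclidean) to input r."""
    distances = np.linalg.norm(referent_vectors - r, axis=1)
    return int(np.argmin(distances))

# Toy example: four referent vectors in 3-D and one input pattern.
rvs = np.array([[0.0, 0.0, 0.0],
                [1.0, 1.0, 1.0],
                [2.0, 2.0, 2.0],
                [3.0, 3.0, 3.0]])
r = np.array([1.1, 0.9, 1.0])
j = best_matching_unit(r, rvs)  # index of the closest referent vector Rv_j → 1
```

In SOM terminology this closest referent vector is the "best matching unit"; during training, it and its map neighbors are pulled toward the presented input.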
“…The topological map was trained according to the procedure described in Kohonen [2001]. The number of neurons was determined empirically from solutions of similar problems and then adjusted as described by Badran et al [2005]. The learning data set L is composed of the daily Ra(k) values for the year 2003, the year with the lowest cloud coverage of the 13 years of observations.…”
Section: The Classification Methodology
confidence: 99%
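The training procedure outlined in these excerpts — initialize the referent vectors evenly over the range of the learning set, then iteratively adjust them toward the presented samples — can be sketched as a minimal 1-D SOM in numpy. The decay schedules, neighborhood width, and all names below are illustrative assumptions, not the configuration used by the authors:

```python
import numpy as np

def train_som(data, n_units=10, n_epochs=20, lr0=0.5, sigma0=2.0, seed=0):
    """Minimal 1-D SOM: referent vectors start evenly spaced over the data
    range, then the best matching unit and its map neighbors are pulled
    toward each presented sample."""
    rng = np.random.default_rng(seed)
    lo, hi = data.min(axis=0), data.max(axis=0)
    # Even initialization across the range of the learning set.
    t = np.linspace(0.0, 1.0, n_units)[:, None]
    rvs = lo + t * (hi - lo)
    grid = np.arange(n_units)
    for epoch in range(n_epochs):
        lr = lr0 * (1.0 - epoch / n_epochs)                 # shrinking step size
        sigma = max(sigma0 * (1.0 - epoch / n_epochs), 0.5)  # shrinking neighborhood
        for r in rng.permutation(data):
            j = np.argmin(np.linalg.norm(rvs - r, axis=1))        # best matching unit
            h = np.exp(-((grid - j) ** 2) / (2.0 * sigma ** 2))   # neighborhood weight
            rvs += lr * h[:, None] * (r - rvs)
    return rvs

# Toy usage: two tight clusters; the referent vectors spread across them.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.1, (50, 2)),
                  rng.normal(5.0, 0.1, (50, 2))])
rvs = train_som(data)
```

The shrinking neighborhood is what produces the topological ordering Kohonen [2001] describes: early epochs move large stretches of the map together, late epochs fine-tune individual units.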
“…That is, unsupervised clustering is performed independently on each class. There are numerous clustering techniques, including K-means [13], fuzzy C-means [14], hierarchical clustering [15], and self-organizing maps [16]; for a detailed review, the reader is referred to [15]. Any of these clustering techniques can be applied in our approach.…”
Section: B. Defining the Sample Weight
confidence: 99%
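Of the techniques listed, K-means is the simplest to sketch. Below is a plain Lloyd's-algorithm implementation in numpy — an illustrative stand-in for any of the cited methods, not the code used in the cited work:

```python
import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Plain Lloyd's k-means: assign each point to its nearest centroid,
    then move each centroid to the mean of its assigned points."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iters):
        # Pairwise distances (n, k), then nearest-centroid assignment.
        labels = np.argmin(
            np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2), axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):
            break
        centroids = new
    return centroids, labels

# Toy usage: two well-separated blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.2, (30, 2)),
               rng.normal(4.0, 0.2, (30, 2))])
centroids, labels = kmeans(X, k=2)
```

Note the link back to the SOM excerpts above: a SOM with no neighborhood interaction reduces to exactly this update, which is why the cost function there is described as k-means-like.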
“…One way of dealing with imbalanced data sets is to simply assign the same number of clusters to each class. Many clustering techniques exist, including K-means [19], fuzzy C-means [20], hierarchical clustering [21], and self-organizing maps [22]; for a detailed review, the reader is referred to [21]. Although any of the aforementioned clustering techniques can be used, a suitable clustering is usually application-dependent and could be guided by the probability distribution of the input data.…”
Section: The New Supervised Learning Approach
confidence: 99%
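The equal-clusters idea for imbalanced data can be sketched by running a plain k-means independently on each class, so a minority class contributes the same number k of cluster centers as the majority class. All names and parameters below are illustrative assumptions, not from the cited work:

```python
import numpy as np

def cluster_per_class(X, y, k, n_iters=50, seed=0):
    """Cluster each class independently with the same number k of clusters,
    so minority and majority classes get equal representation.
    Each class must contain at least k samples."""
    rng = np.random.default_rng(seed)
    centers = {}
    for c in np.unique(y):
        Xc = X[y == c]
        C = Xc[rng.choice(len(Xc), size=k, replace=False)]
        for _ in range(n_iters):  # plain k-means restricted to this class
            lab = np.argmin(np.linalg.norm(Xc[:, None] - C[None], axis=2), axis=1)
            C = np.array([Xc[lab == j].mean(axis=0) if np.any(lab == j) else C[j]
                          for j in range(k)])
        centers[c] = C
    return centers

# Toy usage: a 40-sample majority class and a 12-sample minority class
# each yield exactly 3 cluster centers.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 0.3, (40, 2)),
               rng.normal(3.0, 0.3, (12, 2))])
y = np.array([0] * 40 + [1] * 12)
centers = cluster_per_class(X, y, k=3)
```

As the excerpt cautions, a fixed k per class is only one heuristic; the right granularity is application-dependent and could instead follow the input distribution.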