Clusters and factors: neural algorithms for a novel representation of huge and highly multidimensional data sets (1994)
DOI: 10.1007/978-3-642-51175-2_27

Cited by 9 publications (7 citation statements) · References 5 publications
“…Principal-component analysis and correspondence analysis were used to visualize the relationships between the isolates. Classification of the isolates into homogeneous metabolic groups was achieved with the KMACL4 software, a new unsupervised classification method for building overlapping clusters of bacteria and determining the main characteristics of these classes (2, 27). This software was designed to combine the benefits of both cluster and factor representations.…”
Section: Methods (mentioning; confidence: 99%)
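KMACL4 itself is not documented here, but the idea of overlapping clusters that the statement describes can be illustrated in a few lines: instead of a hard partition, each item joins every cluster whose axis it resembles closely enough, so clusters may share members and each axis doubles as a factor summarising its class. The Python sketch below is a hypothetical illustration of that idea under these assumptions, not the KMACL4 code; the function name and threshold are invented for the example.

```python
import numpy as np

def overlapping_assign(X, axes, threshold=0.5):
    """Assign each row of X to every cluster axis whose cosine
    similarity exceeds `threshold`, so clusters may overlap.
    Hypothetical sketch, not the KMACL4 implementation."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)        # unit rows
    An = axes / np.linalg.norm(axes, axis=1, keepdims=True)  # unit axes
    sims = Xn @ An.T               # cosines, shape (n_items, n_clusters)
    return [np.flatnonzero(row >= threshold) for row in sims]
```

An item similar to two axes belongs to both classes, and each axis can be read like a factor whose largest components name the class's characteristic variables.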
“…The optimisation problem is NP-hard: these methods can only be made to converge towards a local optimum that depends on their initialisation (for example, initial centre positions generated randomly or picked from the data) or on the order of the data. This disqualified them, because we set the proviso that the results should be independent of the initial conditions. Our axial K-means method (Lelu, 1994) belongs to this family; for a given number of clusters, the local optima mostly reveal the same main clusters, which are often trivial, but can also make the most interesting medium- or small-sized clusters appear, disappear, amalgamate or split. Many incremental variants of these methods have been proposed (Binztock and Gallinari, 2002; Chen et al., 2003), and a partial review can be found in Lin et al. (2004).…”
Section: Adapting Methods with Mobile Centres to Incrementality (mentioning; confidence: 99%)
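The seed-dependence this statement objects to is easy to reproduce. Here is a minimal scikit-learn sketch (toy data and parameters chosen arbitrarily for illustration, not the authors' experiment): two runs that differ only in their random initial centres converge to different local optima, visible as different final inertias.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))   # toy data: 300 points in 10 dimensions

# Identical data and k, different random initial centres: the final
# inertia (and hence the partition) differs between runs, i.e. each
# run converges to a different local optimum.
for seed in (1, 2):
    km = KMeans(n_clusters=8, init="random", n_init=1,
                random_state=seed).fit(X)
    print(f"seed={seed}  inertia={km.inertia_:.2f}")
```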
“…To measure the links, we chose the cosine in the distributional space (Lelu, 2003), a measure linked to the Hellinger distance (Domengès and Volle, 1979). Worthwhile characteristics follow, particularly:…”
Section: KNNs Graph (mentioning; confidence: 99%)
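The link between a distributional cosine and the Hellinger distance can be made concrete: for two unit-mass profiles p and q, the cosine of their square-root-transformed vectors equals 1 − H(p, q)², so ranking neighbours by one is equivalent to ranking by the other. The exact measure of Lelu (2003) may differ in detail; the sketch below only checks the standard square-root form numerically.

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete probability vectors."""
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

p = np.array([0.5, 0.3, 0.2])
q = np.array([0.4, 0.4, 0.2])

# For unit-mass profiles, cos(sqrt(p), sqrt(q)) == 1 - H(p, q)**2,
# so the two measures induce the same neighbour ranking.
print(cosine(np.sqrt(p), np.sqrt(q)), 1 - hellinger(p, q) ** 2)
```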
“…The method adopted here is an axial k‑means technique (AKM or KMA), developed by one of us and implemented in the Neuronav software (Diatopie). It is a variant of the k‑means partitioning methods (Lloyd, 1982, building on an unpublished Bell Labs paper by the same author, 1957; MacQueen, 1967) that brings significant improvements (Lelu, 1994, 2008), giving rise to concept vectors and to orthogonal and oblique projections of the documents on each cluster axis. It is related to the spherical k‑means family (Domengès & Volle, 1979), with a robust theoretical basis.…”
Section: Methods (mentioning; confidence: 99%)
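Neuronav is not public, but the spherical k-means family in which the statement places AKM has a compact core: documents are unit-normalised, each is assigned to the cluster axis with which its cosine is highest, and each axis is re-estimated as the renormalised sum of its members; projecting a document on its cluster axis then gives its coordinate along that "concept vector". The sketch below shows this generic scheme only; it is not the AKM/Neuronav implementation, whose axis update may differ (e.g. a principal-axis update).

```python
import numpy as np

def spherical_kmeans(X, k, iters=20, seed=0):
    """Generic spherical k-means sketch (not the AKM/Neuronav code):
    unit document vectors are grouped by maximal cosine with a unit
    cluster axis; each axis is the renormalised sum of its members."""
    rng = np.random.default_rng(seed)
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    axes = Xn[rng.choice(len(Xn), size=k, replace=False)].copy()
    for _ in range(iters):
        labels = np.argmax(Xn @ axes.T, axis=1)    # cosine assignment
        for j in range(k):
            members = Xn[labels == j]
            if len(members):                       # keep old axis if empty
                s = members.sum(axis=0)
                axes[j] = s / np.linalg.norm(s)
    # Projection of each document on its own cluster axis.
    proj = np.take_along_axis(Xn @ axes.T, labels[:, None], axis=1).ravel()
    return labels, axes, proj
```

Each resulting axis can be read like a factor: its largest components name the cluster's characteristic features, which is the cluster/factor duality the cited paper exploits.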