2004
DOI: 10.1142/s0129065704001863
A Self-Stabilizing Learning Rule for Minor Component Analysis

Abstract: The paper reviews single-neuron learning rules for minor component analysis and suggests a novel minor component learning rule. In this rule, the weight vector length is self-stabilizing, i.e. it moves towards unit length in each learning step. In simulations with low- and medium-dimensional data, the performance of the novel learning rule is compared with previously suggested rules.
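To illustrate the kind of rule the abstract describes, the following is a minimal sketch of a self-stabilized single-neuron MCA update, using the rule of Douglas, Kung and Amari (cited as [5] in the excerpts below) rather than the exact rule proposed in this paper. The minor component is the eigenvector of the data covariance matrix with the smallest eigenvalue; the learning rate, data dimensions, and covariance values here are illustrative assumptions.

```python
import numpy as np

# Sketch of the Douglas-Kung-Amari self-stabilized MCA rule (not the
# paper's own rule): Delta w = -eta * ((w'w)^2 * y * x - y^2 * w),
# where y = w'x. The weight norm is driven towards unit length.
rng = np.random.default_rng(0)
eta = 0.005                          # learning rate (assumed value)
w = rng.standard_normal(2)
w /= np.linalg.norm(w)               # start from a random unit vector

# Zero-mean data with covariance diag(5.0, 0.5); the true minor
# component is therefore the second axis, [0, 1].
scales = np.array([np.sqrt(5.0), np.sqrt(0.5)])
for _ in range(50_000):
    x = scales * rng.standard_normal(2)
    y = w @ x                        # neuron output
    nrm2 = w @ w                     # squared weight norm
    w -= eta * (nrm2**2 * y * x - y**2 * w)

alignment = abs(w[1]) / np.linalg.norm(w)
print(alignment)                     # near 1: w points along the minor axis
print(np.linalg.norm(w))             # near 1: the weight norm self-stabilizes
```

A quick check of the norm dynamics shows why the rule is called self-stabilizing: the update changes the squared norm by roughly -eta * y^2 * ||w||^2 * (||w||^2 - 1), which shrinks weights longer than unit length and grows shorter ones.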

Cited by 50 publications (8 citation statements) | References 19 publications
“…A computationally efficient implementation can be achieved by building on results from the artificial neural networks research community. The latter has devoted much effort to solving eigenvector problems related to autocovariance matrices, which has led to a multitude of efficient algorithms to perform this task [4]. For our purposes, we employ the algorithm proposed by Douglas et al in [5].…”
Section: B. Tracking of N Using a Neural Network (mentioning, confidence: 99%)
“…Recently, a few self-stabilizing MCA algorithms have been proposed in [7,28]; how does the performance of our TLS neuron compare with the above-mentioned algorithms? Following the analysis method of [5,18], the weight norms of the above three algorithms can be obtained as follows:…”
Section: The TLS Linear Neuron with a Self-Stabilizing Algorithm (mentioning, confidence: 99%)
“…The learning curves above indicate that our algorithm performs well for larger learning factors or in higher-noise environments. To compare the performance of the proposed algorithm with other congener self-stabilizing algorithms [7,28], another simulation experiment was also performed. The performance comparison between the proposed algorithm and the algorithms of [7,28] is shown in Figs.…”
Section: Computer Simulations (mentioning, confidence: 99%)
“…Gaussian processes classification (GPC) represents one of the most important practical Bayesian classification methods [30, 31]. Overviews and principles of Gaussian process and its classification applications can be found in [30] and [31].…”
Section: Related Work (mentioning, confidence: 99%)