2000
DOI: 10.1109/97.841157

Orthogonal Oja algorithm

Cited by 66 publications (26 citation statements)
References 8 publications
“…At first, we consider the simplest (and hence cheapest) algorithm for the least eigenvector extraction, namely the OOja [3] which is a gradient type method of linear complexity. The estimate of the least…”
Section: MNS-OOja (mentioning)
confidence: 99%
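The snippet above describes OOja as a gradient-type method of linear complexity for least-eigenvector extraction. As a rough illustration, here is a minimal sketch of a generic sign-flipped (minor-component) Oja update; it is not the exact OOja recursion of [3], and the step size mu and explicit renormalization are illustrative assumptions:

```python
import numpy as np

def oja_minor_step(w, x, mu=1e-3):
    # Anti-Hebbian (sign-flipped) Oja update: a stochastic gradient
    # step that decreases the Rayleigh quotient, steering w toward
    # the least eigenvector of E[x x^H]. Cost: O(n) flops per sample.
    y = np.vdot(w, x)                     # scalar projection w^H x
    w = w - mu * (np.conj(y) * x - abs(y) ** 2 * w)
    return w / np.linalg.norm(w)          # keep the estimate unit-norm
```

Iterating this over the incoming samples x(i) yields a running estimate of the least eigenvector; the multi-vector (subspace) case additionally requires keeping the weight matrix orthonormal, which is the issue OOja addresses.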
“…This problem has been known for many decades, and several solutions exist in the literature, including the Oja (gradient-type) techniques of linear complexity [3,4,5] (i.e., of complexity O(nm) flops per iteration, where n is the size of the observation vector and m is the rank of the desired minor subspace) and the power-like techniques (YAST, PAST) of quadratic complexity O(n^2) [6,7].…”
Section: Introduction (mentioning)
confidence: 99%
“…The OOjaH algorithm proposes to use as H(i) the inverse square root of the matrix I + β^2 p^2 y(i)y^H(i) [6]. However, such an approach diverges slowly from orthonormality in the case of noise subspace tracking.…”
Section: Fast Orthogonal Oja (mentioning)
confidence: 98%
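The matrix whose inverse square root is needed has rank-one structure: I + β^2 p^2 y(i)y^H(i) has eigenvalue 1 + β^2 p^2 ||y(i)||^2 along y(i) and eigenvalue 1 on the orthogonal complement, so H(i) admits a closed form. A sketch of this computation (the function name and the scalar c, standing in for β^2 p^2, are illustrative assumptions):

```python
import numpy as np

def inv_sqrt_rank_one(y, c):
    # Closed-form (I + c*y*y^H)^(-1/2): the matrix acts as 1 + c*||y||^2
    # on span{y} and as the identity elsewhere, so its inverse square
    # root is I + gamma*y*y^H with the scalar gamma below.
    ny2 = np.vdot(y, y).real                       # ||y||^2
    gamma = ((1.0 + c * ny2) ** -0.5 - 1.0) / ny2
    return np.eye(len(y)) + gamma * np.outer(y, y.conj())

# Sanity check: H (I + c y y^H) H should recover the identity.
rng = np.random.default_rng(0)
y = rng.standard_normal(5) + 1j * rng.standard_normal(5)
c = 0.3                                            # stands in for beta^2 p^2
H = inv_sqrt_rank_one(y, c)
A = np.eye(5) + c * np.outer(y, y.conj())
assert np.allclose(H @ A @ H, np.eye(5))
```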
“…Recently, an orthogonalization step of the weight matrix has been introduced into the Oja algorithm in order to improve stability. The new algorithm is called OOjaH [6] (H stands for Householder) and is summarized in Table 1.…”
Section: Orthogonal Oja (mentioning)
confidence: 99%
“…Recently it was shown in [1] that an orthonormal basis of the column space of W(i+1) can be obtained by reflecting the (orthonormal) columns of W(i) with respect to a hyperplane specified by a certain Householder vector h(i), ||h(i)||_2 = 1. The iteration has the form…”
Section: Introduction (mentioning)
confidence: 99%
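Since I - 2h(i)h(i)^H is unitary whenever ||h(i)||_2 = 1, applying it to an orthonormal W(i) yields an orthonormal W(i+1), and the product costs only O(nm) flops when factored as below. A sketch of the reflection step (the construction of the Householder vector h(i) from [1] is not reproduced here):

```python
import numpy as np

def householder_update(W, h):
    # W_next = (I - 2 h h^H) W, computed as W - 2 h (h^H W) without
    # forming the n x n reflector: O(nm) flops. The reflector is
    # unitary, so orthonormal columns stay orthonormal exactly.
    return W - 2.0 * np.outer(h, h.conj() @ W)

# Sanity check on random real data: orthonormality is preserved.
rng = np.random.default_rng(1)
W, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # orthonormal basis
h = rng.standard_normal(8)
h /= np.linalg.norm(h)                             # enforce ||h||_2 = 1
W_next = householder_update(W, h)
assert np.allclose(W_next.T @ W_next, np.eye(3))
```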