2006
DOI: 10.1016/j.csda.2004.07.010
Principal component analysis of binary data by iterated singular value decomposition

Cited by 86 publications (67 citation statements); References 20 publications.
“…Although its linear convergence rate is necessarily slower than the gold standard of the Newton method, it has many applications in large-scale optimization and is useful when high precision is not needed. Besides the MDS applications reviewed in this paper, majorization has been used in many other statistical techniques, such as, for example, support vector machines (Groenen, Nalbantov, and Bioch 2008) and logistic regression (De Leeuw 2006). It is our expectation that a decade from now majorization will be included in many nonlinear optimization textbooks.…”
Section: Discussion
confidence: 99%
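The majorization idea the excerpt refers to can be illustrated on a toy problem. The sketch below (names like `mm_median` are mine, not from the cited papers) applies iterative majorization to the convex but non-smooth objective f(x) = Σ_i |x − a_i|: each step replaces f by a quadratic surrogate that touches it at the current iterate and minimizes that surrogate, which guarantees monotone descent.

```python
import numpy as np

def mm_median(a, iters=100, eps=1e-8):
    """Minimize f(x) = sum_i |x - a_i| by iterative majorization.

    Uses the quadratic majorizer |t| <= t**2 / (2|s|) + |s|/2
    (equality at |t| = |s|), so each step minimizes a weighted
    least-squares surrogate that touches f at the current iterate.
    """
    x = np.mean(a)  # starting point
    for _ in range(iters):
        # eps guards against division by zero when x hits a data point
        w = 1.0 / np.maximum(np.abs(x - a), eps)
        x = np.sum(w * a) / np.sum(w)  # closed-form minimizer of the surrogate
    return x

a = np.array([1.0, 2.0, 3.0, 4.0, 100.0])
x_star = mm_median(a)  # converges toward the median of a, i.e. 3.0
```

The minimizer of Σ_i |x − a_i| is the sample median, so the iteration drifts from the (outlier-sensitive) mean toward 3.0; this descent-without-derivatives behavior is exactly what makes majorization attractive when Newton steps are unavailable or too expensive.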
“…and the equality holds when x = y [36,37]. This equation provides quadratic upper bounds for the first term of (3.5) at the tangent point y.…”
Section: Optimization Algorithm
confidence: 99%
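The bound in (3.5) is not reproduced in the excerpt, but the standard form of a quadratic upper bound with equality at the tangent point y (which the excerpt's wording suggests) can be written as follows; this is a generic illustration, not the cited equation itself:

```latex
% Generic quadratic majorization: if f has an L-Lipschitz gradient, then
% for all x,
%   f(x) \le g(x \mid y),   with equality at the tangent point x = y, where
g(x \mid y) = f(y) + f'(y)\,(x - y) + \tfrac{L}{2}\,(x - y)^2 .
```

Minimizing g(· | y) instead of f at each step is what yields the majorization iterations discussed in the surrounding citations.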
“…where U and V are orthogonal matrices, U ∈ R^{m×m}, V ∈ R^{n×n}; D ∈ R^{m×n} is the diagonal matrix D = [diag(s_1, s_2, …, s_q), O] or its transpose (according to whether m ≤ n or m ≥ n), O is the zero matrix, and q = min(m, n). The values s_1 ≥ s_2 ≥ … ≥ s_q ≥ 0 are called the singular values of the matrix A. The SVD method has been widely applied in many fields in recent years, such as data compression [1,2], system recognition [3], adaptive filtering [4,5], principal component analysis (PCA) [6,7], noise reduction [8–10], faint signal extraction [11,12], machine condition monitoring [13], and so on. For example, Ahmed et al. use SVD to compress the electrocardiogram (ECG) signal: their main idea is to arrange the ECG signal into a rectangular matrix, compute its SVD, then discard the components represented by the small singular values and keep only those represented by the large singular values, so that the ECG signal can be greatly compressed [1].…”
Section: Introduction
confidence: 99%
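The truncation scheme described in this excerpt can be sketched in a few lines of numpy. The function name `svd_truncate` and the synthetic "signal matrix" below are illustrative stand-ins for the reshaped ECG matrix of [1], not the authors' actual pipeline:

```python
import numpy as np

def svd_truncate(A, k):
    """Rank-k approximation of A: keep only the k largest singular values.

    This is the compression idea sketched above: components belonging to
    small singular values are discarded, dominant components are kept.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return (U[:, :k] * s[:k]) @ Vt[:k, :]

# A noisy low-rank matrix standing in for the reshaped ECG signal.
rng = np.random.default_rng(0)
signal = np.outer(rng.standard_normal(50), rng.standard_normal(40))  # rank 1
noisy = signal + 0.01 * rng.standard_normal((50, 40))
denoised = svd_truncate(noisy, 1)
```

Because the underlying matrix here has rank 1, keeping a single singular triple both compresses the data (50 + 40 + 1 numbers instead of 50 × 40) and suppresses most of the added noise, which is the dual compression/denoising use of SVD the excerpt lists.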