Self-taught dimensionality reduction on the high-dimensional small-sized data (2013)
DOI: 10.1016/j.patcog.2012.07.018

Cited by 131 publications (51 citation statements) · References 34 publications
“…The algorithm STDR [18] employs the same regularizers as our JGSC but utilizes a robust loss function. Moreover, STDR learns the bases of the training data from external data, whereas JGSC obtains them from the training data itself, because STDR assumes learning with limited training data and JGSC makes no such assumption.…”
Section: Discussion
confidence: 99%
“…By observing Eq. 5, we find that both E_i and D depend on the value of W. In this paper, following the literature [18,42], we design a novel iterative algorithm (i.e., Algorithm 1) to optimize Eq. 4 and then prove its convergence. Here we introduce Theorem 1 to guarantee that Eq. 4 monotonically decreases in each iteration of Algorithm 1.…”
Section: Optimization
confidence: 95%
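The alternating scheme quoted above (refresh the W-dependent terms, then re-solve for W, with a monotone-decrease guarantee) is a pattern common to ℓ2,1-style objectives. Since Eq. 4 and Eq. 5 are not reproduced on this page, the following is only a minimal sketch of one representative instance, iteratively reweighted least squares for min_W ||XW − Y||_F² + λ||W||_{2,1}; every name in it is illustrative, not the cited paper's notation or API.

```python
import numpy as np

def irls_l21(X, Y, lam=1.0, n_iter=50, tol=1e-8):
    """Sketch: iteratively reweighted least squares for the assumed
    objective  min_W ||X @ W - Y||_F^2 + lam * ||W||_{2,1}.
    A representative instance of the alternating scheme described in
    the quote above, not the cited paper's actual Eq. 4."""
    eps = 1e-10                                  # guard against zero row norms
    W = np.linalg.lstsq(X, Y, rcond=None)[0]     # warm start
    prev = np.inf
    for _ in range(n_iter):
        # D depends on the current W: D_jj = 1 / (2 * ||w_j||_2)
        D = np.diag(1.0 / (2.0 * (np.linalg.norm(W, axis=1) + eps)))
        # With D fixed, the W-subproblem is convex with a closed form
        W = np.linalg.solve(X.T @ X + lam * D, X.T @ Y)
        obj = np.sum((X @ W - Y) ** 2) + lam * np.linalg.norm(W, axis=1).sum()
        if prev - obj < tol:                     # objective is non-increasing
            break
        prev = obj
    return W
```

Each pass recomputes the weighting matrix D from the current row norms of W and then solves the convex W-subproblem exactly, which is what yields the per-iteration monotone decrease that Theorem 1-style arguments certify.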
“…For example, the local HSV feature is less robust to changes in frame rate, video length, and captions. SIFT is sensitive to changes in contrast, brightness, scale, rotation, camera viewpoint, and so on [13,42].…”
Section: Introduction
confidence: 99%
“…As in most dictionary learning methods [45], [50], the LVE step in the proposed hashing algorithm is not jointly convex in Φ and Λ, but it is convex in each of them when the other is fixed. In the alternating optimization, the energy therefore decreases step by step.…”
Section: Algorithm 1: The Algorithm of E-SELVE: Training Stage
confidence: 99%
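The bi-convex structure quoted above can be sketched generically. The E-SELVE LVE energy itself is not reproduced here, so the sketch below assumes a ridge-penalized factorization min_{Φ,Λ} ||X − ΦΛ||_F² + λ||Λ||_F², which shares the quoted property: convex in Φ with Λ fixed and in Λ with Φ fixed, so each exact half-step can only lower the energy.

```python
import numpy as np

def alternating_dictionary_learning(X, k=32, lam=0.1, n_iter=30):
    """Sketch of block-coordinate dictionary learning for the assumed
    energy  ||X - Phi @ Lam||_F^2 + lam * ||Lam||_F^2  (d x n data X,
    d x k dictionary Phi, k x n codes Lam). Each half-step solves its
    convex subproblem exactly, so the energy never increases."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    Phi = rng.standard_normal((d, k))
    energies = []
    for _ in range(n_iter):
        # Lam-step (Phi fixed): ridge regression, convex in Lam
        Lam = np.linalg.solve(Phi.T @ Phi + lam * np.eye(k), Phi.T @ X)
        # Phi-step (Lam fixed): ordinary least squares, convex in Phi
        Phi = X @ Lam.T @ np.linalg.pinv(Lam @ Lam.T)
        energies.append(np.sum((X - Phi @ Lam) ** 2) + lam * np.sum(Lam ** 2))
    return Phi, Lam, energies
```

Tracking `energies` lets one verify the step-by-step decrease numerically; the actual hashing algorithm's subproblem solvers would differ, but the alternation structure is the same.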