2018
DOI: 10.1016/j.image.2018.03.005

New semi-supervised classification using a multi-modal feature joint L21-norm based sparse representation


Cited by 5 publications (3 citation statements)
References 25 publications
“…noise contamination including multiplicative speckle noise and additive white Gaussian noise, limited training resources, resolution variance, partial occlusion, etc. [7–47]. In practice, however, it seems that the most crucial EOCs are added speckle noise, different depression angles, and a limited number of training samples.…”
Section: Results (mentioning)
confidence: 99%
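As a concrete illustration of the two corruption models named in the statement above, the short sketch below contrasts additive white Gaussian noise with multiplicative (gamma-distributed) speckle on a toy image chip. It is a generic illustration only; the chip, the number of looks, and the noise levels are assumptions and are not taken from the cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def add_awgn(image, sigma=0.1):
    """Additive white Gaussian noise: y = x + n, with n ~ N(0, sigma^2)."""
    return image + rng.normal(0.0, sigma, image.shape)

def add_speckle(image, looks=4):
    """Multiplicative speckle: y = x * s, with s gamma-distributed with unit mean,
    a common model for L-look SAR intensity images (the look count is an assumption)."""
    speckle = rng.gamma(looks, 1.0 / looks, image.shape)
    return image * speckle

# Toy 64x64 "chip" with a bright square target on a dark background.
chip = np.zeros((64, 64))
chip[24:40, 24:40] = 1.0

# The background region shows the difference: additive noise perturbs it,
# while multiplicative speckle leaves zero-valued pixels at zero.
print("AWGN std on background:   ", add_awgn(chip)[0:8, 0:8].std())
print("Speckle std on background:", add_speckle(chip)[0:8, 0:8].std())
```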
“…Considering the above facts, efforts have been made to develop improved SRC algorithms. A summary can be presented as follows: (i) using kernel methods to transfer samples to new higher-dimensional spaces where the classes can be linearly discriminated [29–31], (ii) utilising manifold learning [32–34], (iii) fusing SRC with other classification methods [24, 35], (iv) using the l2-norm [36, 37] or other norms [38, 39] instead of the l1-norm in SRC, and (v) acquiring the dictionary via DL methods instead of using the training samples directly, which can be very effective for the SR and SRC results [20, 40–42]. Based on the latter facts, and making use of the Fisher criterion, Zhang and co-authors [43, 44] introduced Fisher discriminative DL (FDDL).…”
Section: Introduction (mentioning)
confidence: 99%
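Item (iv) of the quoted summary contrasts l1-norm coding with l2-norm substitutes. The sketch below shows a minimal, generic version of both: SRC with an l1 (Lasso) coder and its closed-form l2 (collaborative representation) counterpart, each classifying by class-wise reconstruction residual. The toy dictionary, the scikit-learn coder, and all parameter values are assumptions for illustration, not the cited authors' implementations.

```python
import numpy as np
from sklearn.linear_model import Lasso

def src_classify(y, D, labels, alpha=0.01):
    """SRC: code the test sample y over the dictionary of training samples D
    with an l1 penalty, then assign the class whose atoms give the smallest
    reconstruction residual."""
    coder = Lasso(alpha=alpha, fit_intercept=False, max_iter=5000)
    coder.fit(D, y)
    x = coder.coef_
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

def crc_classify(y, D, labels, lam=0.01):
    """l2-norm (collaborative representation) variant: ridge coding has a
    closed form, since D^T D + lam*I is always invertible, so no l1 solver
    is needed."""
    x = np.linalg.solve(D.T @ D + lam * np.eye(D.shape[1]), D.T @ y)
    residuals = {c: np.linalg.norm(y - D[:, labels == c] @ x[labels == c])
                 for c in np.unique(labels)}
    return min(residuals, key=residuals.get)

# Toy example: two classes, 5 training samples each, 20-dimensional features.
rng = np.random.default_rng(0)
D = np.hstack([rng.normal(0, 1, (20, 5)) + 2, rng.normal(0, 1, (20, 5)) - 2])
D /= np.linalg.norm(D, axis=0)               # column-normalised dictionary
labels = np.array([0] * 5 + [1] * 5)
y = D[:, 2] + 0.05 * rng.normal(0, 1, 20)    # noisy copy of a class-0 sample
print(src_classify(y, D, labels), crc_classify(y, D, labels))
```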
“…where W_1 ∈ R^(d×m) and W_2 ∈ R^(m×m) are the coefficient matrices of each part, and λ_1 is the regularization parameter. As the l2,1-norm is commonly employed as a sparse regularizer [35], ||W_1|| is a regularizer that forces sparsity on the matrix W.…”
Section: Label Distribution Learning (mentioning)
confidence: 99%
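The quoted statement uses the l2,1-norm as a sparse regularizer on a coefficient matrix. As a generic reference, the sketch below computes the l2,1-norm (the sum of the l2 norms of the rows) and the row-wise shrinkage (proximal) step that a proximal-gradient solver would apply to such a term; the matrix sizes, the shrinkage threshold, and the choice of solver are assumptions, not details of the cited model.

```python
import numpy as np

def l21_norm(W):
    """l2,1-norm: sum of the l2 norms of the rows of W. Penalising it drives
    entire rows of W to zero (row sparsity), which is why it is used as a
    structured sparse regularizer."""
    return np.sum(np.linalg.norm(W, axis=1))

def l21_prox(W, tau):
    """Proximal operator of tau * ||.||_{2,1}: shrink each row towards zero
    by tau, zeroing rows whose l2 norm falls below tau. This is the usual
    building block when the l2,1 term is handled by proximal-gradient methods
    (an assumption about the solver, not taken from the cited work)."""
    row_norms = np.linalg.norm(W, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(row_norms, 1e-12), 0.0)
    return W * scale

# Toy W_1 in R^(d x m) with d = 6 features and m = 3 labels.
rng = np.random.default_rng(0)
W1 = rng.normal(0, 1, (6, 3))
W1[4:, :] *= 0.05                      # two nearly irrelevant feature rows

print("||W_1||_{2,1} =", round(l21_norm(W1), 3))
print("rows surviving prox:",
      np.flatnonzero(np.linalg.norm(l21_prox(W1, 0.2), axis=1) > 0))
```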