2018
DOI: 10.1016/j.sigpro.2018.06.005

Learning a discriminative dictionary for classification with outliers

Cited by 6 publications (3 citation statements)
References 38 publications

“…A key premise of FS is that the data contains redundant or irrelevant features, and thus removing those features does not result in loss of information in the prediction [46]. Dictionary learning denotes a LIP whose linear operator A and its representation m are learned from the observed data d, and it arises in many applications such as image classification [47], outlier detection [48], and distributed CS [49].…”
Section: LIPs With Different Structures of A (mentioning)
confidence: 99%
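
The snippet above casts dictionary learning as a linear inverse problem in which both the operator A and the representation m are learned from the observed data d. A minimal sketch of that joint learning, using scikit-learn's DictionaryLearning as a generic solver (the variable names A, m, d follow the snippet; this is not the cited paper's algorithm):

    import numpy as np
    from sklearn.decomposition import DictionaryLearning

    # Observed data d: rows are samples (scikit-learn's row-major
    # convention transposes the snippet's d = A m into d ~ m @ A).
    rng = np.random.default_rng(0)
    d = rng.standard_normal((200, 64))

    # Jointly learn the linear operator A (dictionary) and the sparse
    # representations m; alpha trades reconstruction error for sparsity.
    dl = DictionaryLearning(n_components=32, alpha=1.0, max_iter=50,
                            transform_algorithm="lasso_lars", random_state=0)
    m = dl.fit_transform(d)   # sparse codes m, shape (200, 32)
    A = dl.components_        # learned dictionary A, shape (32, 64)

    rel_err = np.linalg.norm(d - m @ A) / np.linalg.norm(d)
    print(f"relative reconstruction error: {rel_err:.3f}")
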
“…If any data point is extremely different from the whole data in a process, then it is marked as an outlier (Qi & Chen, 2018). Outliers may emerge because of various causes such as malicious attacks, environmental factors, human errors, abnormal conditions, measurement errors, and hardware malfunction.…”
Section: Introduction (mentioning)
confidence: 99%
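
By the definition quoted above, a point "extremely different from the whole data" is an outlier. A textbook way to flag such points (a generic illustration, not the detection method of the cited works) is the robust z-score built from the median and MAD:

    import numpy as np

    def flag_outliers(x: np.ndarray, thresh: float = 3.5) -> np.ndarray:
        """Flag points whose robust (median/MAD) z-score exceeds thresh."""
        med = np.median(x)
        mad = np.median(np.abs(x - med))           # median absolute deviation
        robust_z = 0.6745 * np.abs(x - med) / mad  # 0.6745 ~ Phi^-1(0.75)
        return robust_z > thresh

    data = np.array([1.0, 1.2, 0.9, 1.1, 15.0])  # 15.0 is far from the rest
    print(flag_outliers(data))  # [False False False False  True]
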
“…Second, discriminative regularizations can be applied to the sparse coding vectors, which are directly responsible for classification. The sparse codes can be regularized via class discrimination [20], [51], label consistency [18], graph regularization [21], [23], or support discrimination [25], [52], [53]….”
Section: Objectives and Contributions (mentioning)
confidence: 99%
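
The regularizations listed in this snippet all act on the sparse codes that feed the classifier. As one concrete shape, a label-consistency penalty in the spirit of [18] adds a term pulling each code toward a class-specific target; the sketch below is illustrative, and the names q, T, lam, beta are assumptions rather than the paper's notation:

    import numpy as np

    def discriminative_objective(d, A, m, q, T, lam=0.1, beta=1.0):
        """||d - A m||^2 + lam * ||m||_1 + beta * ||q - T m||^2."""
        recon = np.sum((d - A @ m) ** 2)           # reconstruction fidelity
        sparsity = lam * np.sum(np.abs(m))         # l1 sparsity on the code
        discrim = beta * np.sum((q - T @ m) ** 2)  # label-consistency term
        return recon + sparsity + discrim

    rng = np.random.default_rng(0)
    n_feat, n_atoms, n_classes = 64, 32, 10
    A = rng.standard_normal((n_feat, n_atoms))     # dictionary
    T = rng.standard_normal((n_classes, n_atoms))  # code -> label-space map
    d = rng.standard_normal(n_feat)                # one signal
    m = rng.standard_normal(n_atoms)               # its sparse code
    q = np.eye(n_classes)[3]                       # one-hot target, class 3
    print(discriminative_objective(d, A, m, q, T))

Minimizing such an objective over m (and over A during training) couples reconstruction with classification, which is what makes the learned dictionary discriminative.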