2019
DOI: 10.1007/s11280-019-00766-x
Robust SVM with adaptive graph learning

Cited by 83 publications (17 citation statements)
References 38 publications
“…As a consequence, the samples with large estimation errors are regarded as outliers and their influence is reduced. In the literature, a number of robust loss functions have been developed, including the Cauchy function, the Geman–McClure estimator, and others ( Hu, Zhu, Zhu, Gan, 2020 ; Zhu, Gan, Lu, Li, Zhang, 2020 ). However, these robust loss functions were not designed to address imbalanced classification.…”
Section: Methods (mentioning)
confidence: 99%
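The quoted passage names two of these robust losses. As an illustration only (not the cited papers' exact formulations; the scale parameter `c` and function names are assumptions), a minimal NumPy sketch shows why both downweight outliers relative to the squared loss:

```python
import numpy as np

def cauchy_loss(r, c=1.0):
    # Cauchy (Lorentzian) robust loss: grows only logarithmically,
    # so a large residual contributes far less than under squared loss.
    return (c**2 / 2.0) * np.log1p((r / c) ** 2)

def geman_mcclure_loss(r, c=1.0):
    # Geman–McClure estimator: saturates at c^2/2, so an arbitrarily
    # large outlier adds only a bounded amount to the objective.
    r2 = (r / c) ** 2
    return (c**2 / 2.0) * r2 / (1.0 + r2)

residuals = np.array([0.1, 1.0, 10.0, 100.0])
print(cauchy_loss(residuals))        # grows slowly with the residual
print(geman_mcclure_loss(residuals)) # approaches the bound 0.5
```

Either loss can replace the hinge or squared loss in an SVM-style objective; the bounded influence of large residuals is what the excerpt means by reducing the influence of outliers.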
“…(3) employs the ℓ0-norm constraint for each class to output exactly the predefined number of non-zero elements. By contrast, self-paced learning uses the ℓ1-norm constraint over all samples, or other robust loss functions ( Zhu, Zhu, Zheng, 2020 ; Hu, Zhu, Zhu, Gan, 2020 ), to estimate the sample weights without guaranteeing the exact number of non-zero elements. As a result, compared with self-paced learning, which only uses sample weights to reduce the influence of outliers, our method uses the sample weights to remove outliers entirely from the model construction and also addresses imbalanced classification.…”
Section: Methods (mentioning)
confidence: 99%
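The contrast drawn above, an ℓ0-norm constraint that keeps exactly a predefined number of samples per class versus hard self-paced learning that keeps however many samples fall below an age parameter λ, can be sketched as follows. The function names, the per-class top-k selection, and the hard-threshold form of self-paced learning are illustrative assumptions, not the cited paper's implementation:

```python
import numpy as np

def l0_class_weights(losses, labels, k):
    # l0-style constraint: per class, keep exactly the k samples with
    # the smallest loss (weight 1) and zero out the rest, so the number
    # of non-zero weights per class is guaranteed.
    w = np.zeros_like(losses)
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        keep = idx[np.argsort(losses[idx])[:k]]
        w[keep] = 1.0
    return w

def self_paced_weights(losses, lam):
    # Hard self-paced learning: weight 1 for "easy" samples whose loss
    # is below lam; the number of samples kept is data-dependent.
    return (losses < lam).astype(float)

losses = np.array([0.2, 0.9, 0.1, 1.5, 0.3, 0.05])
labels = np.array([0, 0, 0, 1, 1, 1])
print(l0_class_weights(losses, labels, k=2))  # exactly 2 kept per class
print(self_paced_weights(losses, lam=0.5))   # count depends on lam
```

The per-class top-k rule also speaks to the imbalance point in the excerpt: each class retains the same number of training samples regardless of how many it started with, whereas a single global threshold can discard a minority class almost entirely.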
“…However, the performance of data understanding depends heavily on the learning techniques. In addition to volume, real-world data naturally comprise multiple representations or sources, and such multi-source data provide enriched information from different perspectives [3,4]. Multi-source data understanding is therefore one of the most active topics in research and business today, and it is remarkably useful in practical applications.…”
mentioning
confidence: 99%