2020
DOI: 10.1609/aaai.v34i04.6029
Quadruply Stochastic Gradient Method for Large Scale Nonlinear Semi-Supervised Ordinal Regression AUC Optimization

Abstract: Semi-supervised ordinal regression (S2OR) problems are ubiquitous in real-world applications, where only a few ordered instances are labeled and massive instances remain unlabeled. Recent research has shown that directly optimizing the concordance index or AUC can impose a better ranking on the data than optimizing the traditional error rate in ordinal regression (OR) problems. In this paper, we propose an unbiased objective function for S2OR AUC optimization based on the ordinal binary decomposition approach. Besi…

Cited by 11 publications (5 citation statements)
References 21 publications
“…Hitherto, a substantial amount of efforts have been made to explore the AUC optimization method/theory for the multipartite ranking problem [11], [13], [24], [63], [69]. Moreover, a recent work [66] proposes a novel nonlinear semi-supervised multipartite ranking problem for large-scale datasets. It is noteworthy that multipartite ranking approaches could solve multiclass problems only if the classes are ordinal values for the same semantic concept.…”
Section: Related Work
Confidence: 99%
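To make the multipartite-ranking objective mentioned above concrete, the following is a minimal illustrative sketch (not the paper's algorithm) of the pairwise concordance index: the fraction of pairs with different ordinal labels in which the higher-labeled instance receives the higher score. The function name and the toy inputs are hypothetical.

```python
from itertools import combinations

def multipartite_auc(scores, labels):
    """Fraction of concordant pairs among instances with different
    ordinal labels (score ties count half) -- the pairwise concordance
    index that multipartite-ranking AUC methods optimize surrogates of."""
    concordant = 0.0
    total = 0
    for (s_i, y_i), (s_j, y_j) in combinations(zip(scores, labels), 2):
        if y_i == y_j:
            continue  # only cross-class pairs are ranked
        total += 1
        # Orient the pair so the instance with the higher ordinal label comes first.
        hi, lo = (s_i, s_j) if y_i > y_j else (s_j, s_i)
        if hi > lo:
            concordant += 1.0
        elif hi == lo:
            concordant += 0.5
    return concordant / total if total else 0.0

print(multipartite_auc([0.9, 0.7, 0.4, 0.2], [3, 2, 2, 1]))  # perfect ordering -> 1.0
```

Scalable methods avoid this explicit O(n^2) pair enumeration; the sketch only fixes the quantity being maximized.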
“…A convergence rate of the order O(1/T) for the optimization error was established for the strongly regularized empirical AUC maximization problem. The work [131] also considers a similar algorithm for semi-supervised ordinal regression based on AUC optimization.…”
Section: Stochastic AUC Maximization - The Third Age
Confidence: 99%
“…As a result, f_{t+1} may be outside of the RKHS H, making it hard to directly evaluate the error between f_{t+1}(·) and the optimal solution f*. In this case, we utilize h_{t+1}(·) as an intermediate value to decompose the difference between f_{t+1} and f* (Shi et al. 2020):…”
Section: Convergence Analysis
Confidence: 99%
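The decomposition referred to in the quote is, in one plausible form under the quote's notation (with h_{t+1} ∈ H the intermediate function; the norm choice is an assumption, since f_{t+1} outside H has no H-norm), a triangle-inequality sketch:

```latex
\| f_{t+1} - f^{*} \| \;\le\; \underbrace{\| f_{t+1} - h_{t+1} \|}_{\text{approximation error}} \;+\; \underbrace{\| h_{t+1} - f^{*} \|}_{\text{optimization error in } \mathcal{H}}
```

The first term is then controlled by the function-approximation step that pushed f_{t+1} out of H, and the second by a standard RKHS stochastic-gradient argument.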