2020
DOI: 10.3934/cpaa.2020188

Stochastic AUC optimization with general loss

Abstract: Recently, there has been considerable work on developing efficient stochastic optimization algorithms for AUC maximization. However, most of it focuses on the least square loss, which may not be the best option in practice. The main difficulty in dealing with a general convex loss is the pairwise nonlinearity with respect to the sampling distribution generating the data. In this paper, we use Bernstein polynomials to uniformly approximate the general losses, which makes it possible to decouple the pairwise nonlinearity. In particular…
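As a rough illustration of the uniform approximation idea in the abstract (not the paper's implementation), the sketch below builds the degree-m Bernstein polynomial of a generic convex loss on [0, 1] and reports the uniform approximation error; the particular loss, degrees, and evaluation grid are arbitrary choices for the demo.

```python
# Illustrative sketch (assumed setup, not the paper's code): degree-m Bernstein
# approximation of a generic convex surrogate loss on [0, 1].
import numpy as np
from scipy.stats import binom

def bernstein_approx(loss, m, t):
    """Evaluate the degree-m Bernstein polynomial of `loss` at points t in [0, 1]."""
    k = np.arange(m + 1)
    # Bernstein basis C(m,k) t^k (1-t)^(m-k) equals the Binomial(m, t) pmf at k.
    basis = binom.pmf(k[None, :], m, t[:, None])   # shape (len(t), m+1)
    return basis @ loss(k / m)                     # sum_k loss(k/m) * basis_k(t)

# Example: a logistic-type convex loss rescaled to the unit interval (an assumption).
loss = lambda s: np.log1p(np.exp(-(2.0 * s - 1.0)))
t = np.linspace(0.0, 1.0, 501)

for m in (5, 20, 80):
    err = np.max(np.abs(bernstein_approx(loss, m, t) - loss(t)))
    print(f"degree m = {m:3d}, uniform error ~ {err:.4f}")
```

For a continuous loss the uniform error shrinks as the degree m grows, which is what allows a sufficiently high-degree polynomial to stand in for the original loss; writing the basis as a Binomial pmf keeps the evaluation numerically stable.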

Cited by 7 publications (9 citation statements)
References 17 publications
“…Table 5. Comparison of different studies for stochastic AUC maximization algorithms, where T is the total number of iterations, d denotes the dimensionality of the input data, B is the batch size used in [167] on which the parameter ρ_k^+(B) is dependent, m is the degree of the Bernstein polynomials used to approximate the general convex loss in [164], ρ_k^+(B) and ρ_k^- are the RSC and RSS parameters used in [167], and β, M and η respectively denote the strong-convexity parameter, the smooth parameter and the constant step size in [31]. Opt.…”
Section: Discussion (mentioning)
confidence: 99%
“…Recently, the work [164] also proposes a stochastic primal-dual algorithm for solving AUC maximization with a general convex pairwise loss. They propose to use Bernstein polynomials [117] to uniformly approximate a general loss.…”
Section: Stochastic AUC Maximization - The Third Age (mentioning)
confidence: 99%
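The decoupling referred to in the statement above can be checked numerically: once the pairwise loss is replaced by a polynomial in the score difference w·x_pos - w·x_neg, its average over positive/negative pairs factorizes into per-class score moments, so no pair enumeration is needed. The sketch below only illustrates that identity on synthetic data with arbitrary polynomial coefficients; it is not the primal-dual algorithm of [164].

```python
# Illustrative check (assumed setup): a polynomial surrogate p(s) = sum_k c_k s^k
# of the score difference s = w.x_pos - w.x_neg has a pairwise average that
# factorizes into per-class score moments.
import numpy as np
from math import comb

rng = np.random.default_rng(0)
d, n_pos, n_neg = 5, 400, 600
w = rng.normal(size=d)
x_pos = rng.normal(loc=0.3, size=(n_pos, d))
x_neg = rng.normal(loc=-0.3, size=(n_neg, d))
c = np.array([0.5, -1.0, 0.75, 0.2])          # arbitrary polynomial coefficients

s_pos, s_neg = x_pos @ w, x_neg @ w

# Direct pairwise average: touches all n_pos * n_neg pairs.
diff = s_pos[:, None] - s_neg[None, :]
direct = np.mean(np.polynomial.polynomial.polyval(diff, c))

# Decoupled form: mean over pairs of (a - b)^k = sum_j C(k, j) * mean(a^j) * mean((-b)^(k-j)),
# so only per-class moments are needed.
mom_pos = [np.mean(s_pos ** j) for j in range(len(c))]
mom_neg = [np.mean((-s_neg) ** j) for j in range(len(c))]
decoupled = sum(
    c[k] * sum(comb(k, j) * mom_pos[j] * mom_neg[k - j] for j in range(k + 1))
    for k in range(len(c))
)

print(f"pairwise average : {direct:.6f}")
print(f"decoupled moments: {decoupled:.6f}")   # matches the pairwise average
```

Because only per-class moments enter, the cost is linear in the number of samples rather than quadratic in the number of pairs, which is what makes stochastic updates with polynomially approximated general losses tractable.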
“…[59], [60] further accelerate this framework with tighter convergence rates. On top of the reformulation framework, [75] also provides an acceleration framework for general loss functions, where the loss functions are approximated by Bernstein polynomials. [18] proposes a novel large-scale nonlinear AUC maximization method based on the triply stochastic gradient descent algorithm.…”
Section: Related Work (mentioning)
confidence: 99%
“…Unfortunately, lossless acceleration for general losses is impossible in general. In this subsection, following the spirit of [75], we provide a discussion on how to accelerate general loss functions approximately. We can, however, construct an approximation framework based on Bernstein polynomials.…”
Section: G4 Acceleration Scheme For General Losses (mentioning)
confidence: 99%