2020
DOI: 10.48550/arxiv.2003.08109
Preprint

Efficient improper learning for online logistic regression

Rémi Jézéquel, Pierre Gaillard, Alessandro Rudi

Abstract: We consider the setting of online logistic regression and consider the regret with respect to the ℓ2-ball of radius B. It is known (see [Hazan et al., 2014]) that any proper algorithm which has logarithmic regret in the number of samples (denoted n) necessarily suffers an exponential multiplicative constant in B. In this work, we design an efficient improper algorithm that avoids this exponential constant while preserving a logarithmic regret. Indeed, [Foster et al., 2018] showed that the lower bound does not…
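To make the setting concrete, the regret compares the learner's cumulative logistic loss to that of the best fixed linear predictor with ‖θ‖ ≤ B. A minimal sketch of these quantities (illustrative only; the function names and the finite comparator set are choices made here, not taken from the paper):

```python
import numpy as np

def logistic_loss(score, label):
    # label in {-1, +1}; score is the learner's real-valued prediction
    return np.log1p(np.exp(-label * score))

def cumulative_loss(scores, labels):
    # total loss of the (possibly improper) online learner over n rounds
    return sum(logistic_loss(s, y) for s, y in zip(scores, labels))

def comparator_loss(theta, X, labels):
    # total loss of the fixed linear predictor x -> <theta, x>
    return sum(logistic_loss(x @ theta, y) for x, y in zip(X, labels))

def regret(scores, X, labels, thetas_in_B_ball):
    # regret against the best of a finite set of comparators with ||theta|| <= B
    # (a crude stand-in for the exact minimum over the whole ball)
    return cumulative_loss(scores, labels) - min(
        comparator_loss(theta, X, labels) for theta in thetas_in_B_ball
    )
```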

Cited by 2 publications (5 citation statements) · References 4 publications
“…Yet these algorithms are still computationally intensive; assuming our theoretical results are predictive of actual performance, one might expect that aggregating-type strategies could still yield improvements over standard empirical risk minimization. Indeed, Jézéquel et al [27] take the computational difficulty of the approximating algorithms in the paper [14] as motivation to develop an efficient improper learning algorithm for the special case of logistic regression, which (roughly) hedges its predictions by pretending to receive both positive and negative examples in future time steps, constructing a loss that depends explicitly on the new data x_t; Jézéquel et al show that it achieves a regret bound with a multiplicative RB factor of the logarithmic regret in Corollary 4. It is unclear how to extend this approach to situations in which the cardinality |Y| of Y is much larger than 1, though this is an interesting question for future work.…”
Section: Experiments and Implementation Details
mentioning
confidence: 99%
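As a rough, non-authoritative illustration of the hedging idea described in the quote above (fit the model twice, once pretending the incoming x_t is labeled +1 and once −1, then combine the two predictions), one might sketch it as follows; this is not the algorithm of Jézéquel et al, and the regularizer and the averaging rule are arbitrary choices made here:

```python
import numpy as np
from scipy.optimize import minimize

def reg_logistic_objective(theta, X, y, lam):
    # L2-regularized logistic loss on the rounds seen so far
    z = X @ theta
    return np.sum(np.log1p(np.exp(-y * z))) + lam * theta @ theta

def hedged_predict(X_past, y_past, x_new, lam=1.0):
    # X_past: (t-1, d) past features, y_past: (t-1,) labels in {-1, +1}, x_new: (d,).
    # Fit one regularized logistic model as if x_new were labeled +1, another
    # as if it were labeled -1, then average the two predicted probabilities.
    # Illustrative only -- not the algorithm of Jézéquel et al.
    d = x_new.shape[0]
    probs = []
    for fake_label in (+1.0, -1.0):
        X = np.vstack([X_past, x_new[None, :]])
        y = np.append(y_past, fake_label)
        theta = minimize(reg_logistic_objective, np.zeros(d), args=(X, y, lam)).x
        probs.append(1.0 / (1.0 + np.exp(-x_new @ theta)))
    # The average of two sigmoids is generally not a sigmoid of any linear score,
    # so the resulting prediction is improper.
    return 0.5 * (probs[0] + probs[1])
```

Because the averaged probability need not equal a sigmoid of ⟨θ, x_t⟩ for any single θ, such a predictor is improper, which is what lets it sidestep the proper-learning lower bound mentioned in the abstract.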
“…Thus, we seek η such that 1/(2p^{3/2}) ≥ η(1 − 1/√p)², that is, 1 ≥ 2η(p^{3/2} − 2p + √p), for all p ∈ [0, 1]. Letting β = √p and solving for the stationary points of β³ − 2β² + β, at √p = β = 1/3 we see it is sufficient that 1 ≥ 2η(1/27 − 2/9 + 1/3) = (8/27)η, or η ≤ 27/8. For L_quad, we have h(p) = 1, so it suffices that I − η(p − e_y)(p − e_y)^T ⪰ 0, or η ≤ 1/2.…”
mentioning
confidence: 98%
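A quick symbolic check of the arithmetic in the quoted step (the polynomial β³ − 2β² + β = β(1 − β)² attains its maximum on [0, 1] at β = 1/3 with value 4/27, giving η ≤ 27/8):

```python
import sympy as sp

# Check of the quoted step-size computation: maximize
# f(beta) = beta**3 - 2*beta**2 + beta = beta*(1 - beta)**2 on [0, 1].
beta = sp.symbols('beta')
f = beta**3 - 2*beta**2 + beta
critical_points = sp.solve(sp.diff(f, beta), beta)         # roots 1/3 and 1
max_value = max(f.subs(beta, c) for c in critical_points)  # 4/27, attained at beta = 1/3
eta_bound = sp.Rational(1, 2) / max_value                  # 1 >= 2*eta*(4/27)  =>  eta <= 27/8
print(critical_points, max_value, eta_bound)
```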
“…Below, we provide two results from Jézéquel et al [16] for online kernel regression with square loss. Kernel-AWV (Jézéquel et al [17]) computes the following estimator.…”
Section: Non-stationary Online Kernel Regression
mentioning
confidence: 99%
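The estimator itself is elided in the quote. As an assumption made here, Kernel-AWV is sketched below in the standard form usually associated with a kernelized Vovk–Azoury–Warmuth forecaster (the kernel choice, regularization λ, and helper names are illustrative; see [17] for the actual definition):

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    # pairwise Gaussian kernel between the rows of A and the rows of B
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma ** 2))

def kernel_awv_predict(X_past, y_past, x_new, lam=1.0, kernel=gaussian_kernel):
    # X_past: (t-1, d) past features, y_past: (t-1,) past targets, x_new: (d,).
    # Solve a kernel ridge problem over the past points plus x_new, with the
    # target of x_new set to 0 (so only its squared prediction is penalized
    # before the true label arrives), then predict at x_new.
    # Assumed form of a kernelized Vovk-Azoury-Warmuth forecaster, not a
    # verbatim transcription of Kernel-AWV from [17].
    X = np.vstack([X_past, x_new[None, :]])
    y_tilde = np.append(y_past, 0.0)
    K = kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y_tilde)
    return (kernel(x_new[None, :], X) @ alpha)[0]
```

The design choice in this sketch is that the incoming point enters the least-squares problem with a target of 0, so its squared prediction is penalized before its true label is revealed.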
“…Theorem 8 (Proposition 1, [16]) Let λ, Y ≥ 0, X ⊂ R^d and Y ⊂ [−Y, Y]. For any RKHS H, for n ≥ 1, for any arbitrary sequence of observations (x_1, y_1), …, (x_n, y_n) ∈ X × Y, the regret of Kernel-AWV (Equation (18), [17]) is upper-bounded for all θ ∈ H as…”
mentioning
confidence: 99%