We consider the setting of online convex optimization (OCO) with exp-concave losses. The best regret bound known for this setting is O(n log T), where n is the dimension and T is the number of prediction rounds (treating all other quantities as constants and assuming T is sufficiently large), and is attainable via the well-known Online Newton Step algorithm (ONS). However, on each iteration ONS must compute a projection (according to some matrix-induced norm) onto the feasible convex set, which is often computationally prohibitive in high-dimensional settings and when the feasible set admits non-trivial structure. In this work we consider projection-free online algorithms for exp-concave and smooth losses, where by projection-free we refer to algorithms that rely only on the availability of a linear optimization oracle (LOO) for the feasible set, which in many applications of interest admits much more efficient implementations than a projection oracle. We present an LOO-based ONS-style algorithm which, using overall O(T) calls to the LOO, guarantees worst-case regret bounded by O(n^{2/3} T^{2/3}) (ignoring all quantities except for n and T). However, our algorithm is most interesting in an important and plausible low-dimensional-data scenario: if the gradients (approximately) span a subspace of dimension at most ρ, with ρ ≪ n, the regret bound improves to O(ρ^{2/3} T^{2/3}), and by applying standard deterministic sketching techniques, both the space and the average additional per-iteration runtime are only O(ρn) (instead of O(n^2)). This improves upon recently proposed LOO-based algorithms for OCO which, while having the same state-of-the-art dependence on the horizon T, suffer from regret/oracle complexity that scales with √n or worse.
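To illustrate why the per-round projection is the bottleneck the abstract refers to, here is a minimal NumPy sketch of an ONS-style update (not the paper's algorithm; the squared-loss stream, the Euclidean-ball feasible set, and all function names are illustrative assumptions). The iterate update itself is cheap, but even for a plain Euclidean ball the projection in the norm induced by the accumulated matrix A has no closed form and is solved below by bisecting on a Lagrange multiplier.

```python
import numpy as np

def a_norm_projection(x, A, r, iters=100):
    """Project x onto the Euclidean ball {y : ||y|| <= r} in the norm
    induced by the PSD matrix A, i.e. minimize (y-x)^T A (y-x).
    The KKT conditions give y(lam) = (A + lam*I)^{-1} A x with lam >= 0;
    ||y(lam)|| is decreasing in lam, so we bisect on lam."""
    if np.linalg.norm(x) <= r:
        return x
    n = len(x)
    Ax = A @ x
    lo, hi = 0.0, 1.0
    # Grow hi until the candidate point is inside the ball.
    while np.linalg.norm(np.linalg.solve(A + hi * np.eye(n), Ax)) > r:
        hi *= 2.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        y = np.linalg.solve(A + mid * np.eye(n), Ax)
        if np.linalg.norm(y) > r:
            lo = mid
        else:
            hi = mid
    # Using hi guarantees the returned point is feasible.
    return np.linalg.solve(A + hi * np.eye(n), Ax)

def ons(stream, x0, r, gamma=1.0, eps=1.0):
    """ONS-style loop on exp-concave squared losses f_t(x) = (a.x - b)^2.
    Each round does a rank-one curvature update and then the expensive
    matrix-norm projection back onto the feasible set."""
    A = eps * np.eye(len(x0))
    x = x0.copy()
    for a, b in stream:
        g = 2.0 * (a @ x - b) * a        # gradient of (a.x - b)^2 at x
        A += np.outer(g, g)              # rank-one Hessian-proxy update
        y = x - np.linalg.solve(A, g) / gamma
        x = a_norm_projection(y, A, r)   # the costly step the paper avoids
    return x
```

In a real high-dimensional instance with a structured feasible set (e.g. a polytope or a spectrahedron), this projection step becomes an expensive constrained quadratic program per round, which is exactly what motivates replacing it with calls to a linear optimization oracle.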
We present new efficient projection-free algorithms for online convex optimization (OCO), where by projection-free we refer to algorithms that avoid computing orthogonal projections onto the feasible set and instead rely on different, potentially much more efficient oracles. While most state-of-the-art projection-free algorithms are based on the follow-the-leader framework, our algorithms are fundamentally different: they are based on the online gradient descent algorithm with a novel and efficient approach to computing so-called infeasible projections. As a consequence, we obtain the first projection-free algorithms which naturally yield adaptive regret guarantees, i.e., regret bounds that hold w.r.t. any sub-interval of the sequence. Concretely, when assuming the availability of a linear optimization oracle (LOO) for the feasible set, on a sequence of length T our algorithms guarantee O(T^{3/4}) adaptive regret and O(T^{3/4}) adaptive expected regret for the full-information and bandit settings, respectively, using only O(T) calls to the LOO. These bounds match the current state-of-the-art regret bounds for LOO-based projection-free OCO, which are not adaptive. We also consider a new natural setting in which the feasible set is accessible through a separation oracle. We present algorithms which, using overall O(T) calls to the separation oracle, guarantee O(√T) adaptive regret and O(T^{3/4}) adaptive expected regret for the full-information and bandit settings, respectively.
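To make the oracle distinction concrete, here is a small NumPy sketch (not the paper's algorithms; the ℓ1-ball feasible set, the Frank-Wolfe-style step, and all names are illustrative assumptions). For the ℓ1-ball, a linear optimization oracle is a single O(n) coordinate scan and a separation oracle is a sign vector, whereas an orthogonal projection would require a soft-thresholding search; a projection-free step can then stay feasible using only convex combinations of LOO answers.

```python
import numpy as np

def loo_l1(g, r=1.0):
    """Linear optimization oracle for the l1-ball {x : ||x||_1 <= r}:
    argmin_v <g, v> over the ball is a signed vertex, found in O(n)."""
    i = int(np.argmax(np.abs(g)))
    v = np.zeros_like(g)
    v[i] = -r * np.sign(g[i])
    return v

def separation_oracle_l1(x, r=1.0):
    """Separation oracle for the same set: return None if x is feasible,
    otherwise a vector w with <w, x> > r >= <w, y> for all feasible y."""
    if np.sum(np.abs(x)) <= r:
        return None
    return np.sign(x)

def frank_wolfe_step(x, g_sum, t, r=1.0):
    """One projection-free step toward the LOO answer for the aggregated
    gradient. The convex combination keeps the iterate feasible, so no
    projection is ever needed."""
    v = loo_l1(g_sum, r)
    sigma = 2.0 / (t + 2.0)
    return (1.0 - sigma) * x + sigma * v
```

The point of the sketch is only the oracle interface: each round costs one O(n) oracle call rather than a projection, which is the access model under which the O(T^{3/4}) and O(√T) adaptive regret bounds above are stated.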