In this paper we show how to accelerate randomized coordinate descent methods and achieve faster convergence rates without paying per-iteration costs in asymptotic running time. In particular, we show how to generalize and efficiently implement a method proposed by Nesterov, giving faster asymptotic running times for various algorithms that use standard coordinate descent as a black box. In addition to providing a proof of convergence for this new general method, we show that it is numerically stable, efficiently implementable, and, in certain regimes, asymptotically optimal. To highlight the computational power of this algorithm, we show how it can be used to create faster linear system solvers in several regimes:

• We show how this method achieves a faster asymptotic runtime than conjugate gradient for solving a broad class of symmetric positive definite systems of equations.

• We improve the best known asymptotic convergence guarantees for Kaczmarz methods, a popular technique for image reconstruction and solving overdetermined systems of equations, by accelerating a randomized algorithm of Strohmer and Vershynin.

• We achieve the best known running time for solving Symmetric Diagonally Dominant (SDD) systems of equations in the unit-cost RAM model, obtaining an O(m log^{3/2} n √(log log n) log((log n)/ε)) asymptotic running time by accelerating a recent solver by Kelner et al.

Beyond the independent interest of these solvers, we believe they highlight the versatility of the approach of this paper, and we hope that they will open the door for further algorithmic improvements in the future.
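As a concrete illustration of the kind of method being accelerated, the following is a minimal sketch of accelerated randomized coordinate descent in the APPROX-style form of Fercoq and Richtárik (uniform sampling), applied to the quadratic f(x) = ½xᵀAx − bᵀx with coordinate-wise smoothness constants Lᵢ = Aᵢᵢ. This is not the generalized, efficiently implemented method of the paper; the function name and parameters are illustrative.

```python
import numpy as np

def accelerated_cd(A, b, iters=6000, seed=0):
    """Accelerated randomized coordinate descent (APPROX-style, uniform
    sampling) for f(x) = 0.5 x^T A x - b^T x with A symmetric PD."""
    rng = np.random.default_rng(seed)
    n = len(b)
    L = np.diag(A)                       # coordinate smoothness: L_i = A_ii
    x, z = np.zeros(n), np.zeros(n)
    theta = 1.0 / n
    for _ in range(iters):
        y = (1 - theta) * x + theta * z  # momentum combination
        i = rng.integers(n)              # sample a coordinate uniformly
        g = A[i] @ y - b[i]              # i-th partial derivative at y
        dz = -g / (n * theta * L[i])
        z = z.copy(); z[i] += dz         # auxiliary-sequence update
        x = y; x[i] += n * theta * dz    # primal update touches one coord
        theta = (np.sqrt(theta**4 + 4 * theta**2) - theta**2) / 2
    return x
```

Note that only one coordinate of z and x changes per iteration; the full-vector computation of y above is written out for clarity, and avoiding exactly this overhead is the implementation question the paper addresses.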
In this paper, we introduce a new framework for approximately solving flow problems in capacitated, undirected graphs and apply it to provide asymptotically faster algorithms for the maximum s-t flow and maximum concurrent multicommodity flow problems. For graphs with n vertices and m edges, it allows us to find an ε-approximate maximum s-t flow in time O(m^{1+o(1)} ε^{−2}), improving on the previous best bound of O(mn^{1/3} poly(ε^{−1})). Applying the same framework in the multicommodity setting solves a maximum concurrent multicommodity flow problem with k commodities in O(m^{1+o(1)} ε^{−2} k^2) time. Our algorithms utilize several new technical tools that we believe may be of independent interest:

• We give a non-Euclidean generalization of gradient descent and provide bounds on its performance. Using this, we show how to reduce approximate maximum flow and maximum concurrent flow to oblivious routing.

• We define and provide an efficient construction of a new type of flow sparsifier. Previous sparsifier constructions approximately preserved the size of cuts and, by duality, the value of the maximum flows as well. However, they did not provide any direct way to route flows in the sparsifier G′ back in the original graph G, leading to a longstanding gap between the efficacy of sparsification on flow and cut problems. We ameliorate this by constructing a sparsifier G′ that can be embedded (very efficiently) into G with low congestion, allowing one to transfer flows from G′ back to G.

• We give the first almost-linear-time construction of an O(m^{o(1)})-competitive oblivious routing scheme. No previous such algorithm ran in time better than Ω(mn). By reducing the running time to almost-linear, our work provides a powerful new primitive for constructing very fast graph algorithms.

The interested reader is referred to the full version of the paper [8] for a more complete treatment of these results.
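The non-Euclidean gradient descent of the first bullet generalizes the method beyond ℓ2 geometry. As a hedged illustration of the underlying idea (not the paper's algorithm), here is its best-known special case, entropic mirror descent over the probability simplex, where the gradient step is taken in the ℓ1/entropy geometry rather than the Euclidean one; the function name and parameters are illustrative.

```python
import numpy as np

def mirror_descent_simplex(grad, n, eta, iters):
    """Entropic mirror descent over the probability simplex: the
    'gradient step' is multiplicative, i.e. taken in the l1/entropy
    geometry instead of the Euclidean one."""
    x = np.full(n, 1.0 / n)              # uniform starting point
    avg = np.zeros(n)
    for _ in range(iters):
        x = x * np.exp(-eta * grad(x))   # mirror (multiplicative) step
        x /= x.sum()                     # Bregman projection onto simplex
        avg += x
    return avg / iters                   # averaged iterate
```

For a linear objective f(x) = ⟨c, x⟩ this reduces to multiplicative weights, and the iterates concentrate on the smallest coordinate of c.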
We consider the adversarial convex bandit problem, and we build the first poly(T)-time algorithm with poly(n)√T regret for this problem. To do so we introduce three new ideas in the derivative-free optimization literature: (i) kernel methods, (ii) a generalization of Bernoulli convolutions, and (iii) a new annealing schedule for exponential weights (with increasing learning rate). The basic version of our algorithm achieves O(n^{9.5}√T) regret, and we show that a simple variant of this algorithm can be run in poly(n log(T)) time per step at the cost of an additional poly(n) T^{o(1)} factor in the regret. These results improve upon the O(n^{11}√T)-regret and exp(poly(T))-time result of the first two authors, and the log(T)^{poly(n)}√T-regret and log(T)^{poly(n)}-time result of Hazan and Li. Furthermore, we conjecture that another variant of the algorithm could achieve O(n^{1.5}√T) regret, and moreover that this regret is unimprovable (the current best lower bound is Ω(n√T), achieved with linear functions). For the simpler situation of zeroth-order stochastic convex optimization, this corresponds to the conjecture that the optimal query complexity is of order n^3/ε^2.
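The exponential-weights scheme mentioned in (iii) builds on the standard Hedge update. Below is a minimal sketch of that primitive with a fixed learning rate; the paper's contribution, an increasing-rate annealing schedule and its combination with kernels, is not reproduced here, and the function name is illustrative.

```python
import numpy as np

def hedge(losses, eta):
    """Exponential weights (Hedge) over K experts given a T x K loss
    matrix with entries in [0, 1]; returns the algorithm's total loss."""
    T, K = losses.shape
    w = np.ones(K)
    total = 0.0
    for t in range(T):
        p = w / w.sum()                # play the normalized weights
        total += float(p @ losses[t])  # expected loss this round
        w *= np.exp(-eta * losses[t])  # exponential down-weighting
    return total
```

With learning rate η = √(8 ln K / T), the classical analysis gives regret at most √(T ln K / 2) against the best fixed expert.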
In this paper, we consider the following inverse maintenance problem: given A ∈ R^{n×d} and a number of rounds r, at round k we receive an n×n diagonal matrix D^{(k)} and we wish to maintain an efficient linear system solver for A^⊤ D^{(k)} A under the assumption that D^{(k)} does not change too rapidly. This inverse maintenance problem is the computational bottleneck in solving multiple optimization problems. We show how to solve this problem with Õ(nnz(A) + d^ω) preprocessing time and amortized Õ(nnz(A) + d^2) time per round, improving upon previous running times. Consequently, we obtain the fastest known running times for solving multiple problems including linear programming and computing a rounding of a polytope. In particular, given a feasible point in a linear program with n variables, d constraints, and constraint matrix A ∈ R^{d×n}, we show how to solve the linear program in time Õ((nnz(A) + d^2)√d log(ε^{−1})). We achieve our results through a novel combination of classic numerical techniques of low-rank update, preconditioning, and fast matrix multiplication, as well as recent work on subspace embeddings and spectral sparsification, that we hope will be of independent interest.
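Of the classic tools listed, the low-rank update is the easiest to illustrate in isolation: the Sherman–Morrison formula refreshes an inverse after a rank-one change in O(n²) time instead of recomputing it from scratch. This is a standalone sketch of that textbook identity, not the paper's amortized maintenance scheme.

```python
import numpy as np

def sherman_morrison(Ainv, u, v):
    """Given Ainv = A^{-1}, return (A + u v^T)^{-1} in O(n^2) time via
    the Sherman-Morrison formula (valid when 1 + v^T A^{-1} u != 0)."""
    Au = Ainv @ u
    vA = v @ Ainv
    return Ainv - np.outer(Au, vA) / (1.0 + v @ Au)
```

Repeatedly applying such updates avoids the O(n^ω) cost of a fresh inversion each round, which is the basic reason inverse maintenance can beat recomputation when the matrix changes slowly.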
We propose a new method for unconstrained optimization of a smooth and strongly convex function, which attains the optimal rate of convergence of Nesterov's accelerated gradient descent. The new algorithm has a simple geometric interpretation, loosely inspired by the ellipsoid method. We provide some numerical evidence that the new method can be superior to Nesterov's accelerated gradient descent.
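For reference, the baseline being matched is Nesterov's accelerated gradient descent for an L-smooth, μ-strongly convex f; its standard constant-momentum form is sketched below (the paper's geometric method itself is not reproduced here, and the function name and parameters are illustrative).

```python
import numpy as np

def nesterov_agd(grad, x0, L, mu, iters):
    """Nesterov's accelerated gradient descent for an L-smooth,
    mu-strongly convex function, with constant momentum coefficient."""
    kappa = L / mu                                   # condition number
    beta = (np.sqrt(kappa) - 1) / (np.sqrt(kappa) + 1)
    x = x0.copy()
    y = x0.copy()
    for _ in range(iters):
        x_next = y - grad(y) / L                     # gradient step from y
        y = x_next + beta * (x_next - x)             # momentum extrapolation
        x = x_next
    return x
```

This scheme attains the optimal (1 − 1/√κ)-per-step linear rate, so the error shrinks by a constant factor every √κ iterations rather than every κ as in plain gradient descent.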