We give improved algorithms for the ℓp-regression problem, min_x ‖x‖_p such that Ax = b, for all p ∈ (1, 2) ∪ (2, ∞). Our algorithms obtain a high-accuracy solution in Õ_p(m^{1/3}) iterations, where each iteration requires solving an m × m linear system, with m being the dimension of the ambient space. Incorporating a procedure for maintaining an approximate inverse of the linear systems that we need to solve at each iteration, we give algorithms for solving ℓp-regression to 1/poly(n) accuracy that run in time Õ_p(m^{max{ω, 7/3}}), where ω is the matrix multiplication constant. For the current best value of ω ≈ 2.37, this means that we can solve ℓp-regression as fast as ℓ2-regression, for all constant p bounded away from 1. Our algorithms can be combined with nearly-linear-time solvers for linear systems in graph Laplacians to give minimum ℓp-norm flow / voltage solutions to 1/poly(n) accuracy on an undirected graph. This paper was published at SODA 2019 [Adi+].
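To make the "each iteration solves a weighted linear system" structure concrete, here is a minimal, self-contained sketch of a classical damped IRLS (iteratively reweighted least squares) loop for min ‖x‖_p subject to Ax = b with p > 2. This is only an illustration of the iteration shape, not the papers' refinement scheme; the function names, the damping constant, and the tiny dense solver are all illustrative choices.

```python
# Damped IRLS sketch for min ||x||_p s.t. Ax = b, p > 2 (illustrative only).
# Each iteration solves the weighted least-squares problem
#   min sum_i w_i x_i^2  s.t.  Ax = b,   with w_i = |x_i|^(p-2),
# whose closed form is x = W^{-1} A^T (A W^{-1} A^T)^{-1} b,
# i.e. one small (m x m) linear solve per iteration.

def solve(M, v):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(v)
    M = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(n):
            if r != c and M[c][c] != 0:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * s for a, s in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def irls_lp(A, b, p, iters=200, damping=0.5):
    m, n = len(A), len(A[0])
    x, w = None, [1.0] * n        # w = 1 gives the l2-minimal start
    for _ in range(iters):
        # gram = A W^{-1} A^T : the m x m system solved each iteration
        gram = [[sum(A[i][k] * A[j][k] / w[k] for k in range(n))
                 for j in range(m)] for i in range(m)]
        lam = solve(gram, b)
        x_new = [sum(A[i][k] * lam[i] for i in range(m)) / w[k]
                 for k in range(n)]
        # damping keeps the p > 2 iteration from oscillating
        x = x_new if x is None else [damping * xo + (1 - damping) * xn
                                     for xo, xn in zip(x, x_new)]
        w = [max(abs(xi), 1e-12) ** (p - 2) for xi in x]
    return x

# toy instance: min x1^4 + x2^4 s.t. x1 + 2*x2 = 3
x = irls_lp([[1.0, 2.0]], [3.0], p=4)
```

On this instance the optimality conditions give x2 = 2^{1/3} x1, so the iterate should approach (3/(1 + 2^{4/3}), 2^{1/3} · 3/(1 + 2^{4/3})). Plain IRLS without damping oscillates for p > 2, which is one reason the papers' iterative refinement machinery is needed to get provable, fast convergence.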
We present faster high-accuracy algorithms for computing ℓp-norm minimizing flows. On a graph with m edges, our algorithm can compute a (1 + 1/poly(m))-approximate unweighted ℓp-norm minimizing flow with pm^{1 + 1/(p−1) + o(1)} operations, for any p ≥ 2, giving the best bound for all p ≳ 5.24. Combined with the algorithm from the work of Adil et al. (SODA '19), we can now compute such flows for any 2 ≤ p ≤ m^{o(1)} in time at most O(m^{1.24}). In comparison, the previous best running time was Ω(m^{1.33}) for large constant p. For p ∼ δ^{−1} log m, our algorithm computes a (1 + δ)-approximate maximum flow on undirected graphs using m^{1+o(1)} δ^{−1} operations, matching the current best bound, albeit only for unit-capacity graphs. We also give an algorithm for solving general ℓp-norm regression problems for large p. Our algorithm makes pm^{1/3 + o(1)} log²(1/ε) calls to a linear solver. This gives the first high-accuracy algorithm for computing weighted ℓp-norm minimizing flows that runs in time o(m^{1.5}) for some p = m^{Ω(1)}. Our key technical contribution is to show that the smoothed ℓp-norm problems introduced by Adil et al. are interreducible for different values of p. No such reduction is known for standard ℓp-norm problems.

Adil et al. introduced an iterative refinement scheme for ℓp-norm regression, giving a running time of O(p^{O(p)} · m^{(p−2)/(3p−2) + 1}) ≤ O(p^{O(p)} · m^{4/3}). Building on the work of Adil et al., Kyng et al. [Kyn+19] designed an algorithm for computing unweighted ℓp-norm minimizing flows.

Approximating Max-Flow. For p ≥ log m, ℓp norms approximate ℓ∞, and hence the above algorithm returns an approximate maximum flow. For p = Θ(δ^{−1} log m), this gives an m^{1+o(1)} δ^{−1}-operations algorithm for computing a (1 + δ)-approximation to the maximum-flow problem on unit-capacity graphs.

Corollary 1.2.
Given an undirected graph G with m edges and unit capacities, a demand vector d, and δ > 0, we can compute a flow f that satisfies the demands, i.e., B⊤f = d, and whose ℓ∞ norm (congestion) is within a (1 + δ) factor of optimal, using m^{1+o(1)} δ^{−1} operations.

This gives another approach for approximating maximum flow with a δ^{−1} dependence on the approximation, matching that achieved in the recent works of Sherman [She17] and Sidford-Tian [ST18], albeit only for unit-capacity graphs, and with an m^{o(1)} factor instead of poly(log m). To compute max-flow essentially exactly on unit-capacity graphs, one needs to compute p-norm minimizing flows for p ≈ m.
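The step from ℓp to ℓ∞ above rests on the standard norm sandwich ‖x‖_∞ ≤ ‖x‖_p ≤ m^{1/p} ‖x‖_∞ for an m-dimensional vector: taking p = log m makes the distortion factor m^{1/p} = e, a constant, which is why ℓp-minimizing flows at p ≈ δ^{−1} log m approximate the ℓ∞-minimizing (max-flow) solution. A quick numerical check of the sandwich (the test vector is arbitrary):

```python
import math

def lp_norm(x, p):
    return sum(abs(v) ** p for v in x) ** (1.0 / p)

m = 1000
x = [math.sin(i + 1) for i in range(m)]   # arbitrary m-dimensional vector
p = math.log(m)                            # p = log m  (about 6.9 here)
linf = max(abs(v) for v in x)

# sandwich: ||x||_inf <= ||x||_p <= m^(1/p) * ||x||_inf,
# and m^(1/log m) = e, a constant independent of m
assert linf <= lp_norm(x, p) <= math.e * linf
```

Pushing δ down to ~1/m (enough to round a (1 + δ)-approximate unit-capacity flow to an exact one) forces p up to roughly m, which is the sense of the final sentence of the corollary's discussion.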
In this work, we present new, simple, and optimal algorithms for solving the variational inequality (VI) problem for pth-order smooth, monotone operators, a problem that generalizes convex optimization and saddle-point problems. Recent works (Bullins and Lai (2020), Lin and Jordan (2021), Jiang and Mokhtari (2022)) present methods that achieve a rate of O(ε^{−2/(p+1)}) for p ≥ 1, extending results of Nemirovski (2004) and Monteiro and Svaiter (2012) for p = 1, 2. A drawback of these approaches, however, is their reliance on a line-search scheme. We provide the first pth-order method that achieves a rate of O(ε^{−2/(p+1)}) without relying on a line-search routine, thereby improving upon previous rates by a logarithmic factor. Building on the Mirror Prox method of Nemirovski (2004), our algorithm works even in the constrained, non-Euclidean setting. Furthermore, we prove the optimality of our algorithm by constructing matching lower bounds. These are the first lower bounds for smooth monotone variational inequalities (MVIs) beyond convex optimization for p > 1. This establishes a separation between solving smooth MVIs and smooth convex optimization, and settles the oracle complexity of solving pth-order smooth MVIs.
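For intuition about the p = 1 base case this work generalizes, here is a minimal sketch of Mirror Prox in its simplest Euclidean form (the extragradient method), run on the monotone operator F(x, y) = (y, −x) that arises from the bilinear saddle point min_x max_y x·y. The step size, starting point, and operator are illustrative choices, not the paper's setting (which is constrained, non-Euclidean, and higher-order):

```python
# Extragradient (Euclidean Mirror Prox, p = 1) on the monotone operator
# F(x, y) = (y, -x) from the saddle point min_x max_y x*y.
# Illustrative sketch: operator, step size, and iterate count are arbitrary.

def extragradient(z0, eta=0.5, iters=100):
    x, y = z0
    for _ in range(iters):
        # extrapolation step: w = z - eta * F(z)
        wx, wy = x - eta * y, y + eta * x
        # update step, using the operator AT the extrapolated point w
        x, y = x - eta * wy, y + eta * wx
    return x, y

x, y = extragradient((1.0, 1.0))
# iterates contract toward the saddle point (0, 0); plain simultaneous
# gradient descent-ascent on the same problem spirals outward and diverges
```

The two-step structure (evaluate the operator at an extrapolated point, then update) is what distinguishes Mirror Prox from plain gradient descent-ascent, and it is this structure that the higher-order (p > 1) methods discussed above extend.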