In recent years, significant progress has been made in explaining the apparent hardness of improving upon the naive solutions for many fundamental polynomially solvable problems. This progress has come in the form of conditional lower bounds: reductions from a problem assumed to be hard, such as 3SUM, All-Pairs Shortest Paths, SAT, or Orthogonal Vectors. In the (min, +)-convolution problem, the goal is to compute the sequence c defined by c[k] = min_{i+j=k} (a[i] + b[j]) for two input sequences a and b of length n. This can easily be done in O(n^2) time, but no O(n^(2−ε)) algorithm is known for any ε > 0. In this paper, we undertake a systematic study of the (min, +)-convolution problem as a hardness assumption. First, we establish the equivalence of this problem to a group of other problems, including variants of the classic knapsack problem and problems related to subadditive sequences. The (min, +)-convolution problem has been used as a building block in algorithms for many problems, notably problems in stringology, and it has also appeared as an ad hoc hardness assumption. Second, we investigate some of these connections and provide new reductions and other results. We also explain why replacing this assumption with SETH might not be possible for some problems.
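The naive quadratic algorithm mentioned above can be sketched as follows (a minimal reference implementation, assuming the standard definition c[k] = min_{i+j=k} (a[i] + b[j])):

```python
def min_plus_convolution(a, b):
    """Naive O(n^2) (min, +)-convolution of two length-n sequences."""
    n = len(a)
    assert len(b) == n
    c = [float("inf")] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            # Each pair (i, j) with i + j = k is a candidate for c[k].
            s = a[i] + b[j]
            if s < c[i + j]:
                c[i + j] = s
    return c
```

Beating these two nested loops by a polynomial factor is exactly what the hardness assumption rules out.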
The subject of this paper is the time complexity of approximating Knapsack, Subset Sum, Partition, and some other related problems. The main result is an O(n + 1/ε^(5/3))-time randomized FPTAS for Partition, which is derived from a certain relaxed form of a randomized FPTAS for Subset Sum. To the best of our knowledge, this is the first NP-hard problem shown to admit a subquadratic-time approximation scheme, i.e., one with time complexity O((n + 1/ε)^(2−δ)) for some δ > 0. To put these developments in context, note that a quadratic FPTAS for Partition has been known for 40 years. Our main contribution lies in designing a mechanism that reduces an instance of Subset Sum to several simpler instances, each with some special structure, and keeps track of the interactions between them. This allows us to combine techniques from approximation algorithms, pseudopolynomial algorithms, and additive combinatorics. We also prove several related results; notably, we improve the approximation schemes for 3SUM, (min, +)-convolution, and Tree Sparsity. Finally, we argue why breaking the quadratic barrier for approximate Knapsack is unlikely, by giving an Ω((n + 1/ε)^(2−o(1))) conditional lower bound.
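For context, the kind of approximation scheme being improved can be illustrated by the classic trimmed-list FPTAS for Subset Sum (an Ibarra–Kim/CLRS-style textbook sketch, not the subquadratic scheme of this paper): maintain a sorted list of achievable sums at most t, and repeatedly discard sums that are within a (1 + ε/(2n)) factor of a kept one.

```python
def approx_subset_sum(items, t, eps):
    """Classic trimmed-list FPTAS: returns a sum >= (1 - eps) * OPT,
    where OPT is the largest subset sum not exceeding t."""
    n = len(items)
    delta = eps / (2 * n)  # per-step trimming tolerance
    sums = [0]
    for x in items:
        # Merge the old list with all sums extended by x (capped at t).
        merged = sorted(set(sums + [s + x for s in sums if s + x <= t]))
        # Trim: keep a sum only if it exceeds the last kept one by a
        # (1 + delta) factor, so the list stays polynomially short.
        trimmed = [merged[0]]
        for s in merged[1:]:
            if s > trimmed[-1] * (1 + delta):
                trimmed.append(s)
        sums = trimmed
    return max(sums)
```

The trimming step is what makes the running time polynomial in n and 1/ε rather than in t.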
In this paper, we show a construction of locality-sensitive hash functions without false negatives, i.e., functions that ensure a collision for every pair of points within a given radius R in d-dimensional space equipped with the ℓ_p norm, for p ∈ [1, ∞]. Furthermore, we show how to use these hash functions to solve the c-approximate nearest neighbor search problem without false negatives. Namely, if there is a point at distance R, we will certainly report it, and points at distance greater than cR will not be reported, for c = Ω(max(√d, d^(1−1/p))). The constructed algorithms work:
• with preprocessing time O(n log n) and sublinear expected query time;
• with preprocessing time O(poly(n)) and expected query time O(log n).
Our paper reports progress on answering the open problem presented by Pagh [8], who considered nearest neighbor search without false negatives for the Hamming distance.
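The "no false negatives" guarantee can be illustrated in one dimension with a toy construction (our own illustration, not the hash family of this paper): two grids of cell width 2R, offset from each other by R, have the property that any two points within distance R land in the same cell of at least one grid.

```python
def grid_hashes(x, R):
    """Hash a 1-D point into two shifted grids of cell width 2R.
    Returns the pair of cell indices (grid offset 0, grid offset R)."""
    return (int(x // (2 * R)), int((x + R) // (2 * R)))

def certainly_collide(x, y, R):
    """True whenever x and y share a cell in at least one of the two grids.
    If |x - y| <= R, this is guaranteed: a pair split by a boundary of one
    grid lies strictly inside a single cell of the other."""
    hx, hy = grid_hashes(x, R), grid_hashes(y, R)
    return hx[0] == hy[0] or hx[1] == hy[1]
```

A query then probes both grids, so a near point is never missed; the price, as in the paper's setting, is that some far points may also collide.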
Zwick's (1+ε)-approximation algorithm for the All-Pairs Shortest Paths (APSP) problem runs in time O((n^ω/ε) log W), where ω ≤ 2.373 is the exponent of matrix multiplication and W denotes the largest weight. This can be used to approximate several graph characteristics, including the diameter, radius, median, minimum-weight triangle, and minimum-weight cycle, in the same time bound. Since Zwick's algorithm uses the scaling technique, it has a factor log W in the running time. In this paper, we study whether APSP and related problems admit approximation schemes avoiding the scaling technique. That is, the number of arithmetic operations should be independent of W; such algorithms are called strongly polynomial. Our main results are as follows.

Claim 5.6. We have d(T_{r,b}[2k], T_{r,b}[2k+1]) > 1/ε for any level r, index k, and b ∈ {1, 2}.

Proof. By the construction of T_{r,1} and T_{r,2}, the chunks T_{r,b}[2k] and T_{r,b}[2k+1] correspond to chunks T_r[2k′] and T_r[2k′+3] for some k′. The statement now follows from Claim 5.4.

The following analogue of Claim 5.5 is immediate.

Claim 5.7. For any x, y ∈ Z with x < y and d(x, y) > 1/ε, there exist a level r, an index k, and b ∈ {1, 2} such that x ∈ T_{r,b}[2k−1] and y ∈ T_{r,b}[2k].

Proof. Consecutive chunks T_r[2k−1] and T_r[2k] are either both added to T_{r,1} or both added to T_{r,2}. The statement thus follows from Claim 5.5.
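As a point of reference for what Zwick's algorithm accelerates, exact APSP can be computed by repeated squaring of the (min, +) matrix product in O(n^3 log n) arithmetic operations (a minimal exact sketch, not Zwick's approximation; his algorithm replaces this product with fast scaled matrix multiplication):

```python
INF = float("inf")

def min_plus_product(A, B):
    """(min, +) product: C[i][j] = min_k (A[i][k] + B[k][j])."""
    n = len(A)
    return [[min(A[i][k] + B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def apsp(W):
    """All-pairs shortest paths by repeated squaring of the weight
    matrix under the (min, +) product; after ceil(log2(n-1)) squarings,
    D covers all simple shortest paths (no negative cycles assumed)."""
    n = len(W)
    D, length = W, 1
    while length < n - 1:
        D = min_plus_product(D, D)
        length *= 2
    return D
```

Note that the entries of D can grow with W, which is precisely why avoiding the scaling technique, and with it the log W factor, is the question studied here.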