Graphs with bounded highway dimension were introduced by Abraham et al. [SODA 2010] as a model of transportation networks. We show that any such graph can be embedded into a distribution over bounded-treewidth graphs with arbitrarily small distortion. More concretely, given a weighted graph G = (V, E) of constant highway dimension, we show how to randomly compute a weighted graph H = (V, E′) that distorts shortest-path distances of G by at most a 1 + ε factor in expectation, and whose treewidth is polylogarithmic in the aspect ratio of G. Our probabilistic embedding implies quasi-polynomial-time approximation schemes for a number of optimization problems that naturally arise in transportation networks, including Travelling Salesman, Steiner Tree, and Facility Location.

To construct our embedding for low highway dimension graphs we extend Talwar's [STOC 2004] embedding of low doubling dimension metrics into bounded-treewidth graphs, which generalizes known results for Euclidean metrics. We add several non-trivial ingredients to Talwar's techniques, and in particular thoroughly analyse the structure of low highway dimension graphs. Thus we demonstrate that the geometric toolkit used for Euclidean metrics extends beyond the class of low doubling metrics.

In the following formal definition, if dist(u, v) denotes the shortest-path distance between vertices u and v, we let B_r(v) = {u ∈ V | dist(u, v) ≤ r} be the ball of radius r centred at v. We also say that a path P lies inside B_r(v) if all its vertices lie inside B_r(v).

Definition 1.1. The highway dimension of a graph G is the smallest integer k such that, for some universal constant c ≥ 4, for every r ∈ R+ and every ball B_cr(v) of radius cr, there are at most k vertices in B_cr(v) hitting all shortest paths of length more than r that lie in B_cr(v).

Rather than working with the above definition directly, we often consider the closely related notion of shortest path covers (also introduced in [1]).

Definition 1.2.
For a graph G and r ∈ R+, a shortest path cover spc(r) ⊆ V is a set of hubs that intersects all shortest paths of G with length in (r, cr/2]. Such a cover is called locally s-sparse for scale r if no ball of radius cr/2 contains more than s vertices from spc(r).
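To make Definition 1.2 concrete, here is a minimal Python sketch (all names hypothetical, not from the paper) that checks whether a given hub set covers every shortest path with length in (r, cr/2] on a toy adjacency-dict graph. For simplicity it inspects one canonical shortest path per vertex pair, whereas the definition quantifies over all shortest paths:

```python
import heapq

def dijkstra(adj, s):
    # Standard Dijkstra from source s; returns distances and one predecessor
    # per vertex, which lets us recover a single canonical shortest path.
    dist, pred, pq = {s: 0}, {s: None}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u].items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v], pred[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return dist, pred

def path_to(pred, v):
    # Reconstruct the canonical shortest path ending at v.
    path = []
    while v is not None:
        path.append(v)
        v = pred[v]
    return path[::-1]

def is_shortest_path_cover(adj, hubs, r, c=4.0):
    # Definition 1.2 (for canonical paths only): every shortest path of
    # length in (r, c*r/2] must contain at least one hub.
    for s in adj:
        dist, pred = dijkstra(adj, s)
        for t, d in dist.items():
            if r < d <= c * r / 2 and not hubs & set(path_to(pred, t)):
                return False
    return True
```

On a unit-weight path graph a–b–c–d with r = 1 and c = 4, the paths that must be hit are those of length 2 (a–c and b–d), so {b} is a cover while {d} is not.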
We study the k-BALANCED PARTITIONING problem, in which the vertices of a graph are to be partitioned into k sets of size at most n/k while minimising the cut size, i.e. the number of edges connecting vertices in different sets. The problem is well studied for general graphs, for which it cannot be approximated within any factor in polynomial time. However, little is known about restricted graph classes. We show that for trees k-BALANCED PARTITIONING remains surprisingly hard. In particular, approximating the cut size is APX-hard even if the maximum degree of the tree is constant. If instead the diameter of the tree is bounded by a constant, we show that it is NP-hard to approximate the cut size within n^c for any constant c < 1. In the face of these hardness results, we show that allowing near-balanced solutions, in which there are at most (1 + ε)·n/k vertices in any of the k sets, admits a PTAS for trees. Remarkably, the computed cut size is no larger than that of an optimal balanced solution. In the final section of our paper, we harness results on embedding graph metrics into tree metrics to extend our PTAS for trees to general graphs. In addition to being conceptually simpler and easier to analyse, our scheme improves the best known factor on the cut size of near-balanced solutions from O(log^1.5(n)/ε^2) [Andreev and Räcke TCS 2006] to O(log n), for weighted graphs. This also settles a question posed by Andreev and Räcke of whether an algorithm exists with approximation guarantees on the cut size independent of ε.
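The two quantities the abstract trades off can be stated in a few lines of Python. The helpers below (hypothetical names, not from the paper) compute the cut size of a partition and check the near-balance condition of at most (1 + ε)·n/k vertices per set:

```python
from collections import Counter

def cut_size(edges, part):
    # Cut size: number of edges whose endpoints lie in different sets.
    # `part` maps each vertex to the index of its set.
    return sum(1 for u, v in edges if part[u] != part[v])

def is_near_balanced(part, k, n, eps=0.0):
    # Near-balance: each of the k sets holds at most (1 + eps) * n / k
    # vertices; eps = 0 recovers the exactly balanced requirement.
    sizes = Counter(part.values())
    return all(s <= (1 + eps) * n / k for s in sizes.values())
```

For example, splitting the path 0–1–2–3 into {0, 1} and {2, 3} cuts exactly one edge and is balanced for k = 2.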
In this paper we study the hardness of the k-Center problem on inputs that model transportation networks. In this problem, a graph G = (V, E) with edge lengths and an integer k are given, and a center set C ⊆ V with |C| ≤ k needs to be chosen. The aim is to minimize the maximum distance of any vertex in the graph to the closest center. This problem arises in many applications of logistics, and thus it is natural to consider inputs that model transportation networks. Such inputs are often assumed to be planar graphs, low doubling metrics, or bounded highway dimension graphs. For each of these models, parameterized approximation algorithms have been shown to exist. We complement these results by proving that the k-Center problem is W[1]-hard on planar graphs of constant doubling dimension, where the parameter is the combination of the number of centers k, the highway dimension h, and the pathwidth p. Moreover, under the Exponential Time Hypothesis there is no f(k, p, h) · n^o(p + √k + h) time algorithm for any computable function f. Thus it is unlikely that the optimum solution to k-Center can be found efficiently, even when assuming that the input graph conforms to all of the above models of transportation networks at once! Additionally, we give a simple parameterized (1 + ε)-approximation algorithm for inputs of doubling dimension d with runtime (k^k/ε^O(kd)) · n^O(1). This generalizes a previous result, which considered inputs in D-dimensional L_q metrics.
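For intuition about the objective being minimized, here is the classic farthest-point greedy of Gonzalez [1985], a well-known 2-approximation for k-Center in any metric. This is explicitly not the parameterized (1 + ε)-scheme from the abstract; it only illustrates the problem, and all names are ours:

```python
def gonzalez_k_center(dist, points, k):
    # Farthest-point greedy (Gonzalez, 1985): repeatedly add the point
    # farthest from the current center set. Yields a 2-approximation for
    # k-Center in any metric space.
    centers = [points[0]]
    while len(centers) < k:
        far = max(points, key=lambda p: min(dist(p, c) for c in centers))
        centers.append(far)
    return centers

def k_center_cost(dist, points, centers):
    # k-Center objective: maximum distance of any point to its closest center.
    return max(min(dist(p, c) for c in centers) for p in points)
```

On the 1-dimensional instance {0, 1, 8, 9} with k = 2, the greedy opens centers at 0 and 9, achieving cost 1, which here happens to be optimal.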
We consider the classic Facility Location, k-Median, and k-Means problems in metric spaces of doubling dimension d. We give nearly linear-time approximation schemes for each problem. The complexity of our algorithms is 2^((log(1/ε)/ε)^O(d^2)) · n log^4 n + 2^O(d) · n log^9 n, a significant improvement over the state-of-the-art algorithms, which run in time n^((d/ε)^O(d)). Moreover, we show how to extend the techniques used to get the first efficient approximation schemes for the problems of prize-collecting k-Median and k-Means, and efficient bicriteria approximation schemes for k-Median with outliers, k-Means with outliers, and k-Center.

The outlier versions help to handle some noise from the input: the k-Median objective can be dramatically perturbed by the addition of a few distant clients, which must then be discarded.

Our results. We solve this open problem by proposing the first near-linear time algorithms for the k-Median and k-Means problems in metrics of fixed doubling dimension. More precisely, we show the following theorems, where we let f(ε) = (1/ε)^(1/ε).

Theorem 1.1. For any 0 < ε < 1/3, there exists a randomized (1 + ε)-approximation algorithm for k-Median in metrics of doubling dimension d with running time f(ε)^(2^O(d^2)) · n log^4 n + 2^O(d) · n log^9 n and success probability at least 1 − ε.

Theorem 1.2. For any 0 < ε < 1/3, there exists a randomized (1 + ε)-approximation algorithm for k-Means in metrics of doubling dimension d with running time f(ε)^(2^O(d^2)) · n log^5 n + 2^O(d) · n log^9 n and success probability at least 1 − ε.

Our results also extend to the Facility Location problem, in which no bound on the number of opened centers is given, but each center comes with an opening cost. The aim is to minimize the sum of the (first powers of the) distances from each point of the metric to its closest center, in addition to the total opening costs of all used centers.

Theorem 1.3.
For any 0 < ε < 1/3, there exists a randomized (1 + ε)-approximation algorithm for Facility Location in metrics of doubling dimension d with running time f(ε)^(2^O(d^2)) · n + 2^O(d) · n log n and success probability at least 1 − ε.

In all these theorems, we make the common assumption of having access to the distances of the metric in constant time, as, e.g., in [18, 27, 29]. This assumption is discussed in Bartal et al. [9].

Note that the double-exponential dependence on d is unavoidable unless P = NP, since the problems are APX-hard in Euclidean space of dimension d = O(log n). For Euclidean inputs, our algorithms for the k-Means and k-Median problems outperform the ones of Cohen-Addad [15], in particular removing the dependence on k, and the one of Kolliopoulos and Rao [32] when d > 3, by removing the dependence on log^(d+6) n. Interestingly, for k = ω(log^9 n) our algorithm for the k-Means problem is faster than popular heuristics like k-Means++, which runs in time O(nk) in Euclidean space. We note that the success probability can be boosted to 1 − ε^δ by repeating the algorithm δ times and outputting the best solution encountered.

After proving the three theorems above, we will a...
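The three objectives discussed above differ only in how connection costs are aggregated; the following Python helpers (our own illustrative names, not the paper's algorithms) spell this out:

```python
def k_median_cost(dist, clients, centers):
    # k-Median: sum of distances from each client to its closest center.
    return sum(min(dist(p, c) for c in centers) for p in clients)

def k_means_cost(dist, clients, centers):
    # k-Means: sum of squared distances to the closest center.
    return sum(min(dist(p, c) for c in centers) ** 2 for p in clients)

def facility_location_cost(dist, clients, open_facilities, opening_cost):
    # Facility Location: connection cost (first power of distances) plus the
    # opening costs of all used centers; the number of open facilities is
    # not bounded, unlike in k-Median/k-Means.
    conn = sum(min(dist(p, f) for f in open_facilities) for p in clients)
    return conn + sum(opening_cost[f] for f in open_facilities)
```

This also shows why a few distant outlier clients can dramatically perturb the k-Median (and even more so the k-Means) objective, motivating the outlier variants above.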
Parameterization and approximation are two popular ways of coping with NP-hard problems. More recently, the two have also been combined to derive many interesting results. We survey developments in the area both from the algorithmic and hardness perspectives, with emphasis on new techniques and potential future research directions.