Measuring the importance of a node in a network is a major goal in the analysis of social networks, biological systems, transportation networks, etc. Different centrality measures have been proposed to capture the notion of node importance. For example, the center of a graph is a node that minimizes the maximum distance to any other node (the latter distance is the radius of the graph). The median of a graph is a node that minimizes the sum of the distances to all other nodes. Informally, the betweenness centrality of a node w measures the fraction of shortest paths that have w as an intermediate node. Finally, the reach centrality of a node w is the smallest distance r such that any s-t shortest path passing through w has either s or t in the ball of radius r around w. The fastest known algorithms to compute the center and the median of a graph, and to compute the betweenness or reach centrality even of a single node, take roughly cubic time in the number n of nodes in the input graph. It is open whether these problems admit truly subcubic algorithms, i.e., algorithms with running time Õ(n^{3−δ}) for some constant δ > 0. We relate the complexity of the mentioned centrality problems to two classical problems for which no truly subcubic algorithm is known, namely All Pairs Shortest Paths (APSP) and Diameter. We show that Radius, Median, and Betweenness Centrality are equivalent under subcubic reductions to APSP, i.e., that a truly subcubic algorithm for any of these problems implies a truly subcubic algorithm for all of them. We then show that Reach Centrality is equivalent to Diameter under subcubic reductions. The same holds for the problem of approximating Betweenness Centrality within any constant factor. Thus the latter two centrality problems could potentially be solved in truly subcubic time, even if APSP requires essentially cubic time.
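As a small illustration of the first two definitions above, the following sketch (ours, not from the paper) computes the center, radius, and median of a toy unweighted graph via all-pairs BFS. The cubic bottleneck discussed in the abstract concerns weighted graphs, where no truly subcubic all-pairs distance algorithm is known.

```python
# A minimal sketch: center/radius and median of a small unweighted graph,
# computed with one BFS per node in O(n*(n+m)) total time.
from collections import deque

def bfs_dist(adj, s):
    """Distances from s in an unweighted graph given as adjacency lists."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def center_and_median(adj):
    ecc = {}   # eccentricity: maximum distance to any other node
    tot = {}   # sum of distances to all other nodes
    for s in adj:
        d = bfs_dist(adj, s)
        ecc[s] = max(d.values())
        tot[s] = sum(d.values())
    center = min(ecc, key=ecc.get)   # minimizes the maximum distance
    median = min(tot, key=tot.get)   # minimizes the sum of distances
    return center, ecc[center], median, tot[median]

# Example: a path 0-1-2-3-4; node 2 is both center (radius 2) and median.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
print(center_and_median(adj))  # (2, 2, 2, 6)
```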
The Steiner tree problem is one of the most fundamental NP-hard problems: given a weighted undirected graph and a subset of terminal nodes, find a minimum-cost tree spanning the terminals. In a sequence of papers, the approximation ratio for this problem was improved from 2 to 1.55 [Robins and Zelikovsky 2005]. All these algorithms are purely combinatorial. A long-standing open problem is whether there is an LP relaxation of Steiner tree with integrality gap smaller than 2 [Rajagopalan and Vazirani 1999]. In this article we present an LP-based approximation algorithm for Steiner tree with an improved approximation factor. Our algorithm is based on a seemingly novel iterative randomized rounding technique. We consider an LP relaxation of the problem, which is based on the notion of directed components. We sample one component with probability proportional to the value of the associated variable in a fractional solution: the sampled component is contracted and the LP is updated accordingly. We iterate this process until all terminals are connected. Our algorithm delivers a solution of cost at most ln(4) + ε < 1.39 times the cost of an optimal Steiner tree. The algorithm can be derandomized using the method of limited independence. As a by-product of our analysis, we show that the integrality gap of our LP is at most 1.55, hence answering the mentioned open question.
The Steiner tree problem is one of the most fundamental NP-hard problems: given a weighted undirected graph and a subset of terminal nodes, find a minimum-cost tree spanning the terminals. In a sequence of papers, the approximation ratio for this problem was improved from 2 to the current best 1.55 [Robins and Zelikovsky 2005]. All these algorithms are purely combinatorial. A long-standing open problem is whether there is an LP relaxation for Steiner tree with integrality gap smaller than 2 [Rajagopalan and Vazirani 1999]. In this paper we improve the approximation factor for Steiner tree, developing an LP-based approximation algorithm. Our algorithm is based on a seemingly novel iterative randomized rounding technique. We consider a directed-component cut relaxation for the k-restricted Steiner tree problem. We sample one of these components with probability proportional to the value of the associated variable in the optimal fractional solution and contract it. We iterate this process a proper number of times and finally output the sampled components together with a minimum-cost terminal spanning tree in the remaining graph. Our algorithm delivers a solution of cost at most ln(4) times the cost of an optimal k-restricted Steiner tree. This directly implies a ln(4) + ε < 1.39 approximation for Steiner tree. As a byproduct of our analysis, we show that the integrality gap of our LP is at most 1.55, hence answering the mentioned open question.
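The iterative randomized rounding loop described in the two Steiner tree abstracts above can be rendered schematically as follows. This is only a sketch of the sampling pattern: solve_component_lp and contract are hypothetical placeholders for the directed-component LP solver and the contraction step, and nothing here reproduces the actual algorithm's guarantees.

```python
# Schematic sketch of iterative randomized rounding for Steiner tree.
# solve_component_lp is assumed to return pairs (component, value) forming
# a feasible fractional solution to the directed-component relaxation;
# contract merges a component's terminals into a single terminal.
import random

def iterative_randomized_rounding(graph, terminals, solve_component_lp, contract):
    picked = []
    while len(terminals) > 1:
        frac = solve_component_lp(graph, terminals)   # [(component, value), ...]
        comps, vals = zip(*frac)
        # Sample one component with probability proportional to its LP value.
        comp = random.choices(comps, weights=vals, k=1)[0]
        picked.append(comp)
        # Contract the sampled component; the LP is re-solved next round.
        graph, terminals = contract(graph, terminals, comp)
    return picked  # components whose union connects all original terminals
```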
Many polynomial-time solvable combinatorial optimization problems become NP-hard if an additional complicating constraint is added to restrict the set of feasible solutions. In this paper, we consider two such problems, namely maximum-weight matching and maximum-weight matroid intersection with one additional budget constraint. We present the first polynomial-time approximation schemes for these problems. Similarly to other approaches for related problems, our schemes compute two solutions to the Lagrangian relaxation of the problem and patch them together to obtain a near-optimal solution. However, due to the richer combinatorial structure of the problems considered here, standard patching techniques do not apply. To circumvent this problem, we crucially exploit the adjacency relations on the solution polytope and, surprisingly, the solution to an old combinatorial puzzle.
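To make the Lagrangian step concrete, here is a hedged sketch (our illustration, not the paper's algorithm) of how two candidate matchings straddling the budget can be produced by binary search on the multiplier. max_weight_matching is a hypothetical oracle for the unconstrained problem; the paper's actual contribution, the nontrivial patching of the two solutions, is omitted.

```python
# Sketch of the Lagrangian-relaxation step for budgeted matching.
# weight and cost are dicts keyed by edge; max_weight_matching is a
# hypothetical oracle returning a maximum-weight matching (set of edges).
def lagrangian_matchings(edges, weight, cost, budget, max_weight_matching,
                         iters=50):
    """Binary search the multiplier lam; for each lam solve the
    unconstrained problem with penalized weights w(e) - lam * c(e).
    Returns the last feasible/infeasible matchings found (either may be
    None if the search never crosses the budget)."""
    lo, hi = 0.0, max(weight[e] / max(cost[e], 1e-9) for e in edges)
    cheap = expensive = None
    for _ in range(iters):
        lam = (lo + hi) / 2
        penalized = {e: weight[e] - lam * cost[e] for e in edges}
        M = max_weight_matching(edges, penalized)
        if sum(cost[e] for e in M) <= budget:
            cheap, hi = M, lam      # feasible: try a smaller penalty
        else:
            expensive, lo = M, lam  # infeasible: penalize cost more
    return cheap, expensive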
For more than 40 years, Branch & Reduce exponential-time backtracking algorithms have been among the most common tools used for finding exact solutions of NP-hard problems. Despite that, the known ways to analyze such recursive algorithms are still far from producing tight worst-case running time bounds. Motivated by this, we use an approach, which we call "Measure & Conquer", as an attempt to step beyond such limitations. The approach is based on the careful design of a nonstandard measure of the subproblem size; this measure is then used to lower bound the progress made by the algorithm at each branching step. The idea is that a smarter measure may capture behaviors of the algorithm that a standard measure might not be able to exploit, and hence lead to a significantly better worst-case time analysis. To show the potential of Measure & Conquer, we consider two well-studied NP-hard problems: minimum dominating set and maximum independent set. For the first problem, we consider the current best algorithm and prove (thanks to a better measure) a much tighter running time bound for it. For the second problem, we describe a new, simple algorithm and show that its running time is competitive with the current best time bounds, achieved with far more complicated algorithms (and standard analysis). Our examples show that a good choice of the measure, made in the very first stages of exact algorithm design, can have a tremendous impact on the achievable running time bounds.
Davis-Putnam-style exponential-time backtracking algorithms are the most common algorithms used for finding exact solutions of NP-hard problems. The analysis of such recursive algorithms is based on the bounded search tree technique: a measure of the size of the subproblems is defined; this measure is used to lower bound the progress made by the algorithm at each branching step. For the last 30 years the research on exact algorithms has mainly focused on the design of more and more sophisticated algorithms. However, the measures used in the analysis of backtracking algorithms are usually very simple. In this paper we stress that a more careful choice of the measure can lead to a significantly better worst-case time analysis. As an example, we consider the minimum dominating set problem. The currently fastest algorithm for this problem has running time O(2^{0.850n}) on n-node graphs. By measuring the progress of the (same) algorithm in a different way, we refine the time bound to O(2^{0.598n}). A good choice of the measure can provide such a (surprisingly big) improvement; this suggests that the running time of many other exponential-time recursive algorithms is largely overestimated because of a "bad" choice of the measure.
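The core of such an analysis is solving a branching recurrence: a branch whose subcalls decrease the measure by δ_1, ..., δ_k contributes running time O(x^measure), where x is the unique root of Σ_i x^(−δ_i) = 1. The toy snippet below (our illustration, not the papers' algorithms) computes this root numerically and contrasts a standard measure with a Measure & Conquer-style one; the decreases in the second call are illustrative values, not taken from the papers.

```python
# Toy illustration: solving branching recurrences behind such analyses,
# for the classic MIS branching "either discard a max-degree node v,
# or take v into the solution and delete N[v]".

def branching_factor(deltas, tol=1e-9):
    """Smallest x > 1 with sum(x**-d for d in deltas) <= 1 (bisection)."""
    lo, hi = 1.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if sum(mid ** -d for d in deltas) > 1:
            lo = mid    # sum still above 1: need a larger base
        else:
            hi = mid
    return hi

# Standard measure (number of nodes), branching on a degree-3 node:
# the discard branch removes 1 node, the take branch removes 4.
print(branching_factor([1, 4]))      # ~1.3803, i.e. O(1.3803^n)

# A degree-weighted measure can also credit the degree drop of v's
# neighbors; with illustrative decreases (1.6, 4.6) the base shrinks:
print(branching_factor([1.6, 4.6]))  # ~1.2775; if the measure is <= n,
                                     # this yields the bound O(1.2775^n)
```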
We present a simple randomized algorithmic framework for connected facility location problems. The basic idea is as follows: we run a black-box approximation algorithm for the unconnected facility location problem, randomly sample the clients, and open the facilities serving sampled clients in the approximate solution. Via a novel analytical tool, which we term core detouring, we show that this approach significantly improves over the previously best known approximation ratios for several NP-hard network design problems. For example, we reduce the approximation ratio for the connected facility location problem from 8.55 to 4.00, and for the single-sink rent-or-buy problem from 3.55 to 2.92. The mentioned results can be derandomized at the expense of a slightly worse approximation ratio. The versatility of our framework is demonstrated by devising improved approximation algorithms also for other related problems.
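The framework sketched above fits in a few lines; the following is our schematic rendering, with ufl_approx and connect as hypothetical stand-ins for the black-box unconnected facility location algorithm and the step connecting the opened facilities (e.g., an approximate Steiner tree), and p a tunable sampling probability.

```python
# Schematic sketch of the random-sampling framework for connected
# facility location; subroutine names are hypothetical placeholders.
import random

def connected_facility_location(instance, clients, ufl_approx, connect, p):
    facilities, assign = ufl_approx(instance)    # black-box UFL solution
    sampled = [c for c in clients if random.random() < p]
    opened = {assign[c] for c in sampled}        # facilities of sampled clients
    tree = connect(instance, opened)             # connect the open facilities
    return opened, tree
```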