The vertex k-center problem is a classical NP-hard optimization problem with applications to facility location and clustering, among others. This problem consists of finding a subset C ⊆ V of an input graph G = (V, E) such that the distance from the farthest vertex in V to its nearest center in C is minimized, where |C| ≤ k, with k ∈ Z+ given as part of the input. Many heuristics, metaheuristics, approximation algorithms, and exact algorithms have been developed for this problem. This paper presents an analytical study and experimental evaluation of the most representative approximation algorithms for the vertex k-center problem. For each of the algorithms under consideration, and using a common notation, we present proofs of their corresponding approximation guarantees as well as examples of tight instances of such approximation bounds, including a novel tight example for a 3-approximation algorithm. Lastly, we present the results of extensive experiments performed over de facto benchmark data sets for the problem, which include instances of up to 71009 vertices.

INDEX TERMS Approximation algorithms, k-center problem, polynomial time heuristics.

I. INTRODUCTION

Perhaps one of the first center selection problems for which there is a historical record is the following: ''given three points in the plane, find a fourth point such that the sum of its distances to the three points is minimized'' [1]. Given its simplicity, it is hard to establish who first stated this problem. However, it is usually associated with Pierre de Fermat, who posed this question around 1636, and its first recorded solution is attributed to Evangelista Torricelli [1]. An extension of this problem is known as Weber's problem, where the points have an associated cost and the goal is to locate not 1 but k centers [1]. By adding new properties and restrictions to a basic k-center problem, the collection of k-center problems has grown larger over the years.
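Among the approximation algorithms the abstract refers to, Gonzalez's farthest-first traversal is a classical 2-approximation and serves well as an illustration of the problem's structure. The following is a minimal sketch (function names are ours), assuming a symmetric distance matrix that satisfies the triangle inequality:

```python
def farthest_first_centers(dist, k):
    """Greedy farthest-first traversal (Gonzalez-style 2-approximation).

    dist is a symmetric |V| x |V| distance matrix satisfying the
    triangle inequality; returns the indices of k chosen centers.
    """
    n = len(dist)
    centers = [0]  # start from an arbitrary vertex
    # distance from each vertex to its nearest chosen center so far
    nearest = list(dist[0])
    while len(centers) < k:
        # pick the vertex farthest from all current centers
        nxt = max(range(n), key=lambda v: nearest[v])
        centers.append(nxt)
        for v in range(n):
            nearest[v] = min(nearest[v], dist[nxt][v])
    return centers

def covering_radius(dist, centers):
    """Objective value: distance from the farthest vertex to its nearest center."""
    return max(min(dist[v][c] for c in centers) for v in range(len(dist)))
```

For four equally spaced points on a line and k = 2, the greedy choice already attains the optimal covering radius of 1.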
One of the basic center selection problems that most directly gave rise to many other center problems is known as the vertex k-center problem.

The associate editor coordinating the review of this manuscript and approving it for publication was Diego Oliva.
The capacitated vertex k-center problem receives as input a complete weighted graph and a set of capacity constraints. Its goal is to find a set of k centers and an assignment of vertices to those centers that does not violate the capacity constraints, while minimizing the distance from the farthest vertex to its assigned center. The capacitated vertex k-center problem models real situations where a maximum number of clients must be assigned to centers and the travel time or distance from the clients to their assigned center has to be minimized; these centers might be hospitals, schools, or police stations, among many others. The goal of this paper is to explicitly state how the capacitated vertex k-center problem and the minimum capacitated dominating set problem are related. We present an exact algorithm that consists of solving a series of integer programming formulations, each equivalent to the minimum capacitated dominating set problem over a bottleneck input graph. Lastly, we present an empirical evaluation of the proposed algorithm using off-the-shelf optimization software.
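The bottleneck-graph relationship can be illustrated with a brute-force sketch (this is not the paper's integer programming formulation, and all names below are illustrative): the optimal radius is the smallest edge weight r whose bottleneck graph G_r admits a capacitated dominating set of size at most k.

```python
from itertools import combinations

def feasible(adj, caps, centers):
    """Backtracking check: can every vertex be assigned to an adjacent
    chosen center without exceeding any center's capacity?"""
    n = len(adj)
    remaining = {c: caps[c] for c in centers}

    def assign(v):
        if v == n:
            return True
        for c in centers:
            if adj[v][c] and remaining[c] > 0:
                remaining[c] -= 1
                if assign(v + 1):
                    return True
                remaining[c] += 1
        return False

    return assign(0)

def capacitated_k_center(dist, caps, k):
    """Exhaustive search over bottleneck graphs: try each candidate
    radius r in increasing order and return the first one whose
    bottleneck graph admits a feasible set of k centers."""
    n = len(dist)
    radii = sorted({dist[i][j] for i in range(n) for j in range(i + 1, n)})
    for r in radii:
        adj = [[dist[i][j] <= r for j in range(n)] for i in range(n)]
        for centers in combinations(range(n), k):
            if feasible(adj, caps, centers):
                return r, centers
    return None
```

This exponential-time sketch only conveys the reduction; the paper's approach replaces the inner feasibility test with integer programming solved by off-the-shelf software.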
Bad air quality due to free pollutants such as particulate matter (PM), carbon dioxide (CO₂), nitrogen oxides (NOₓ) and volatile organic compounds (VOC) increases the risk of long-term health diseases. The impact of traffic-calming measures on air quality has been studied using specialized equipment at control sites or mounted on cars to monitor pollutant levels. However, this approach suffers from a large number of experimental variables, such as vehicle types, number of monitored vehicles, driver behavior, traffic density, time of day, elapsed monitoring time, road conditions and weather. In this work, we use a cellular automaton and an instantaneous traffic emissions model to capture the effect of speed humps on traffic flow and on the generation of CO₂, NOₓ, VOC and PM pollutants. This approach allows us to study and characterize the effect of many speed humps on a single lane. We found that speed humps significantly promote the generation of pollutants when the number of vehicles on a lane is low. Our results may provide insight into urban planning strategies to reduce the generation of traffic emissions and lower the risk of long-term health diseases.
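To give an idea of the kind of model involved, the Nagel–Schreckenberg automaton is a standard single-lane traffic CA; the abstract does not specify the exact rules used, so the following sketch is only indicative. A speed hump could, for instance, be modeled by capping the maximum speed at specific cells.

```python
import random

def nasch_step(pos, vel, road_len, v_max, p_slow, rng):
    """One synchronous update of the Nagel-Schreckenberg cellular
    automaton on a circular single-lane road of road_len cells.

    pos, vel: per-vehicle cell positions and speeds; p_slow is the
    random-slowdown probability; rng is a seeded random.Random.
    """
    n = len(pos)
    order = sorted(range(n), key=lambda i: pos[i])
    new_pos, new_vel = list(pos), list(vel)
    for idx, i in enumerate(order):
        ahead = order[(idx + 1) % n]
        gap = (pos[ahead] - pos[i] - 1) % road_len  # free cells ahead
        v = min(vel[i] + 1, v_max)                  # 1. acceleration
        v = min(v, gap)                             # 2. braking (no collision)
        if v > 0 and rng.random() < p_slow:
            v -= 1                                  # 3. random slowdown
        new_vel[i] = v
        new_pos[i] = (pos[i] + v) % road_len
    return new_pos, new_vel
```

With p_slow = 0 the dynamics are deterministic, which makes the rule set easy to check by hand on one or two vehicles.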
Abstract. This article presents a randomized algorithm for the problem of scheduling jobs composed of precedence-constrained processes in Grid-type distributed environments. The proposed randomized algorithm is based on a new technique that we have named sliding distributions, which seeks to combine the advantages of deterministic approximation algorithms and Monte Carlo randomized algorithms. The goal is to provide an algorithm that, with high probability, delivers ρ-approximate solutions, while at the same time being able to analyze the extended neighborhood of those solutions to escape local maxima or minima. The article proves that the proposed algorithm is correct and formally characterizes its time complexity. Likewise, the performance of the algorithm is evaluated through a series of simulation-based experiments. The experiments show that the proposed algorithm generally outperforms the state-of-the-art algorithms for Grid scheduling. The performance metrics used are average delay, maximum delay, and Grid utilization. Keywords. Combinatorial optimization, randomized algorithm, Grid systems, task scheduling.
The uniform capacitated vertex k-center problem is an \(\mathcal{NP}\)-hard combinatorial optimization problem that models real situations where k centers can only attend a maximum number of customers, and the travel time or distance from the customers to their assigned center has to be minimized. This paper introduces a polynomial-time constructive heuristic algorithm that exploits the relationship between this problem and the minimum capacitated dominating set problem. The proposed heuristic is based on the one-hop farthest-first heuristic, which has proven effective for the uncapacitated version of the problem. We carried out different empirical evaluations of the proposed heuristic, including an analysis of the effect of a parallel implementation of the algorithm, which significantly improved the running time for relatively large instances.
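The capacitated setting mainly changes the assignment step: once centers are fixed, customers can no longer simply go to their nearest center. The sketch below shows one simple capacity-aware assignment rule (hardest customers choose first); it is only an illustration, not the paper's heuristic, and assumes the uniform capacity is large enough to serve all vertices.

```python
def assign_with_capacities(dist, centers, cap):
    """Greedy capacity-aware assignment: process vertices in order of
    decreasing distance to their nearest center, assigning each to the
    closest center that still has remaining capacity.

    Assumes len(dist) <= len(centers) * cap, so a feasible
    assignment always exists. Returns (assignment, radius).
    """
    n = len(dist)
    remaining = {c: cap for c in centers}
    # farthest-demand-first: hardest vertices choose first
    order = sorted(range(n), key=lambda v: -min(dist[v][c] for c in centers))
    assignment, radius = {}, 0
    for v in order:
        open_centers = [c for c in centers if remaining[c] > 0]
        c = min(open_centers, key=lambda c: dist[v][c])
        remaining[c] -= 1
        assignment[v] = c
        radius = max(radius, dist[v][c])
    return assignment, radius
```

Note that this greedy rule can be suboptimal in general; finding the best assignment for fixed centers is itself a bottleneck matching problem.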
The graph burning problem is an NP-hard combinatorial optimization problem that helps quantify how vulnerable a graph is to contagion. This paper introduces three mathematical formulations of the problem: an integer linear program (ILP) and two constraint satisfaction problems (CSP1 and CSP2). Thanks to off-the-shelf optimization software, these formulations can be solved optimally over arbitrary graphs; this is relevant because the only algorithms designed to date for this problem are approximation algorithms and heuristics, which do not guarantee optimal solutions. We empirically compared the proposed formulations using random graphs and off-the-shelf optimization software. The results show that CSP1 and CSP2 tend to reach optimal solutions in less time than the ILP. Therefore, we executed them over some benchmark graphs of order at most 5908, and the previously best-known solutions for some of these graphs were improved. We draw some empirical observations from the experimental results; for instance, we observe a tendency: the larger a graph's optimal solution, the more difficult it is to find. Finally, the resulting set of optimal solutions might be helpful as a benchmark dataset for the performance evaluation of non-exact algorithms.
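Under the standard definition of graph burning (in round i the fire spreads one hop from every burned vertex and then the i-th source ignites), verifying a candidate burning sequence is straightforward; a minimal checker could look like this:

```python
def burns(adj, sequence):
    """Check whether a burning sequence burns the whole graph.

    adj: adjacency map {vertex: set of neighbours}. In each round the
    fire first spreads one hop from every burned vertex, then the next
    source in the sequence ignites; the sequence succeeds if every
    vertex ends up burned.
    """
    burned = set()
    for source in sequence:
        # spread from already-burned vertices
        burned |= {u for v in burned for u in adj[v]}
        burned.add(source)
    return burned == set(adj)
```

Checkers like this are useful for validating the solutions produced by the ILP and CSP formulations; finding a shortest such sequence is the hard part.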