Abstract. In this survey the following model is considered. We assume that an instance I of a computationally hard optimization problem has been solved, and that the optimal solution of that instance is known. Then a new instance I′ is proposed, obtained by means of a slight perturbation of instance I. How can we exploit the knowledge we have of the solution of instance I to compute an (approximate) solution of instance I′ efficiently? This computation model is called reoptimization and is of practical interest in various circumstances. In this article we first discuss what kind of performance we can expect for specific classes of problems, and then we present some classical optimization problems (i.e., Max Knapsack, Min Steiner Tree, Scheduling) to which this approach has been fruitfully applied. Subsequently, we address vehicle routing problems and show how the reoptimization approach can be used to obtain good approximate solutions efficiently for some of these problems.
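To make the model concrete, here is a minimal sketch of reoptimization for Max Knapsack under one simple perturbation, the arrival of a single new item. The heuristic shown (keep the old optimum, or pack the new item first and refill greedily) is illustrative only; the survey covers reoptimization algorithms with proven approximation guarantees.

```python
# Illustrative sketch of knapsack reoptimization, assuming the perturbed
# instance I' adds one new item to the solved instance I. Items are
# (weight, value) tuples with positive weights.

def reopt_knapsack(old_solution, old_items, new_item, capacity):
    """old_solution: indices of an optimal packing of the old instance I.
    Returns (value, item_indices, uses_new_item) for the better of two
    cheap candidate packings of the perturbed instance I'."""
    # Candidate 1: keep the old optimum and ignore the new item
    # (it remains feasible, since the capacity is unchanged).
    value_keep = sum(old_items[i][1] for i in old_solution)

    # Candidate 2: pack the new item first, then greedily refill the
    # remaining capacity with old items by value/weight ratio.
    remaining = capacity - new_item[0]
    greedy, value_new = [], float("-inf")
    if remaining >= 0:
        order = sorted(range(len(old_items)),
                       key=lambda i: old_items[i][1] / old_items[i][0],
                       reverse=True)
        for i in order:
            w, v = old_items[i]
            if w <= remaining:
                greedy.append(i)
                remaining -= w
        value_new = new_item[1] + sum(old_items[i][1] for i in greedy)

    if value_keep >= value_new:
        return value_keep, set(old_solution), False
    return value_new, set(greedy), True
```

The point of the model is visible here: both candidates are computable in near-linear time by reusing the old optimum, whereas solving I′ from scratch is NP-hard.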
Abstract. Systems in many safety-critical application domains are subject to certification requirements. For any given system, however, it may be the case that only a subset of its functionality is safety-critical and hence subject to certification; the rest of the functionality is not safety-critical and does not need to be certified, or is certified to a lower level of assurance. An algorithm called EDF-VD (Earliest Deadline First with Virtual Deadlines) is described for the scheduling of such mixed-criticality task systems. Analyses of EDF-VD significantly superior to previously known ones are presented, based on metrics such as the processor speedup factor (with respect to which EDF-VD is proved optimal) and utilization bounds.
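A minimal sketch of the EDF-VD idea on a uniprocessor with two criticality levels, assuming implicit-deadline sporadic tasks and the standard utilization-based formulation: high-criticality (HI) tasks receive shortened virtual deadlines, scaled by a factor x, so that if a HI job overruns its LO-level budget there is still slack to meet its true deadline. The test below is the commonly cited sufficient condition; the analyses in the paper are sharper.

```python
# Illustrative EDF-VD schedulability check for dual-criticality,
# implicit-deadline sporadic tasks. Each task is (c_lo, c_hi, period, is_hi).

def edf_vd_test(tasks):
    """Returns (schedulable, x), where x is the factor by which HI tasks'
    virtual deadlines are scaled (HI task i runs under EDF with deadline
    x * period_i while the system is in LO mode)."""
    u_lo_lo = sum(c / t for c, _, t, hi in tasks if not hi)   # LO tasks, LO budgets
    u_hi_lo = sum(c / t for c, _, t, hi in tasks if hi)       # HI tasks, LO budgets
    u_hi_hi = sum(c / t for _, c, t, hi in tasks if hi)       # HI tasks, HI budgets

    if u_lo_lo + u_hi_hi <= 1.0:
        return True, 1.0  # plain EDF suffices; no deadline shrinking needed

    if u_lo_lo >= 1.0:
        return False, None
    x = u_hi_lo / (1.0 - u_lo_lo)        # virtual-deadline scaling factor
    if x * u_lo_lo + u_hi_hi <= 1.0:     # sufficient condition for EDF-VD
        return True, x
    return False, None
```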
We present two space-bounded random sampling algorithms that compute an approximation of the number of triangles in an undirected graph given as a stream of edges. Our first algorithm does not make any assumptions on the order of edges in the stream. It uses space inversely related to the ratio between the number of triangles and the number of triples with at least one edge in the induced subgraph, and constant expected update time per edge. Our second algorithm is designed for incidence streams (all edges incident to the same vertex appear consecutively). It uses space inversely related to the ratio between the number of triangles and the number of length-2 paths in the graph, and expected update time O(log |V| · (1 + s · |V|/|E|)), where s is the space requirement of the algorithm. These results significantly improve over previous work [20,8]. Since the space complexity depends only on the structure of the input graph and not on the number of nodes, our algorithms scale very well with increasing graph size, and so they provide a basic tool for analyzing the structure of large graphs. They have many applications, for example in the discovery of Web communities, the computation of clustering and transitivity coefficients, and the discovery of frequent patterns in large graphs. We have implemented both algorithms and evaluated their performance on networks from different application domains. The sizes of the considered graphs varied from about 8,000 nodes and 40,000 edges to 135 million nodes and more than 1 billion edges. For both algorithms we ran experiments with parameter s = 1,000, 10,000, 100,000, and 1,000,000 to evaluate running time and approximation guarantee. Both algorithms proved time-efficient for these sample sizes. The approximation quality of the first algorithm varied significantly: even for s = 1,000,000 we had more than 10% deviation on more than half of the instances. The second algorithm performed much better: even for s = 10,000 we had an average deviation of less than 6% (taken over all but the largest instance, for which we could not compute the number of triangles exactly).
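As an illustration of the sampling idea, here is a minimal sketch of wedge (length-2 path) sampling, a standard estimator for triangle counts. The paper's streaming algorithms achieve their space and update-time bounds with different estimators, so this is not the authors' exact method.

```python
import random

# Illustrative wedge-sampling triangle estimator. The adjacency dict stands
# in for the (materialized) incidence stream: each vertex maps to the list
# of its neighbours.

def estimate_triangles(adjacency, s=10_000):
    """Estimate the number of triangles using s sampled wedges."""
    # Total number of wedges: sum over vertices v of C(deg(v), 2).
    degs = {v: len(ns) for v, ns in adjacency.items()}
    wedges = sum(d * (d - 1) // 2 for d in degs.values())
    if wedges == 0:
        return 0.0

    verts = list(adjacency)
    weights = [degs[v] * (degs[v] - 1) // 2 for v in verts]
    edge_set = {frozenset((u, v)) for u, ns in adjacency.items() for v in ns}

    # Sample wedges proportionally to each vertex's wedge count and check
    # whether the closing edge exists (i.e. the wedge is a triangle).
    closed = 0
    for _ in range(s):
        center = random.choices(verts, weights=weights)[0]
        a, b = random.sample(adjacency[center], 2)
        if frozenset((a, b)) in edge_set:
            closed += 1

    # Each triangle contains exactly 3 wedges, hence the division by 3.
    return (closed / s) * wedges / 3
```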
Abstract. Many safety-critical embedded systems are subject to certification requirements; some systems may be required to meet multiple sets of certification requirements, from different certification authorities. Certification requirements in such "mixed-criticality" systems give rise to interesting scheduling problems that cannot be satisfactorily addressed using techniques from conventional scheduling theory. In this paper, we study a formal model for representing such mixed-criticality workloads. We first demonstrate the intractability of determining whether a system specified in this model can be scheduled to meet all its certification requirements, even for systems subject to just two sets of certification requirements. We then quantify, via the metric of processor speedup factor, the effectiveness of two techniques widely used in scheduling such mixed-criticality systems, reservation-based scheduling and priority-based scheduling, showing that the latter is superior to the former. We also show that the speedup factors are tight for these two techniques.
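To illustrate why reservation-based scheduling can be pessimistic, here is a minimal sketch of a dual-criticality task model and a reservation-style utilization test; the class name, the test, and the numbers in the example are illustrative simplifications of the approaches the paper analyzes.

```python
from dataclasses import dataclass

# Illustrative dual-criticality task model: each task carries a WCET
# estimate per certification level.

@dataclass
class MCTask:
    c_lo: float   # WCET estimate certified at the LO level
    c_hi: float   # (larger) WCET estimate certified at the HI level
    period: float
    hi: bool      # True if the task is safety-critical (HI criticality)

def reservation_based_test(tasks):
    """Reservation-based scheduling: budget every task at its own
    criticality level and run plain EDF. Safe but pessimistic, since
    HI tasks almost always finish within c_lo."""
    return sum((t.c_hi if t.hi else t.c_lo) / t.period for t in tasks) <= 1.0

# Example: one HI task with a large certified WCET plus two LO tasks.
tasks = [MCTask(1.0, 4.0, 10.0, True),
         MCTask(3.0, 3.0, 10.0, False),
         MCTask(3.5, 3.5, 10.0, False)]
print(reservation_based_test(tasks))  # False: reservations sum to 1.05 > 1
```

In this example, reserving c_hi for the HI task overloads the processor, yet the utilization actually needed in normal (LO-mode) operation is only 0.75; that slack is what priority-based approaches exploit, which is consistent with the paper's conclusion that they dominate reservation-based scheduling.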
Different task models have been proposed to represent the parallel structure of real-time tasks executing on many-core platforms: fork/join, synchronous parallel, DAG-based, etc. Although schedulability tests and resource augmentation bounds are available for these task systems, it is difficult to apply such results to real application scenarios, where the execution flow of parallel tasks is characterized by multiple (and nested) conditional structures. When a conditional branch drives the number and size of the sub-jobs to spawn, it is hard to decide which execution path to select as the worst-case scenario. To circumvent this problem, we integrate control-flow information into the task model, considering conditional parallel tasks (cp-tasks) represented by DAGs composed of both precedence and conditional edges. For this task model, we identify meaningful parameters that characterize the schedulability of the system and derive efficient algorithms to compute them. A response-time analysis based on these parameters is then presented for different scheduling policies. A set of simulations shows that the proposed approach allows the schedulability of the addressed systems to be checked efficiently, and that it significantly tightens the schedulability analysis of non-conditional (e.g., classic DAG) tasks over existing approaches.
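As an illustration of how conditional edges change worst-case reasoning, here is a minimal sketch that computes the worst-case workload of a task built from nested sequential, parallel, and conditional blocks: sequential and parallel children all contribute, whereas exactly one conditional branch executes, so the heaviest branch is taken. The cp-task model in the paper is a general DAG, so this series-parallel form is a simplified, illustrative special case.

```python
# Illustrative worst-case workload (total work) of a nested
# series/parallel/conditional task structure.

def workload(node):
    """node is ('job', wcet) | ('seq', [children]) | ('par', [children])
    | ('cond', [branches]). Returns the worst-case total work."""
    kind, body = node
    if kind == 'job':
        return body
    if kind in ('seq', 'par'):                 # all children execute
        return sum(workload(c) for c in body)
    if kind == 'cond':                         # exactly one branch executes:
        return max(workload(b) for b in body)  # take the heaviest one
    raise ValueError(kind)

# A task that spawns either 2 heavy sub-jobs or 4 light ones:
task = ('seq', [('job', 1),
                ('cond', [('par', [('job', 5), ('job', 5)]),
                          ('par', [('job', 2)] * 4)]),
                ('job', 1)])
print(workload(task))  # 12: the two-heavy-jobs branch dominates
```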