“…Our goal is to find a path that ends in a primitive-only implementation that performs well (i.e., the path is low-cost). The work of [22,23] shows how subspaces of decisions/paths can be ignored, or pruned, to limit consideration. In DxTer, simplifiers are optimizations that are always applied because they reduce an implementation's cost; it is never worth exploring implementations that lack the transformation because they always perform worse [17].…”
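The idea of eagerly applying simplifiers instead of branching on them can be sketched as follows. This is an illustrative toy, not DxTer's actual representation: the op names, the cost table, the single simplifier, and the refinement rule are all invented for the example.

```python
# Toy sketch of eager "simplifier" application in a DxTer-style search.
# Implementations are tuples of op names; everything below is invented
# for illustration, not taken from DxTer.

COST = {"copy": 1, "gemm-naive": 100, "gemm-blocked": 40, "add": 5}

def cost(impl):
    return sum(COST[op] for op in impl)

def simplify(impl):
    # Simplifier: deleting a redundant "copy" always reduces cost,
    # so it is applied unconditionally rather than branched on.
    return tuple(op for op in impl if op != "copy")

def refinements(impl):
    # Genuine design choice worth branching on: implement the abstract
    # "gemm" operation naively or blocked.
    if "gemm" in impl:
        i = impl.index("gemm")
        for choice in ("gemm-naive", "gemm-blocked"):
            yield impl[:i] + (choice,) + impl[i + 1:]

def search(impl, best=None):
    impl = simplify(impl)                  # never explore unsimplified variants
    children = list(refinements(impl))
    if not children:                       # primitive-only implementation
        return impl if best is None or cost(impl) < cost(best) else best
    for child in children:
        best = search(child, best)
    return best
```

Because the simplifier is known to always lower cost, it shrinks every path without adding branches; only the genuine design decisions multiply the search space.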
How do experts navigate the huge space of implementations for a given specification to find an efficient choice with minimal searching? Answer: they use "heuristics": rules of thumb that are more street wisdom than scientific fact. We provide a scientific justification for Dense Linear Algebra (DLA) heuristics by showing that only a few decisions (out of many possible) are critical to performance; once these decisions are made, the die is cast and only relatively minor performance improvements are possible. The (implementation × performance) space of DLA is stair-stepped. Each stair is a set of implementations that have very similar performance and (surprisingly) share key design decisions. High-performance stairs align with heuristics that prescribe certain decisions in a particular context. Stairs also tell us how to tailor the search engine of a DLA code generator to reduce the time it needs to find implementations that are as good as or better than those crafted by experts.
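The stair-step structure can be mimicked with a small model. The performance function, decision names, and numbers below are mine, purely to illustrate the claim: when one critical decision dominates performance and the remaining decisions only perturb it, bucketing implementations by performance recovers groups ("stairs") whose members all share the critical decision.

```python
# Illustrative model (not from the paper): performance is dominated by one
# "critical" decision; minor decisions contribute small perturbations.

def performance(critical, minor1, minor2):
    base = {"variant-A": 10.0, "variant-B": 4.0}[critical]  # hypothetical GFLOPS
    return base + 0.1 * minor1 + 0.05 * minor2              # minor tweaks

# Enumerate the full decision space: 2 critical x 3 x 3 minor choices.
impls = [
    (c, m1, m2)
    for c in ("variant-A", "variant-B")
    for m1 in range(3)
    for m2 in range(3)
]

# Bucket implementations into stairs by rounded performance.
stairs = {}
for impl in impls:
    stairs.setdefault(round(performance(*impl)), []).append(impl)

# Every implementation on a stair turns out to share the critical decision,
# which is what the stair-step observation predicts.
```

A search engine that fixes the critical decision first lands on the right stair immediately and spends its remaining budget on the minor decisions.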
“…Pick an algorithm class from a library of algorithm classes (GLOBAL SEARCH, LOCAL SEARCH, DIVIDE AND CONQUER, FIXPOINT ITERATION, etc.). An algorithm class comprises a program schema containing operators to be instantiated and an axiomatic theory of those operators (see [9] for details). A schema is analogous to a template in Java/C++, with the difference that both the template and the template arguments are formally constrained.…”
Section: Process
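The template analogy can be made loosely concrete in code. This is my own analogy, not the paper's formalism: the schema is a generic search skeleton, and the "template arguments" must satisfy stated interface constraints, approximated here with a `typing.Protocol` rather than an axiomatic theory.

```python
# Loose analogy of a constrained program schema: global_search is the schema,
# and any instantiation must satisfy the GlobalSearchTheory interface.
from typing import Iterable, Optional, Protocol, TypeVar

S = TypeVar("S")  # descriptor of a search subspace

class GlobalSearchTheory(Protocol[S]):
    def initial(self) -> S: ...
    def split(self, space: S) -> Iterable[S]: ...
    def extract(self, space: S) -> Optional[object]: ...

def global_search(theory: "GlobalSearchTheory[S]") -> Optional[object]:
    """Schema body: works for any theory meeting the constraints above."""
    frontier = [theory.initial()]
    while frontier:
        space = frontier.pop()
        sol = theory.extract(space)
        if sol is not None:
            return sol
        frontier.extend(theory.split(space))
    return None

class IntervalTheory:
    """Toy instantiation: locate a target integer by interval splitting."""
    def __init__(self, target: int, lo: int, hi: int):
        self.target, self.lo, self.hi = target, lo, hi
    def initial(self):
        return (self.lo, self.hi)
    def split(self, space):
        lo, hi = space
        mid = (lo + hi) // 2
        return [(lo, mid), (mid + 1, hi)] if lo < hi else []
    def extract(self, space):
        lo, hi = space
        return lo if lo == hi == self.target else None
```

The real methodology goes further: the constraints are axioms about the operators, so an instantiation is proved, not just type-checked.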
“…That is, y ⪯ y′ is a sufficient condition for ensuring that if y′ can be extended into a feasible solution then so can y with the same extension. If c is compositional (that is, c(s ⊕ t) = c(s) + c(t)) then it can be shown [9] that if y ⪯ y′ and y is cheaper than y′, then y dominates y′ (written y ⊲ y′). Formally:…”
Section: Dominance Relations
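A minimal sketch of dominance-based pruning under the quoted definition: if y is semi-congruent to y₂ (any extension feasible for y₂ is feasible for y) and y is strictly cheaper, then y dominates y₂ and y₂ can be discarded from the frontier. The function name, the cost model, and the `semi_congruent` predicate passed in are illustrative stand-ins.

```python
# Sketch: discard every frontier element dominated by a cheaper,
# semi-congruent sibling. Names are illustrative, not from the paper.

def prune_dominated(frontier, cost, semi_congruent):
    """Keep only partial solutions not dominated by another in the frontier."""
    kept = []
    for y2 in frontier:
        dominated = any(
            semi_congruent(y, y2) and cost(y) < cost(y2)
            for y in frontier
            if y is not y2
        )
        if not dominated:
            kept.append(y2)
    return kept
```

For shortest paths, for example, "ends at the same vertex" is a semi-congruence: any extension of the costlier path also extends the cheaper one, so only one path per endpoint need survive.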
“…If undom_l ≠ ∅ then the singleton member y* of undom_l is called the greedy choice. In other work [12] we show how to derive greedy algorithms for a variety of problems including Activity Selection, One-Machine Scheduling, Professor Midas' Traveling Problem, and Binary Search.…”
Section: A Class of Strictly Greedy Algorithms (SG)
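How a greedy choice falls out of dominance can be sketched on Activity Selection, one of the problems the quote names. The dominance relation used here, "earlier finish time dominates" (assuming distinct finish times among the minimal candidates), is the standard textbook one; the code structure is my illustration, not the derivation from [12].

```python
# Sketch: a greedy algorithm as repeated selection of the single
# undominated candidate. Dominance relation: earlier finish time wins
# (assumes the minimal finish time is unique among feasible candidates).

def undominated(candidates, dominates):
    return [c for c in candidates
            if not any(dominates(d, c) for d in candidates if d is not c)]

def activity_selection(activities):
    """activities: list of (start, finish) pairs; returns a maximal schedule."""
    dominates = lambda a, b: a[1] < b[1]      # earlier finish dominates
    chosen, time = [], float("-inf")
    while True:
        feasible = [a for a in activities if a[0] >= time and a not in chosen]
        undom = undominated(feasible, dominates)
        if not undom:
            break
        best = undom[0]        # the singleton undominated member: greedy choice
        chosen.append(best)
        time = best[1]
    return chosen
```

The point of the SG class is exactly this shape: when dominance leaves at most one candidate standing, the breadth-one search degenerates into a greedy algorithm.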
Although Breadth-First Search (BFS) has several advantages over Depth-First Search (DFS), its prohibitive space requirements have meant that algorithm designers often pass it over in favor of DFS. To address this shortcoming, we introduce a theory of Efficient BFS (EBFS) along with a simple recursive program schema for carrying out the search. The theory is based on dominance relations, a long-standing technique from the field of search algorithms. We show how the theory can be used to systematically derive solutions to two graph problems, namely the Single-Source Shortest Path problem and the Minimum Spanning Tree problem. The solutions are found by making small systematic changes to the derivation, revealing the connections between the two problems which are often obscured in textbook presentations of them.
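The space-saving mechanism can be sketched on Single-Source Shortest Path: of two frontier paths ending at the same vertex, the cheaper one dominates, so at most one path per vertex survives each BFS level. The graph encoding and function names below are mine, not the paper's schema, and the sketch assumes nonnegative edge weights.

```python
# Sketch of dominance-pruned BFS for SSSP: the frontier holds at most one
# undominated path (represented by its cost) per endpoint, keeping it small.

def ebfs_sssp(graph, source):
    """graph: {u: [(v, weight), ...]}. Returns cheapest known cost per vertex."""
    best = {source: 0}
    frontier = {source: 0}            # one undominated path per endpoint
    while frontier:
        nxt = {}
        for u, cost_u in frontier.items():
            for v, w in graph.get(u, []):
                c = cost_u + w
                if c < best.get(v, float("inf")):   # dominance test
                    best[v] = c
                    nxt[v] = c
        frontier = nxt
    return best
```

Without the dominance test the frontier would hold every distinct path, growing exponentially; with it, the frontier is bounded by the number of vertices.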
Guided program synthesis is an existing methodology for systematic development of algorithms. Specific algorithms are viewed as instances of very general algorithm schemas. For example, the Global Search schema generalizes traditional branch-and-bound search, and includes both depth-first and breadth-first strategies. Algorithm development involves systematic specialization of the algorithm schema based on problem-specific constraints to create efficient algorithms that are correct by construction, obviating the need for a separate verification step. Guided program synthesis has been applied to a wide range of algorithms, but there is still no systematic process for the synthesis of large search programs such as AI planners.

Our first contribution is the specialization of Global Search to a class we call Efficient Breadth-First Search (EBFS), by incorporating dominance relations to constrain the size of the frontier of the search to be polynomially bounded. Dominance relations allow two search spaces to be compared to determine whether one dominates the other, thus allowing the dominated space to be eliminated from the search. We further show that EBFS is an effective characterization of greedy algorithms when the breadth bound is set to one. Surprisingly, the resulting characterization is more general than the well-known characterization of greedy algorithms, namely the Greedy Algorithm parametrized over algebraic structures called greedoids.

Our second contribution is a methodology for systematically deriving dominance relations, not just for individual problems but for families of related problems. The techniques are illustrated on numerous well-known problems. Combining this with the program schema for EBFS results in efficient greedy algorithms.

Our third contribution is the application of the theory and methodology to the practical problem of synthesizing fast planners. Nearly all the state-of-the-art planners in the planning literature are heuristic domain-independent planners.
They generally do not scale well and their space requirements also become quite prohibitive. Planners such as TLPlan that incorporate domain-specific information in the form of control rules are orders of magnitude faster. However, devising the control rules is a labor-intensive task that requires domain expertise and insight, and the correctness of the rules is not guaranteed. We introduce a method by which domain-specific dominance relations can be systematically derived and then turned into control rules, and demonstrate the method on a planning problem (Logistics). Many of the derivations are straightforward enough to be automatable.