“…The difference between our approach and that of the artificial intelligence community is that we try to maximise the number of variables (sites) with a conflict-free assignment, while their objective is to either list all assignment tuples without conflicts [MF85], to minimise the number of conflicts [FW92], or to find the maximum weighted subset of constraints which still allows an assignment.…”
Section: The Label Number Maximisation Problem (mentioning, confidence: 99%)
“…, v_n, each associated with a domain D_i and a set of relations constraining the assignment of subsets of the variables, find all possible n-tuples of variable assignments that satisfy the relations [MF85]. Often variable domains are restricted to discrete finite sets, and only binary relations are considered.…”
Section: Framework (mentioning, confidence: 99%)
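The constraint-satisfaction formulation quoted above lends itself to a direct illustration. Below is a minimal brute-force sketch of my own (the function and variable names are illustrative, not from [MF85]): it enumerates every n-tuple over finite domains and keeps those that satisfy all binary relations.

```python
from itertools import product

def solve_csp(domains, constraints):
    """Enumerate all n-tuples of assignments satisfying every binary relation.
    domains: list of finite domains D_i (as lists);
    constraints: dict mapping a pair of variable indices (i, j) to a
    relation, given as the set of allowed (value_i, value_j) pairs."""
    solutions = []
    for tup in product(*domains):
        if all((tup[i], tup[j]) in rel
               for (i, j), rel in constraints.items()):
            solutions.append(tup)
    return solutions

# Toy instance: three variables over {0, 1}, pairwise "not equal"
# between neighbours.
domains = [[0, 1], [0, 1], [0, 1]]
neq = {(0, 1), (1, 0)}
constraints = {(0, 1): neq, (1, 2): neq}
print(solve_csp(domains, constraints))
```

The product space grows exponentially with n, which is why consistency techniques that prune domains beforehand matter.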
“…In the special cases of m = 1, 2, and 3, these algorithms are called node, arc, and path consistency algorithms, respectively. Mackworth and Freuder have shown that arc consistency can be achieved in O(a³k), where a is the size of the variable domains and k the number of binary relations [MF85].…”
Abstract. The general map labeling problem consists in labeling a set of sites (points, lines, regions) given a set of candidates (rectangles, circles, ellipses, irregularly shaped labels) for each site. A map can be a classical cartographical map, a diagram, a graph or any other figure that needs to be labeled. A labeling is either a complete set of non-conflicting candidates, one per site, or a subset of maximum cardinality. Finding such a labeling is NP-hard. We present a combinatorial framework to attack the problem in its full generality. The key idea is to separate the geometric from the combinatorial part of the problem. The latter is captured by the conflict graph of the candidates and by rules which successively simplify this graph towards a near-optimal solution. We exemplify this framework on the problem of labeling point sets with axis-parallel rectangles as candidates, four per point. We do this such that it becomes clear how our concept can be applied to other cases. We study competing algorithms and do a thorough empirical comparison. The new algorithm we suggest is fast, simple and effective.
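The separation of geometry from combinatorics described in this abstract can be sketched roughly as follows. The rectangle overlap test and the degree-based greedy rule here are illustrative stand-ins of my own, not the paper's actual simplification rules:

```python
from itertools import combinations

def overlaps(a, b):
    """Axis-parallel rectangles as (x1, y1, x2, y2); open-interval overlap."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def conflict_graph(candidates):
    """candidates: list of (site, rect). Returns adjacency sets over
    candidate indices: an edge joins two candidates that geometrically
    overlap or belong to the same site (each site gets at most one label)."""
    adj = {i: set() for i in range(len(candidates))}
    for i, j in combinations(range(len(candidates)), 2):
        (si, ri), (sj, rj) = candidates[i], candidates[j]
        if si == sj or overlaps(ri, rj):
            adj[i].add(j)
            adj[j].add(i)
    return adj

def greedy_labeling(candidates):
    """Pick candidates in order of fewest conflicts; a crude stand-in for
    the paper's rule-based simplification of the conflict graph."""
    adj = conflict_graph(candidates)
    chosen, blocked = [], set()
    for i in sorted(adj, key=lambda i: len(adj[i])):
        if i not in blocked:
            chosen.append(i)
            blocked |= adj[i]
    return chosen

# Two sites, two candidate rectangles each; candidates 1 and 2 overlap.
candidates = [
    ("p", (0, 0, 1, 1)), ("p", (1, 0, 2, 1)),
    ("q", (1.5, 0, 2.5, 1)), ("q", (3, 0, 4, 1)),
]
print(greedy_labeling(candidates))
```

Once the conflict graph is built, the geometry plays no further role: any graph-simplification rule can be applied to it unchanged.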
“…[2, 6-8, 11, 19]), preprocessing techniques, and constraint algorithms (see e.g. [12,14,16,18,27]) have been designed and analyzed for this class of problems (see the reviews [13,20] for a comprehensive overview of this area). In this paper, we are mainly concerned with (network) consistency techniques, and arc consistency in particular.…”
“…This had a computational complexity of O(ena³). Mackworth and Freuder [7] introduced algorithm AC-3, which used the edges in E, rather than the variables in N, to guide the filtering. This reduced the complexity to O(ea³).…”
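The edge-driven filtering described in this snippet can be sketched compactly. This is my own minimal rendition of the usual AC-3 formulation, not the original pseudocode; the variable names and queue details are assumptions:

```python
from collections import deque

def ac3(domains, relations):
    """AC-3 sketch: revise along directed arcs until no domain changes.
    domains: dict var -> set of values; relations: dict (x, y) -> set of
    allowed (vx, vy) pairs, with both arc directions listed.
    Returns False if some domain is wiped out, True otherwise."""
    queue = deque(relations)              # worklist of directed arcs (x, y)
    while queue:
        x, y = queue.popleft()
        rel = relations[(x, y)]
        # Values of x with no support left in D_y must be removed.
        revised = {vx for vx in domains[x]
                   if not any((vx, vy) in rel for vy in domains[y])}
        if revised:
            domains[x] -= revised
            if not domains[x]:
                return False
            # Re-examine arcs pointing into x (except the one just used).
            queue.extend((z, w) for (z, w) in relations
                         if w == x and z != y)
    return True

# Toy instance: a < b over domains {1, 2, 3}.
doms = {'a': {1, 2, 3}, 'b': {1, 2, 3}}
lt = {(i, j) for i in (1, 2, 3) for j in (1, 2, 3) if i < j}
rels = {('a', 'b'): lt, ('b', 'a'): {(j, i) for (i, j) in lt}}
ac3(doms, rels)
print(doms)
```

Each arc revision is O(a²) and an arc re-enters the queue only after a neighbouring domain shrinks, which yields the O(ea³) bound quoted above.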
An optimal arc consistency algorithm AC-4 was given by Mohr and Henderson [8]. AC-4 has cost O(ea²), and cost O(na²) for scene labelling. Although their algorithm is indeed optimal, under certain conditions a constraint satisfaction problem can be transformed into a less complex problem. In this paper, we present conditions and mechanisms for such transformations, and show how to factor relations into more manageable components. We describe how factorization can reduce AC-4's cost to O(ea), and apply this result to RETE match. Further, with our factorization, the cost of scene labelling is reduced to O(na).
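AC-4's O(ea²) bound rests on counting each value's supports once and only decrementing thereafter. The following is a compressed rendition of that support-counting idea, not Mohr and Henderson's exact data structures:

```python
from collections import deque

def ac4(domains, relations):
    """AC-4 sketch. For each arc (x, y) and value vx, count vx's supports
    in D_y and record which values vx depends on. Deleting a value
    decrements the counters of the values it supported; counters that
    reach zero trigger further deletions. Both arc directions must be
    listed in relations. Returns True iff no domain is wiped out."""
    orig = {v: frozenset(d) for v, d in domains.items()}
    counter = {}                  # (x, y, vx) -> remaining supports in D_y
    supported = {}                # (y, vy) -> list of (x, vx) it supports
    dead = deque()                # deleted values awaiting propagation
    for (x, y), rel in relations.items():
        for vx in orig[x]:
            n = 0
            for vy in orig[y]:
                if (vx, vy) in rel:
                    n += 1
                    supported.setdefault((y, vy), []).append((x, vx))
            counter[(x, y, vx)] = n
            if n == 0 and vx in domains[x]:
                domains[x].discard(vx)
                dead.append((x, vx))
    while dead:
        y, vy = dead.popleft()
        for (x, vx) in supported.get((y, vy), []):
            if vx in domains[x]:
                counter[(x, y, vx)] -= 1
                if counter[(x, y, vx)] == 0:
                    domains[x].discard(vx)
                    dead.append((x, vx))
    return all(domains.values())
```

Initialisation touches each of the O(ea²) value pairs once, and every counter can only decrease to zero, so the total work is O(ea²) rather than AC-3's O(ea³).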