22nd Annual Symposium on Foundations of Computer Science (SFCS 1981), 1981
DOI: 10.1109/sfcs.1981.11

Applying parallel computation algorithms in the design of serial algorithms

Abstract: The goal of this paper is to point out that analyses of parallelism in computational problems have practical implications even when multiprocessor machines are not available. This is true because, in many cases, a good parallel algorithm for one problem may turn out to be useful for designing an efficient serial algorithm for another problem. A unified framework for cases like this is presented. Particular cases, which are discussed in this paper, provide motivation for examining parallelism in sorting…

Cited by 135 publications (264 citation statements). References 16 publications.
“…• Our solution to the GNN problem uses range searching data structures, but does not rely on Megiddo's parametric search [18] that usually causes a logarithmic overhead. Parametric search is a powerful but complex technique to reduce optimization problems to decision problems which we substitute by an application of ε-nets (see Section 2).…”
Section: Our Contributionmentioning
confidence: 99%
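The statement above refers to parametric search as a technique for reducing optimization problems to decision problems. As a hedged, simplified sketch (not Megiddo's full method, which simulates a parallel algorithm to generate the critical values), the underlying reduction can be illustrated with a monotone decision procedure driving a binary search over candidate critical values; the `sites` example and all names below are illustrative assumptions:

```python
# Simplified sketch of the optimization-to-decision reduction behind
# parametric search: find the smallest parameter t for which a monotone
# decision procedure answers "feasible". Megiddo's technique generates
# the candidate values by simulating a parallel algorithm; here we
# assume the candidates are given and sorted.

def optimize(candidates, decide):
    """Return the smallest candidate t with decide(t) == True.

    Assumes decide is monotone: once feasible, it remains feasible
    for every larger t. `candidates` must be sorted ascending.
    """
    lo, hi = 0, len(candidates) - 1
    best = None
    while lo <= hi:
        mid = (lo + hi) // 2
        if decide(candidates[mid]):
            best = candidates[mid]   # feasible; try to shrink t
            hi = mid - 1
        else:
            lo = mid + 1
    return best

# Toy decision problem: is some point within distance t of all sites?
# In one dimension this holds iff (max - min) <= 2 * t.
sites = [1.0, 4.0, 9.0]
def decide(t):
    return max(sites) - min(sites) <= 2 * t

# Candidate critical values: half-distances between pairs of sites.
cands = sorted({abs(a - b) / 2 for a in sites for b in sites})
print(optimize(cands, decide))  # → 4.0
```

Each probe of the decision procedure discards half of the remaining candidates, which is where the logarithmic overhead mentioned in the quote comes from.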
“…Although when g(R) = |R| our algorithm in Section 2.2 can be used to solve that problem, simply applying the parametric search technique may just lead to a weakly polynomial time algorithm (i.e., the running time is polynomial with respect to both the size of the input voxel grid Γ and the values of the on-surface and in-region costs of the voxels in Γ). Megiddo's approach 18 may result in an improved strongly polynomial time algorithm (i.e., the running time is polynomial in the size of the input voxel grid and is independent on the values of the voxel costs). However, it requires the design of an efficient parallel algorithm for the maximum flow problem, which is hard to achieve.…”
Section: Introductionmentioning
confidence: 99%
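The weakly versus strongly polynomial distinction drawn in this quote can be made concrete with a small, hedged illustration (the threshold and bounds below are invented for the example): binary searching the parameter over the numeric range of the input values costs O(log V) decision calls, where V is the largest cost, so the running time depends on the magnitudes of the numbers rather than on the input size alone.

```python
# Illustration of why naive value-range binary search is only weakly
# polynomial: the number of decision calls grows with log(max_value),
# i.e. with the bit-length of the input values, not with the number of
# input items. Parametric search instead drives the search with
# combinatorial events of a (parallel) algorithm, which is what makes
# a strongly polynomial bound possible.

def weakly_polynomial_search(max_value, decide):
    """Binary search the smallest feasible t in [0, max_value].
    Returns (optimum, number_of_decision_calls)."""
    calls = 0
    lo, hi = 0, max_value
    while lo < hi:
        mid = (lo + hi) // 2
        calls += 1
        if decide(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo, calls

# Toy monotone decision: is t at least some hidden threshold?
threshold = 700_000
opt, calls = weakly_polynomial_search(1_000_000, lambda t: t >= threshold)
print(opt, calls)  # opt == 700000; calls ≈ log2(max_value) ≈ 20
```

Doubling the bit-length of the costs doubles `calls` even when the combinatorial input is unchanged, which is exactly the dependence a strongly polynomial algorithm must avoid.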
“…However, this takes time proportional to the depth of the tree, which may be too large. In an improvement to this method, alternately called centroid decomposition or tree contraction [64,47,52], one alternates this removal of leaves with the removal of vertices having only one child (sometimes with the further restriction that the parent of the vertex also only has one child). Cole and Vishkin [15] and Gazit et al. [25] give centroid decomposition algorithms that take logarithmic time with a linear number of EREW operations, matching the bounds for Euler tours.…”
mentioning
confidence: 99%
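One common reading of the centroid decomposition mentioned in this quote recursively removes a centroid, a vertex whose deletion leaves components of at most half the size, yielding a decomposition tree of logarithmic depth. The following serial sketch is an illustration of that idea under assumed names (`centroid_depth`, an adjacency-dict tree representation), not the cited EREW algorithms:

```python
# Hedged sketch of centroid decomposition: repeatedly remove a centroid
# (every remaining component then has at most half the vertices) and
# recurse into the components. The depth of the recursion is O(log n),
# which is the property the tree-contraction literature exploits.
from collections import defaultdict

def centroid_depth(adj, n):
    """Depth of the centroid decomposition of the tree on vertices
    0..n-1 given by adjacency dict adj."""
    removed = [False] * n

    def subtree_sizes(root):
        # Iterative postorder: size[v] = vertices in v's subtree.
        size = {}
        stack = [(root, None, False)]
        while stack:
            v, p, done = stack.pop()
            if done:
                size[v] = 1 + sum(size[c] for c in adj[v]
                                  if c != p and not removed[c])
            else:
                stack.append((v, p, True))
                for c in adj[v]:
                    if c != p and not removed[c]:
                        stack.append((c, v, False))
        return size

    def find_centroid(root):
        size = subtree_sizes(root)
        total, v, p = size[root], root, None
        while True:
            # Walk toward a too-heavy subtree until none remains.
            heavy = next((c for c in adj[v]
                          if c != p and not removed[c]
                          and size[c] > total // 2), None)
            if heavy is None:
                return v
            v, p = heavy, v

    def decompose(root):
        c = find_centroid(root)
        removed[c] = True
        depths = [decompose(nb) for nb in adj[c] if not removed[nb]]
        return 1 + max(depths, default=0)

    return decompose(0)

# A path on 15 vertices: each centroid halves it, so depth is 4.
adj = defaultdict(list)
for i in range(14):
    adj[i].append(i + 1)
    adj[i + 1].append(i)
print(centroid_depth(adj, 15))  # → 4
```

Compare this with the naive leaf-removal strategy criticized in the quote: on the same path it would need 14 rounds, one per level of the tree, which is why the cited algorithms interleave leaf removal with the compression of single-child chains.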