Proceedings of the First Annual ACM Symposium on Parallel Algorithms and Architectures 1989
DOI: 10.1145/72935.72980

Constructing trees in parallel

Abstract: An O(log² n) time, n²/log n processor, as well as an O(log n) time, n³/log n processor, CREW deterministic parallel algorithm is presented for constructing Huffman codes from a given list of frequencies. The time can be reduced to O(log n (log log n)²) on a CRCW model, using only n²/(log log n)² processors. Also presented is an optimal O(log n) time, O(n/log n) processor EREW parallel algorithm for constructing a tree given a list of leaf depths when the depths are monotonic. An O(log² n) time, n processor par…
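The tree-from-leaf-depths result in the abstract has a simple sequential analogue: when the leaf depths are monotone and satisfy the Kraft equality, a canonical prefix code (and hence the tree shape) can be assigned in a single pass. A minimal sketch, assuming nondecreasing depths with sum(2**-d) == 1; the function name is mine, not from the paper:

```python
def codes_from_depths(depths):
    """Canonical prefix codewords for a nondecreasing list of leaf
    depths that satisfies the Kraft equality sum(2**-d for d in depths) == 1."""
    code = 0
    prev = depths[0]
    out = []
    for d in depths:
        code <<= d - prev          # extend the codeword to the new depth
        out.append(format(code, f"0{d}b"))
        code += 1                  # next codeword at this depth
        prev = d
    return out

print(codes_from_depths([1, 2, 3, 3]))  # ['0', '10', '110', '111']
```

Reading each codeword as a root-to-leaf path of left (0) and right (1) branches recovers the tree itself.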

Cited by 47 publications (23 citation statements)
References 15 publications (20 reference statements)
“…for each sign combination taken consistently on both sides of the inequality, and, by (6), (7), (8). Thus, in this case P_C(î, k) = 1 is equivalent to P_{C,lo}(î, k) = 1. Note that this also implies δ(î⁻, k⁻) < 0, since otherwise we would have δ(î±, k±) = 0 for all four sign combinations, and hence, by symmetry, also P_{C,hi}(î, k) = 1.…”
Section: Theorem
confidence: 97%
“…The fastest currently known algorithm is by Chan [15], running in time O(n³(log log n)³/log² n). For Monge matrices, distance multiplication can easily be performed in time O(n²), using the standard row-minima searching technique of Aggarwal et al. [1] to perform matrix-vector multiplication in linear time (see also [6, 57]). Alternatively, an algorithm with the same quadratic running time can be obtained directly by the divide-and-conquer technique (see e.g.…”
Section: Fast Implicit Distance Multiplication
confidence: 99%
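The divide-and-conquer technique mentioned in the quote can be illustrated on the underlying row-minima problem: in a totally monotone matrix (a property Monge matrices have), the column index of each row's minimum never decreases down the rows, so finding the middle row's minimum splits the search space. This is a generic sketch of that divide and conquer, not the SMAWK algorithm of Aggarwal et al., and the function name is mine:

```python
def row_minima(rows, cols, f):
    """Column index of each row's minimum in the implicit matrix f(i, j),
    assuming f is totally monotone: minima positions are nondecreasing
    down the rows. O((m + n) log m) evaluations of f."""
    if not rows:
        return {}
    mid = len(rows) // 2
    r = rows[mid]
    best = min(cols, key=lambda j: (f(r, j), j))  # middle row's minimum
    res = {r: best}
    # Rows above can only have minima at columns <= best, rows below at >= best.
    res.update(row_minima(rows[:mid], [j for j in cols if j <= best], f))
    res.update(row_minima(rows[mid + 1:], [j for j in cols if j >= best], f))
    return res
```

For example, f(i, j) = (i - j)² is Monge, and the row minima sit on the diagonal: `row_minima(range(4), range(4), lambda i, j: (i - j) ** 2)` yields `{0: 0, 1: 1, 2: 2, 3: 3}` (in some insertion order).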
“…In the parallel case, APSP can be solved by repeated squaring in O(lg² n) time using n³/lg n processors on a CREW PRAM. Atallah et al. [15] show how to solve APSP in O(lg² n) time using n³/lg n processors on a CREW PRAM (this solution follows from their O(lg² n)-time, (n²/lg n)-processor solution to the single-source shortest paths problem on such a graph). In Section 4.1 we give the algorithm of Aggarwal et al. [2], which runs in O(lg² n) time using n² CREW-PRAM processors for the special case of the APSP problem when the graph is acyclic and the edge weights satisfy the quadrangle inequality.…”
Section: Our Main Results
confidence: 99%
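The repeated-squaring idea in the quote can be sketched sequentially: square the weight matrix O(lg n) times in the (min, +) semiring, where each squaring is the step that parallelizes on a PRAM. A minimal sketch with a name of my choosing, representing missing edges as float('inf'):

```python
def apsp(W):
    """All-pairs shortest paths by repeated (min, +) squaring of the
    n x n weight matrix W (W[i][i] == 0, float('inf') for no edge).
    O(lg n) squarings; each squaring is cubic sequentially, and it is
    this per-squaring work that a PRAM spreads over its processors."""
    n = len(W)
    D = [row[:] for row in W]
    steps = 1
    while steps < n - 1:           # paths of length <= steps are correct
        D = [[min(D[i][k] + D[k][j] for k in range(n)) for j in range(n)]
             for i in range(n)]
        steps *= 2
    return D
```

After ceil(lg(n-1)) squarings every shortest path (at most n-1 edges in the absence of negative cycles) has been accounted for.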
“…In the sequential domain, Huffman in [25] showed how to construct Huffman codes greedily in O(n) time (once the character frequencies are in sorted order). In [15], Atallah et al. reduced Huffman coding to O(lg n) tube minimization problems on Monge-composite arrays, thereby obtaining parallel algorithms for Huffman coding that run in O(lg² n) time using n²/lg n processors on a CREW PRAM and in O(lg n (lg lg n)²) time using n²/(lg lg n)² processors on a CRCW PRAM. Larmore and Przytycka in [30] reduce Huffman coding to the Concave Least Weight Subsequence (CLWS) problem (defined in Section 4.2) and then show how to solve CLWS, and thereby Huffman coding, in O(√n lg n) time using n processors on a CREW PRAM.…”
Section: Our Main Results
confidence: 99%
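The O(n) greedy construction from sorted frequencies mentioned in the quote rests on the observation that merged weights are themselves produced in nondecreasing order, so a second queue can replace the priority queue. A minimal sketch computing only the total weighted codeword length; the function name is an assumption of mine:

```python
from collections import deque

def huffman_cost(freqs):
    """Total weighted codeword length of an optimal Huffman code, given
    frequencies in nondecreasing sorted order. Two-queue variant of
    Huffman's greedy algorithm: each merge is O(1), so O(n) overall."""
    leaves = deque(freqs)       # original weights, already sorted
    internal = deque()          # merged weights, emitted in sorted order

    def pop_min():
        if internal and (not leaves or internal[0] <= leaves[0]):
            return internal.popleft()
        return leaves.popleft()

    cost = 0
    while len(leaves) + len(internal) > 1:
        s = pop_min() + pop_min()   # merge the two smallest weights
        internal.append(s)
        cost += s                   # each merge deepens its subtree by 1
    return cost

print(huffman_cost([1, 1, 2, 3]))   # 13: depths 3, 3, 2, 1
```

Summing merge weights equals summing frequency × depth, since each leaf's frequency is counted once per merge on its root path.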