2004
DOI: 10.1007/978-3-540-24767-8_20
Speculative Parallelization of a Randomized Incremental Convex Hull Algorithm

Abstract: Finding the fastest algorithm to solve a problem is one of the main issues in Computational Geometry. Focusing only on worst-case analysis or asymptotic computations leads to the development of complex data structures or hard-to-implement algorithms. Randomized algorithms appear in this scenario as a very useful tool in order to obtain easier implementations within a good expected time bound. However, parallel implementations of these algorithms are hard to develop and require an in-depth understandi…

Cited by 8 publications (8 citation statements)
References 18 publications
“…Smaller input sets were not considered, since their sequential execution time took only a few seconds in the systems under test. On the other hand, the use of bigger input sets leads to similar results in terms of speedup [57], so we do not consider them either. The sets of points have been generated using the random points generator in CGAL 2.4 [58] and have been randomly ordered using its shuffle function.…”
Section: Results
confidence: 72%
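The input-set preparation described in the excerpt above (random points inside a disc, then randomly reordered) can be sketched without CGAL in plain Python. This is an illustrative stand-in, not the authors' code; they used CGAL 2.4's generator and shuffle function.

```python
import math
import random

def random_points_in_disc(n, radius=1.0, seed=0):
    """Generate n points uniformly distributed inside a disc,
    then shuffle them into random insertion order.
    Plain-Python stand-in for CGAL's random point generator."""
    rng = random.Random(seed)
    points = []
    for _ in range(n):
        # Taking sqrt of the radial coordinate yields a uniform
        # density over the disc's area, not just over the radius.
        r = radius * math.sqrt(rng.random())
        theta = 2 * math.pi * rng.random()
        points.append((r * math.cos(theta), r * math.sin(theta)))
    rng.shuffle(points)  # analogous to the shuffle step in the excerpt
    return points
```

Random insertion order matters here: the expected-time bounds of randomized incremental algorithms assume the points arrive in a random permutation.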
“…In order to compare the performance of the speculative version against the sequential algorithm, we have implemented a Fortran version of Clarkson et al.'s algorithm, augmenting the sequential code manually for speculative parallelization [57]. This task could be performed automatically by a state-of-the-art compiler.…”
Section: Speculative Parallelization
confidence: 99%
“…Fortunately, an incorrect choice for this value will not affect the obtained speedup significantly since it is only used when the number of expected dependences is considered low enough. An incorrect choice is much more serious if we use a fixed chunk size for the execution of the entire loop, as shown in [31].…”
Section: Choosing the Maximum Chunk Size
confidence: 99%
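The policy described in the excerpt above, where the maximum chunk size is applied only when few dependences are expected, can be sketched as follows. The function name, the equal-share baseline, and the `few_dependences_expected` flag are all hypothetical illustrations of that idea, not the paper's actual interface.

```python
def next_chunk_size(remaining, n_procs, max_chunk, few_dependences_expected):
    """Sketch: cap the chunk size only in the low-dependence regime,
    so a poor choice of max_chunk has limited impact on speedup.
    All names here are illustrative, not taken from the paper."""
    # Baseline: give each processor an equal share of what remains.
    chunk = max(1, remaining // n_procs)
    if few_dependences_expected:
        # The cap is consulted only here; elsewhere the scheduler
        # is free to pick larger chunks.
        chunk = min(chunk, max_chunk)
    return chunk
```

Contrast this with a fixed chunk size, which applies for the entire loop: there, a misfit value degrades every scheduling decision rather than only the low-dependence phase.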
“…The version of MESETA that will be considered is the one that uses GSS for both the increasing and decreasing parts of the loop execution: as we saw above, this function leads to slightly better speedups than the other alternatives. The optimum block size for the stable part of the loop has been experimentally obtained [3], turning out to be around 2 500 for the disc and 5 000 for the square, independently of the number of processors. Figure 6 shows the relative speedup of both approaches in the execution of the Convex Hull for a 40-million-point input set, both disc- and square-shaped.…”
Section: Performance Evaluation of MESETA
confidence: 98%
“…Fixed-Size Chunking will be used with the chunk size that leads to the maximum speedup for this particular problem [3]: 1 024 iterations for the disc and 4 096 for the square. GSS will be used with x = 1.…”
Section: Performance Evaluation of MESETA
confidence: 99%
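The two strategies compared in the excerpt above can be sketched in Python (my own illustrative implementations; the paper's code is in Fortran, and the role of the `x` parameter here, scaling the per-chunk share, is my reading of the excerpt):

```python
def fixed_size_chunks(n_iters, chunk_size):
    """Fixed-Size Chunking (FSC): every chunk has the same size,
    except possibly the last one."""
    return [min(chunk_size, n_iters - start)
            for start in range(0, n_iters, chunk_size)]

def gss_chunks(n_iters, n_procs, x=1):
    """Guided Self-Scheduling (GSS): each chunk takes ceil(R / (x * P))
    of the R remaining iterations, so chunks shrink toward the end of
    the loop, trading scheduling overhead against load balance."""
    chunks, remaining = [], n_iters
    while remaining > 0:
        chunk = max(1, -(-remaining // (x * n_procs)))  # ceiling division
        chunks.append(chunk)
        remaining -= chunk
    return chunks
```

With x = 1 and 4 processors, GSS hands out 25, 19, 14, … of a 100-iteration loop, whereas FSC with chunk size 1 024 issues identical chunks regardless of how much work remains.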