11th International Conference on Parallel and Distributed Systems (ICPADS'05)
DOI: 10.1109/icpads.2005.40

A Parallel Implementation of 2-D/3-D Image Registration for Computer-Assisted Surgery

Cited by 4 publications (7 citation statements)
References: 28 publications
“…Threads have local memory that can be readily implemented on distributed memory platforms such as clusters. Using MPI, we distributed the gradient computation equally across nodes in a small cluster similar to [3]. This distribution is possible by virtue of the fact that each finite difference calculation for each control point is independent and requires only the neighboring voxels and control points to calculate.…”
Section: Exploiting Multiple Forms Of Parallelism
confidence: 99%
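A minimal sketch of the distribution pattern this statement describes, assuming independent per-control-point finite differences gathered across MPI ranks; the `local_cost` helper and the `N_CP` count are hypothetical stand-ins, not code from the cited work:

```c
/* Sketch: distribute per-control-point finite differences with MPI. */
#include <mpi.h>
#include <stdlib.h>

#define N_CP 4096  /* number of B-spline control points (assumed) */

/* Stand-in for the metric change when control point i is perturbed by h;
 * a real implementation would sample only the voxels in its local support. */
static double local_cost(int i, double h) {
    double x = (double)i * 0.01 + h;
    return x * x;  /* dummy quadratic cost */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Block-partition the control points equally across ranks. */
    int base = N_CP / size, rem = N_CP % size;
    int mine = base + (rank < rem ? 1 : 0);
    int lo = rank * base + (rank < rem ? rank : rem);

    double h = 1e-3;
    double *g_local = malloc(mine * sizeof(double));
    for (int k = 0; k < mine; ++k) {
        int i = lo + k;  /* each entry is independent of the others */
        g_local[k] = (local_cost(i, +h) - local_cost(i, -h)) / (2.0 * h);
    }

    /* Assemble the full gradient on every rank. */
    int *counts = malloc(size * sizeof(int));
    int *displs = malloc(size * sizeof(int));
    for (int r = 0, d = 0; r < size; ++r) {
        counts[r] = base + (r < rem ? 1 : 0);
        displs[r] = d;
        d += counts[r];
    }
    double *g = malloc(N_CP * sizeof(double));
    MPI_Allgatherv(g_local, mine, MPI_DOUBLE,
                   g, counts, displs, MPI_DOUBLE, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```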
“…Optimization level parallelism represents those parts of an algorithm that can run in parallel given the basic unit is an iteration of the image registration routine, such as gradient descent invocations from different starting points [3] or with a distributed genetic algorithm [8]. Volume level parallelism is a generalization of optimization level parallelism where the computational units operate on entire volumes, like computing a multidimensional gradient on whole image volumes [3].…”
Section: Related Work
confidence: 99%
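Optimization-level parallelism of the multi-start kind might look like the following sketch, where each MPI rank descends from a different starting point and an `MPI_MINLOC` reduction selects the winner; the 1-D `cost`/`grad` pair is a dummy stand-in for the registration metric:

```c
/* Sketch: optimization-level parallelism as multi-start gradient descent. */
#include <mpi.h>
#include <stdio.h>

/* Dummy 1-D cost standing in for the image-similarity metric. */
static double cost(double t) { return (t - 3.0) * (t - 3.0); }
static double grad(double t) { return 2.0 * (t - 3.0); }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double t = -10.0 + 5.0 * rank;   /* a different starting point per rank */
    for (int it = 0; it < 200; ++it)
        t -= 0.1 * grad(t);          /* plain gradient descent */

    /* Pair (cost, rank) so MPI_MINLOC tells every rank who won. */
    struct { double val; int rank; } in = { cost(t), rank }, out;
    MPI_Allreduce(&in, &out, 1, MPI_DOUBLE_INT, MPI_MINLOC, MPI_COMM_WORLD);

    if (rank == out.rank)
        printf("rank %d found the best transform t=%f (cost %f)\n",
               rank, t, out.val);

    MPI_Finalize();
    return 0;
}
```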
“…Ino et al [9] use this idea (which they call "speculative parallelism") to promote faster convergence in their time-critical registration application. Since the best optimization parameters are difficult to identify a priori, multiple instances of the same algorithm are launched with different parameters.…”
Section: Optimization-level Parallelism
confidence: 99%
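A minimal sketch of this speculative scheme, assuming the speculated parameter is the step size and that any rank's convergence should stop all instances; the dummy `cost`/`grad` again stand in for the real metric:

```c
/* Sketch: "speculative parallelism" — the same optimizer runs on every rank
 * with a different step size, and all ranks stop once any one converges. */
#include <mpi.h>
#include <math.h>
#include <stdio.h>

static double cost(double t) { return (t - 3.0) * (t - 3.0); }
static double grad(double t) { return 2.0 * (t - 3.0); }

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double step = 0.01 * (rank + 1);  /* the speculated parameter */
    double t = -10.0;
    int any_done = 0;

    for (int it = 0; it < 10000 && !any_done; ++it) {
        t -= step * grad(t);
        int done = fabs(grad(t)) < 1e-6;   /* local convergence test */
        /* Synchronize every few iterations; all ranks quit on first success. */
        if (it % 10 == 0)
            MPI_Allreduce(&done, &any_done, 1, MPI_INT, MPI_LOR,
                          MPI_COMM_WORLD);
    }

    if (rank == 0)
        printf("stopped after the first instance converged; cost %f\n", cost(t));
    MPI_Finalize();
    return 0;
}
```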
“…For example, an optimization iteration could be pipelined (applying one trial transform to the moving image while generating another candidate transform). Ino et al [9] discuss the potential of "task parallelism" in accelerating the gradient computation of a rigid registration algorithm. This is possible, since independent finite difference calculations are done using the entire volume.…”
Section: Volume-level Parallelism
confidence: 99%
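This kind of task parallelism might be sketched as follows, with the six rigid-transform finite differences handed out round-robin across ranks; the `metric` function is a dummy placeholder for the full-volume similarity measure:

```c
/* Sketch: "task parallelism" over the six rigid-transform parameters.
 * Each rank evaluates finite differences for a subset of parameters;
 * every evaluation uses the entire volume but is independent of the others. */
#include <mpi.h>
#include <string.h>

#define N_PARAMS 6  /* 3 rotations + 3 translations */

/* Stand-in for the full-volume similarity metric under transform p. */
static double metric(const double p[N_PARAMS]) {
    double s = 0.0;
    for (int i = 0; i < N_PARAMS; ++i) s += p[i] * p[i];
    return s;  /* dummy */
}

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    double p[N_PARAMS] = {0.1, -0.2, 0.05, 1.0, -1.5, 0.3}, h = 1e-4;
    double g_local[N_PARAMS] = {0}, g[N_PARAMS];

    /* Round-robin the six independent finite-difference tasks over ranks. */
    for (int i = rank; i < N_PARAMS; i += size) {
        double plus[N_PARAMS], minus[N_PARAMS];
        memcpy(plus, p, sizeof p);
        memcpy(minus, p, sizeof p);
        plus[i] += h;
        minus[i] -= h;
        g_local[i] = (metric(plus) - metric(minus)) / (2.0 * h);
    }

    /* Sum the disjoint partial gradients so every rank holds the full one. */
    MPI_Allreduce(g_local, g, N_PARAMS, MPI_DOUBLE, MPI_SUM, MPI_COMM_WORLD);

    MPI_Finalize();
    return 0;
}
```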