1996
DOI: 10.1007/3540617795_33

Computer-assisted generation of PVM/C++ programs using CAP

Abstract: Parallelizing an algorithm consists of dividing the computation into a set of sequential operations, assigning the operations to threads, synchronizing the execution of threads, specifying the data transfer requirements between threads and mapping the threads onto processors. With current software technology, writing a parallel program executing the parallelized algorithm involves mixing sequential code with calls to a communication library such as PVM, both for communication and synchronization. This contribu…
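To make the abstract's point concrete, the fragment below sketches the kind of hand-written PVM code the paper argues should be generated rather than written by hand: sequential computation interleaved with explicit spawn, pack/send, and blocking-receive calls. It is not taken from the paper; the `worker` executable name and the data layout are assumptions made only for illustration.

```cpp
// Minimal sketch (not from the paper): a PVM master that spawns workers,
// sends each a block of integers, and collects partial sums.
// Assumes PVM 3 and a separately built "worker" executable.
#include <pvm3.h>
#include <cstdio>

int main() {
    const int NWORKERS = 4;
    int tids[NWORKERS];

    pvm_mytid();  // enroll this task in the PVM virtual machine

    // Process management by hand: spawn the worker tasks.
    int started = pvm_spawn((char *)"worker", nullptr, PvmTaskDefault,
                            (char *)"", NWORKERS, tids);
    if (started < NWORKERS) { pvm_exit(); return 1; }

    int data[100];
    for (int i = 0; i < 100; ++i) data[i] = i;   // sequential setup code

    // Communication by hand: pack one block and send it to each worker.
    for (int w = 0; w < NWORKERS; ++w) {
        pvm_initsend(PvmDataDefault);
        pvm_pkint(&data[w * 25], 25, 1);
        pvm_send(tids[w], 1 /* msgtag */);
    }

    // Synchronization by hand: block until every worker has replied.
    int total = 0;
    for (int w = 0; w < NWORKERS; ++w) {
        int partial = 0;
        pvm_recv(-1, 2 /* msgtag */);            // receive from any worker
        pvm_upkint(&partial, 1, 1);
        total += partial;
    }
    std::printf("total = %d\n", total);

    pvm_exit();
    return 0;
}
```

CAP's goal, as described in the abstract and in the citing papers below, is to generate this communication and synchronization code automatically from a higher-level specification.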

Cited by 6 publications (6 citation statements) · References 5 publications
“…This specification is translated automatically into a C++ source program. At program startup time, the CAP runtime allocates the program threads to the available processors, using the information stored in a configuration file [1]. The macro data flow model which underlies the CAP approach has also been used successfully by the creators of the MENTAT parallel programming language. Thanks to the automatic compilation of the parallel application, the application programmer does not need to explicitly program the protocols to exchange data between parallel processes and to ensure their synchronization.…”
Section: The Computer-aided Parallelization Framework (mentioning)
confidence: 99%
“…The time required to merge a tile into the visualization window is t_m ∝ TileSize². The duration of the process-and-gather operation is (Equation 1): T = t_d + I·t_p + P·t_n + t_m (1). The assumptions behind the execution schedule of Figure 5 are that tile accesses are faster than tile processing steps (t_d < D·t_p, where D is the number of disks per S/P node), that P network transfer times are faster than a single tile processing step (P·t_n < t_p), and that merging a tile into a window is faster than a network transfer step (t_m < t_n)…”
Section: Theoretical Performance Analysis (mentioning)
confidence: 99%
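
Under the reconstructed reading of Equation 1 above, T = t_d + I·t_p + P·t_n + t_m, the three scheduling assumptions can be checked with a short calculation. The timing values below and the interpretation of I as the number of tiles processed per node are illustrative assumptions, not figures from the cited paper.

```cpp
// Illustrative check of the reconstructed Equation 1 and the three
// pipelining assumptions; all timing values are made up for the example.
#include <cstdio>

int main() {
    const double t_d = 8.0;   // tile disk access time (ms)
    const double t_p = 5.0;   // tile processing time (ms)
    const double t_n = 1.0;   // tile network transfer time (ms)
    const double t_m = 0.5;   // tile merge time (ms)
    const int    D   = 4;     // disks per S/P node
    const int    P   = 4;     // S/P nodes
    const int    I   = 64;    // tiles processed per node (assumed meaning of I)

    // T = t_d + I*t_p + P*t_n + t_m  (reconstructed Equation 1)
    double T = t_d + I * t_p + P * t_n + t_m;
    std::printf("T = %.1f ms\n", T);

    // Schedule assumptions quoted in the performance analysis:
    std::printf("t_d < D*t_p : %s\n", t_d < D * t_p ? "ok" : "violated");
    std::printf("P*t_n < t_p : %s\n", P * t_n < t_p ? "ok" : "violated");
    std::printf("t_m < t_n   : %s\n", t_m < t_n     ? "ok" : "violated");
    return 0;
}
```
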
“…The CAP specification of a parallel program is described in a simple formal language, an extension of C++. This specification is translated into a C++ source program, which, after compilation, runs on multiple processors according to a configuration map specifying the mapping of the threads running the operations onto the set of available processors [3]. The macro data flow model which underlies the CAP approach has also been successfully used by the creators of the MENTAT parallel programming language [5].…”
Section: Computer-aided Parallelization: The Concept (mentioning)
confidence: 99%
“…This paper's contribution is to show that dynamic-pipelined algorithms can be compactly specified in CAP and achieve good performance. CAP (Computer-Aided Parallelization [3]) is a C++ language extension which supports the specification of pipelined concurrent programs. CAP's framework is based on decomposing high-level operations such as 2-D and 3-D image reconstruction, optimization problems or mathematical computations into a set of sequential suboperations with data dependencies.…”
Section: Introduction (mentioning)
confidence: 99%
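
The decomposition this citing paper describes, a high-level operation split into sequential suboperations connected by data dependencies and executed as a pipeline, can be sketched with plain C++ threads and a queue. This is not CAP syntax; the suboperation names and the two-stage structure are assumptions made only to illustrate the macro data flow idea.

```cpp
// Not CAP syntax: a plain C++ sketch of a two-stage pipeline in which
// a "process" suboperation feeds a "merge" suboperation through a queue,
// mirroring the sequential-suboperations-with-data-dependencies idea.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

// Minimal thread-safe queue carrying intermediate tokens between stages.
template <typename T>
class Channel {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void put(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T get() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};

int main() {
    const int kTiles = 8;
    Channel<int> processed;          // data dependency: process -> merge

    // Suboperation 1: "process" each tile (here: square its index).
    std::thread process([&] {
        for (int tile = 0; tile < kTiles; ++tile)
            processed.put(tile * tile);
        processed.put(-1);           // sentinel: no more tiles
    });

    // Suboperation 2: "merge" results as they arrive, pipelined with stage 1.
    std::thread merge([&] {
        int sum = 0, v;
        while ((v = processed.get()) != -1) sum += v;
        std::printf("merged result = %d\n", sum);
    });

    process.join();
    merge.join();
    return 0;
}
```

In CAP itself, according to the citation statements above, this inter-thread wiring is produced by the CAP-to-C++ translator rather than written by hand.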