Ninth International Workshop on High-Level Parallel Programming Models and Supportive Environments, 2004. Proceedings.
DOI: 10.1109/hips.2004.1299190
The cascade high productivity language

Abstract: The strong focus of recent High End Computing efforts on performance has resulted in a low-level parallel programming paradigm characterized by explicit control over message-passing in the framework of a fragmented programming model. In such a model, object code performance is achieved at the expense of productivity, conciseness, and clarity. This paper describes the design of Chapel, the Cascade High Productivity Language, which is being developed in the DARPA-funded HPCS project Cascade led by Cray Inc. Chape…

Cited by 91 publications (69 citation statements)
References 20 publications (17 reference statements)
“…Table II presents such measurements, corroborating our statement that the proposed model is simple to use. This affirmation is further substantiated by the facts that: 1) most array manipulation methods can resort to the default distribution; 2) (we omit it from the graph because the enlargement of the scale affects the readability of the remaining data). Both these approaches provide a low level of abstraction that requires the explicit management of the execution flows and of their interaction, which is error-prone and results in hard-to-maintain code.…”
Section: Productivity Analysis
confidence: 85%
“…Like all array programming languages, PGAS languages are restricted to the SPMD paradigm, which is neither flexible nor supportive of nested parallelism. The languages X10 [1] and Chapel [2] overcome this limitation by extending PGAS with processing-unit abstractions (localities) to which code may be shipped and executed. Language constructs allow a) the asynchronous spawning of tasks in localities, b) the distribution of arrays across localities and the subsequent spawning of activities to operate over the distributed data, and c) the management of the results of such asynchronous activities.…”
Section: Productivity Analysis
confidence: 99%
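The a)/b)/c) pattern above — spawn asynchronous tasks at localities over partitioned data, then manage their results — can be sketched in plain Python as an illustrative analogue (not X10 or Chapel syntax; worker threads here merely stand in for localities, and all names are hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

# b) "Distribute" an array across localities: partition into per-locality chunks.
data = list(range(16))
num_localities = 4
chunks = [data[i::num_localities] for i in range(num_localities)]

def local_sum(chunk):
    # Work executed "at" one locality over its slice of the distributed data.
    return sum(chunk)

with ThreadPoolExecutor(max_workers=num_localities) as pool:
    # a) Asynchronously spawn one task per locality.
    futures = [pool.submit(local_sum, c) for c in chunks]
    # c) Manage the results of the asynchronous activities.
    partial = [f.result() for f in futures]

total = sum(partial)
print(total)  # 120 == sum(range(16))
```

In X10 and Chapel the shipping of code to a locality and the distribution of the array are first-class language constructs rather than library calls, but the control structure is analogous.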
“…Stream processing languages [Mattson 2002; Buck et al. 2004] also build upon a two-tiered memory model [Labonte et al. 2004], choosing to differentiate between on- and off-chip storage. Modern parallel language efforts [Charles et al. 2005; Callahan et al. 2004; Allen et al. 2005] support locality-cognizant programming through the concept of distributions (from ZPL [Deitz et al. 2004]). A distribution is a map of array data to a set of machine locations, facilitating a single program namespace despite execution on nodes with physically distinct address spaces.…”
Section: Related Work
confidence: 99%
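A distribution in the sense described above is just a mapping from global array indices to machine locations. A minimal sketch of one such map — a block distribution, with hypothetical names and node ids standing in for machine locations:

```python
def block_distribution(n, num_nodes):
    """Map each global index 0..n-1 to an owner node under a block distribution."""
    block = -(-n // num_nodes)  # ceiling division: elements per node
    return {i: i // block for i in range(n)}

# 10 global indices distributed over 3 nodes:
# node 0 owns 0-3, node 1 owns 4-7, node 2 owns 8-9.
dist = block_distribution(10, 3)
owners = [dist[i] for i in range(10)]
print(owners)  # [0, 0, 0, 0, 1, 1, 1, 1, 2, 2]
```

The single program namespace follows from this map: code indexes the array globally, and the distribution decides which node's memory (and thus which communication, if any) serves each access.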
“…Mechanisms provided to express memory locality in existing parallel languages, such as the designation of local and global arrays in UPC [Carlson et al. 1999], Co-Array Fortran [Numrich and Reid 1998], and Titanium [Yelick et al. 1998], and distributions over locales as in ZPL [Deitz et al. 2004], Chapel [Callahan et al. 2004], and X10 [Charles et al. 2005], do not solve the problem of memory management on exposed-communication architectures. These existing approaches describe the distribution and horizontal communication of data among the nodes of a parallel machine.…”
Section: Introduction
confidence: 99%