Proceedings of the 46th Annual Design Automation Conference 2009
DOI: 10.1145/1629911.1630048
The Cilk++ concurrency platform

Abstract: The availability of multicore processors across a wide range of computing platforms has created a strong demand for software frameworks that can harness these resources. This paper overviews the Cilk++ programming environment, which incorporates a compiler, a runtime system, and a race-detection tool. The Cilk++ runtime system guarantees to load-balance computations effectively. To cope with legacy codes containing global variables, Cilk++ provides a "hyperobject" library which allows races on nonlocal variabl…

Cited by 153 publications (108 citation statements)
References 21 publications
“…Cilk and its later iterations [1,18] are languages whose underlying execution model is more fine grain than classical shared-memory models. However, they do not implement dataflow features to express computations in terms of data and/or event dependencies.…”
Section: Related Work (mentioning)
confidence: 99%
“…OpenMP [33], Intel TBB [4], Cilk[++] [22,61] or even languages like Erlang [94], task scheduling has begun to move outside that realm with the rise of frameworks for cluster and cloud computing.…”
Section: Task Scheduling (mentioning)
confidence: 99%
“…The difficulty in writing multithreaded parallel programs has led to the development of higher-level abstractions for parallel programming, such as task-parallel languages [6,21,13,3], domain-specific parallel languages, pattern libraries that hide low-level threads from the programmer [33] and task-parallel programming models [7,31,2,42]. Compared to traditional thread programming, task-parallel programs are easier to write, more portable, and scale better, because the parallelism is not hard-wired into the program, but created at runtime, as needed.…”
Section: Introduction (mentioning)
confidence: 99%