2014 22nd Euromicro International Conference on Parallel, Distributed, and Network-Based Processing
DOI: 10.1109/pdp.2014.13
Loop Parallelism: A New Skeleton Perspective on Data Parallel Patterns

Abstract: Traditionally, skeleton-based parallel programming frameworks support data parallelism by providing the programmer with a comprehensive set of data parallel skeletons, based on different variants of map and reduce patterns. On the other hand, more conventional parallel programming frameworks give application programmers the possibility to introduce parallelism in the execution of loops with a relatively small programming effort. In this work, we discuss a "ParallelFor" skeleton provided within…

Cited by 17 publications (15 citation statements) | References 10 publications
“…However, it makes it possible to define custom scheduling policies, improving the flexibility of the approach. Moreover, the scheduler implementation we used has been proven efficient and feasible in similar contexts on state-of-the-art multicore architectures with up to 32/64 cores, using a variety of different applications [12]. This is also confirmed by this work (Section 5.2, an application characterised by very fine-grained computation).…”
Section: Methods (supporting)
confidence: 63%
“…It provides developers with a set of high-level parallel programming patterns (aka algorithmic skeletons), obtained by the composition of two basic algorithmic skeletons: a farm skeleton and a pipeline skeleton. Leveraging the farm skeleton, FastFlow exposes a ParallelFor pattern [16], where farm workers are sequential wrappers that execute chunks of the loop iterations, of the form for(idx=start;idx<stop;idx+=step). Just like TBB, FastFlow's ParallelFor pattern uses C++11 lambda functions as a concise and elegant way to create a function object: lambdas can "capture" the state of non-local variables by value or by reference, and allow functions to be syntactically defined where they are needed.…”
Section: FastFlow (mentioning)
confidence: 99%
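The chunked-iteration idea described in the statement above can be sketched in plain C++11. This is a minimal illustration of splitting a for(idx=start;idx<stop;idx+=step) iteration space into per-worker chunks executed by threads, with the loop body passed as a lambda; it is not FastFlow's actual implementation, and the helper name parallel_for_sketch is hypothetical.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical sketch (not FastFlow code): split the iteration space
// [start, stop) with stride `step` into contiguous chunks, one per worker,
// and run each chunk in its own thread. The body is any callable taking
// the iteration index, e.g. a C++11 lambda capturing state by reference.
template <typename Body>
void parallel_for_sketch(long start, long stop, long step,
                         unsigned nworkers, Body body) {
    long niters = (stop - start + step - 1) / step;   // total iterations
    long chunk  = (niters + nworkers - 1) / nworkers; // iterations per worker
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < nworkers; ++w) {
        long first = start + static_cast<long>(w) * chunk * step;
        long last  = std::min(stop, first + chunk * step);
        if (first >= stop) break;
        // Each worker executes its chunk sequentially:
        //   for (idx = first; idx < last; idx += step) body(idx);
        workers.emplace_back([first, last, step, &body] {
            for (long idx = first; idx < last; idx += step) body(idx);
        });
    }
    for (auto &t : workers) t.join(); // barrier at the end of the loop
}
```

A caller would write, for instance, parallel_for_sketch(0, n, 1, 4, [&](long i){ out[i] = f(in[i]); }); the lambda captures out and in by reference, mirroring the capture semantics the citation statement highlights.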
“…On the contrary, the termination phase is always computed sequentially in the current implementation. Both the map and map-reduce phases have been implemented using the ParallelForReduce high-level pattern [8] already available in the FastFlow framework. The ParallelForReduce pattern allows the efficient parallelisation of parallel loops, with or without reduction variables.…”
Section: Skeleton Implementation (mentioning)
confidence: 99%
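The loop-with-reduction-variable pattern mentioned above can be sketched as follows: each worker reduces its own chunk into a private partial result, and the partials are combined sequentially at the end, avoiding data races on the shared reduction variable. This is an illustrative sketch in the spirit of ParallelForReduce, not FastFlow's API; the name parallel_reduce_sketch is hypothetical.

```cpp
#include <algorithm>
#include <thread>
#include <vector>

// Hypothetical sketch (not FastFlow code): parallel loop with a reduction
// variable. Each worker reduces its chunk of [start, stop) into a private
// partial, initialised to the reduction identity; the per-worker partials
// are then combined sequentially, which mirrors a map phase followed by a
// sequential final reduction.
template <typename T, typename MapF, typename ReduceF>
T parallel_reduce_sketch(long start, long stop, unsigned nworkers,
                         T identity, MapF map, ReduceF reduce) {
    std::vector<T> partial(nworkers, identity); // one slot per worker
    std::vector<std::thread> workers;
    long niters = stop - start;
    long chunk  = (niters + nworkers - 1) / nworkers;
    for (unsigned w = 0; w < nworkers; ++w) {
        long first = start + static_cast<long>(w) * chunk;
        long last  = std::min(stop, first + chunk);
        if (first >= stop) break;
        workers.emplace_back([&partial, &map, &reduce, w, first, last] {
            for (long i = first; i < last; ++i)
                partial[w] = reduce(partial[w], map(i)); // race-free: private slot
        });
    }
    for (auto &t : workers) t.join();
    // Sequential combine of the per-worker partials (the final reduction).
    T result = identity;
    for (auto &p : partial) result = reduce(result, p);
    return result;
}
```

For example, summing the first n integers would be parallel_reduce_sketch(0, n, 4, 0L, [](long i){ return i; }, [](long a, long b){ return a + b; }).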