2012
DOI: 10.1016/j.micpro.2011.05.006
Optimization strategies in different CUDA architectures using llCoMP

Cited by 15 publications (5 citation statements)
References 28 publications
“…Another drawback of this model is the manual management of kernel compilation at run-time, for different architectures in different contexts, that is desired to be generalized and simplified. The llCoMP tool (Reyes and de Sande, 2012) is a source-to-source compiler that translates C annotated code to MPI + OpenMP or CUDA. However, it does not support the joint use of CUDA with the other parallel models.…”
Section: Related Work
confidence: 99%
“…accULL [17] is the first open source OpenACC compiler that has already implemented some major directives and runtime calls of OpenACC. It used YaCF compiler framework [16] and a standalone runtime library Frangollo that is independent of any compiler. This implementation supports both CUDA and OpenCL platforms.…”
Section: Related Work
confidence: 99%
“…The first one is the lack of computing frameworks that can easily schedule the workload in such complex environments. Some works have been presented to integrate the use of different programming languages or tools [189,190]. However, the programmer still needs to tackle different design and implementation problems related with each level of parallelism.…”
Section: Problem Description: The Need For Speed And The Lack Of An U…
confidence: 99%
“…However, it does not support conversions for CUDA. Similar to the previous approach, the tool llCoMP [189] is another source-to-source compiler that translates C annotated code to MPI + OpenMP or CUDA code, that is only focused in parallel-loop problems. Additionally, this compiler does not support the joint use of CUDA with the other parallel models, leaving its suitability for heterogeneous environments limited.…”
Section: State Of the Art: Looking For One Tool To Rule All Parallel …
confidence: 99%