18th International Conference on Parallel Architectures and Compilation Techniques (PACT 2009)
DOI: 10.1109/pact.2009.18

Polyhedral-Model Guided Loop-Nest Auto-Vectorization

Abstract: Optimizing compilers apply numerous interdependent optimizations, leading to the notoriously difficult phase-ordering problem: that of deciding which transformations to apply and in which order. Fortunately, new infrastructures such as the polyhedral compilation framework host a variety of transformations, facilitating the efficient exploration and configuration of multiple transformation sequences. Many powerful optimizations, however, remain external to the polyhedral framework, including vectorization. The l…

Cited by 92 publications (44 citation statements)
References 17 publications
“…In this work we automatically apply post-transformations to expose parallelism at the innermost loop level, if possible. Previous work on SIMD vectorization for affine programs has proposed effective solutions to expose inner-loop-level parallelism [12,37], and we seamlessly reuse those techniques to enable effective loop pipelining on the FPGA. This is achieved by using additional constraints during and after the tiling hyperplanes computation, to preserve one level of inner parallelism.…”
Section: Loop Pipelining and Task Parallelism (citation type: mentioning; confidence: 99%)
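As a concrete illustration of the kind of post-transformation this statement describes (a hypothetical sketch of loop interchange, written for this report rather than taken from the cited works): the dependence-carrying loop is kept outermost so that the innermost loop is free of carried dependences and can be vectorized or pipelined.

    #include <stdio.h>

    #define N 256

    static double A[N][N], B[N][N];

    /* Hypothetical example: A[i][j] reads A[i-1][j], so the i-loop carries a
     * dependence while the j iterations are independent.  Keeping i outermost
     * (interchanging it with j) leaves a dependence-free innermost j-loop
     * that can be vectorized or pipelined. */
    void expose_inner_parallelism(void) {
        for (int i = 1; i < N; i++)        /* dependence-carrying loop stays outer */
            for (int j = 0; j < N; j++)    /* innermost loop: no carried dependence */
                A[i][j] = A[i - 1][j] + B[i][j];
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) { A[i][j] = 1.0; B[i][j] = 2.0; }
        expose_inner_parallelism();
        printf("%f\n", A[N - 1][N - 1]);
        return 0;
    }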
“…Several previous studies have shown how tiling, parallelization, vectorization or data locality enhancement can be efficiently addressed in an affine transformation framework [21], [34], [14], [24], [36]. Any loop transformation can be represented in the polyhedral representation, and composing arbitrarily complex sequences of loop transformations is seamlessly handled by the framework.…”
Section: Optimization Space (citation type: mentioning; confidence: 99%)
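A minimal sketch of why such sequences compose seamlessly (our own illustration; the schedule and names are assumptions, not taken from the cited studies): tiling is itself just an affine schedule, e.g. theta(i, j) = (floor(i/T), floor(j/T), i, j), and composing it with an interchange or a skew simply yields another affine schedule, from which code is regenerated in one pass.

    #include <stdio.h>

    #define N 1024
    #define T 32    /* illustrative tile size; N is a multiple of T */

    static double C[N][N], D[N][N];

    /* Code generated from the affine schedule
     *   theta(i, j) = (floor(i/T), floor(j/T), i, j)
     * applied to a 2-deep nest computing C[i][j] += D[i][j].  Composing
     * further transformations (interchange, skew, ...) only changes the
     * schedule, not the code-generation machinery. */
    void tiled(void) {
        for (int ii = 0; ii < N; ii += T)
            for (int jj = 0; jj < N; jj += T)
                for (int i = ii; i < ii + T; i++)
                    for (int j = jj; j < jj + T; j++)
                        C[i][j] += D[i][j];
    }

    int main(void) {
        for (int i = 0; i < N; i++)
            for (int j = 0; j < N; j++) D[i][j] = 1.0;
        tiled();
        printf("%f\n", C[0][0]);   /* 1.0 */
        return 0;
    }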
“…Our approach to vectorization leverages recent analytical modeling results by Trifunovic et al. [36]. We take advantage of the polyhedral representation to restructure imperfectly nested programs, to expose vectorizable inner loops.…”
Section: SIMD-Level Parallelization (citation type: mentioning; confidence: 99%)
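A hedged sketch of that restructuring (our own example, not the code or benchmark used in the paper): an imperfectly nested transposed matrix-vector product is rewritten (initialization distributed out, loops interchanged) so that the innermost loop has no carried dependence and vectorizes.

    #include <stdio.h>

    #define N 512

    static double A[N][N], x[N], y[N];

    /* Original, imperfectly nested form (reduction carried by the inner loop):
     *   for (i) { y[i] = 0.0; for (j) y[i] += A[j][i] * x[j]; }
     * After distributing the initialization and interchanging i and j, the
     * innermost i-loop touches independent y[i] elements and streams through
     * row A[j][*] contiguously, so it is vectorizable. */
    void restructured(void) {
        for (int i = 0; i < N; i++)
            y[i] = 0.0;
        for (int j = 0; j < N; j++)
            for (int i = 0; i < N; i++)    /* no loop-carried dependence on i */
                y[i] += A[j][i] * x[j];
    }

    int main(void) {
        for (int j = 0; j < N; j++) {
            x[j] = 1.0;
            for (int i = 0; i < N; i++) A[j][i] = 1.0;
        }
        restructured();
        printf("%f\n", y[0]);   /* N */
        return 0;
    }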
“…[15,26,28,30,43]. These works usually focus on the back-end part, that is, the actual SIMD code generation from a parallel loop [15,28,30], or on the high-level loop transformation angle only [12,26,38,40]. To the best of our knowledge, our work is the first to address both problems simultaneously by setting a well-defined interface between a powerful polyhedral high-level transformation engine and a specialized SIMD code generator.…”
Section: Related Work (citation type: mentioning; confidence: 99%)
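To make the division of labor concrete (a hypothetical sketch with SSE intrinsics, not the code generator described in the cited work): once the high-level engine has exposed a dependence-free innermost loop, the back end strip-mines it by the vector width and emits vector instructions.

    #include <stdio.h>
    #include <xmmintrin.h>

    #define N 1024   /* assumed to be a multiple of the vector width (4 floats) */

    static float a[N], b[N], c[N];

    /* Back-end view of vectorization: the parallel loop over i is
     * strip-mined by 4 and each strip is executed with SSE instructions. */
    void simd_add(void) {
        for (int i = 0; i < N; i += 4) {
            __m128 va = _mm_loadu_ps(&a[i]);
            __m128 vb = _mm_loadu_ps(&b[i]);
            _mm_storeu_ps(&c[i], _mm_add_ps(va, vb));
        }
    }

    int main(void) {
        for (int i = 0; i < N; i++) { a[i] = 1.0f; b[i] = 2.0f; }
        simd_add();
        printf("%f\n", c[N - 1]);   /* 3.0 */
        return 0;
    }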