2017 IEEE/ACM International Symposium on Code Generation and Optimization (CGO)
DOI: 10.1109/cgo.2017.7863733

ThinLTO: Scalable and incremental LTO

Cited by 27 publications (10 citation statements)
References 9 publications
“…Others [Porat et al. 1996; Zendra et al. 1997] do eliminate virtual tables entirely; however, they require whole-program visibility, which prevents separate compilation and is not always feasible in practice. A number of recent works aim at making whole-program optimization and link-time optimization more effective [Doeraene and Schlatter 2016; Johnson et al. 2017a; Sathyanathan et al. 2017] or at improving call graph analyses [Johnson et al. 2017b; Petrashko et al. 2016; Tan et al. 2017; Tip and Palsberg 2000], which can increase the scope and precision of devirtualization optimizations.…”
Section: Implementations of Dynamic Dispatch
confidence: 99%
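A minimal C++ sketch, not taken from the cited works, of the devirtualization opportunity this statement refers to; the class names are illustrative. When whole-program or link-time analysis can prove that only one implementation of a virtual interface is reachable, the indirect vtable call can be replaced by a direct, inlinable call.

```cpp
// Hypothetical illustration: devirtualization under whole-program visibility.
struct Shape {
    virtual ~Shape() = default;
    virtual double area() const = 0;
};

struct Circle final : Shape {
    double r;
    explicit Circle(double r) : r(r) {}
    double area() const override { return 3.141592653589793 * r * r; }
};

double total_area(const Shape& s) {
    // Compiled in isolation, this is an indirect call through the vtable.
    // If whole-program (e.g. link-time) analysis proves Circle is the only
    // Shape implementation in the program, the call can be devirtualized to
    // Circle::area() and inlined.
    return s.area();
}

int main() {
    Circle c{2.0};
    return total_area(c) > 0.0 ? 0 : 1;
}
```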
“…Link-time optimization [16] can offer better visibility; however, the analysis is still conservative and may err on the side of being less exhaustive to reduce prohibitive analysis cost. Whole-program link-time optimizations [17], [18] have provided less than 5% average speedup, although a lot more headroom exists, as we show in our work. Thus, despite their best efforts, compilers often fall short of eliminating runtime inefficiencies.…”
Section: Introduction
confidence: 70%
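A small C++ sketch of why link-time analyses stay conservative even with whole-program visibility; the names are hypothetical and this is only one of many such patterns. Once a function address escapes into externally visible, mutable state, proving the set of possible call targets requires an expensive whole-program points-to analysis, so compilers typically keep the call indirect.

```cpp
// Hypothetical illustration: a pattern that forces conservative analysis.
#include <cstdio>

using Callback = int (*)(int);

static int twice(int x) { return 2 * x; }

// The callback lives in mutable, externally visible state. Proving which
// functions can end up here across all call paths generally needs a costly
// whole-program points-to analysis, so the optimizer errs on the side of
// treating the indirect call conservatively.
Callback g_hook = twice;

int apply(int x) {
    return g_hook(x);  // stays an indirect call under conservative analysis
}

int main() {
    std::printf("%d\n", apply(21));
    return 0;
}
```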
“…Compilation in full LTO mode is already memory-hungry. Just keeping the whole program in memory can be a significant problem for large programs [10]. Maintaining additional information for every function and basic block could easily tip the compiler over the edge.…”
Section: Memory Usage
confidence: 99%
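A back-of-the-envelope sketch, with purely illustrative numbers, of why full LTO's peak memory grows with total program size while a summary-based scheme such as ThinLTO stays roughly bounded: full LTO materializes every module's IR in one link-time process, whereas ThinLTO's whole-program step only needs compact per-module summaries, plus one module's IR per parallel backend.

```cpp
// Hypothetical memory model; all constants are assumptions for illustration.
#include <cstdio>

int main() {
    const double modules            = 5000;   // translation units in the program
    const double ir_per_module_mb   = 4.0;    // assumed average in-memory IR size
    const double summary_per_mod_mb = 0.02;   // assumed per-module summary size
    const double backend_threads    = 16;     // ThinLTO backends running in parallel

    // Full LTO: all module IR loaded into one process at link time.
    const double full_lto_peak = modules * ir_per_module_mb;
    // ThinLTO: summaries for the whole program, plus IR for the modules
    // currently being compiled by the parallel backends.
    const double thin_lto_peak = modules * summary_per_mod_mb
                               + backend_threads * ir_per_module_mb;

    std::printf("full LTO peak ~ %.0f MB\n", full_lto_peak);  // ~20000 MB
    std::printf("ThinLTO peak  ~ %.0f MB\n", thin_lto_peak);  // ~164 MB
    return 0;
}
```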
“…Existing techniques range from simple passes merging identical functions at the compiler intermediate representation (IR) [2, 15] or the binary level [1, 13, 25] up to approaches that identify and merge similar subsequences in otherwise dissimilar functions [9, 20, 21]. As already noted by Chabbi et al. [4], these techniques have either limited benefit on code reduction or unacceptable compilation overheads for production, especially considering builds using link-time optimizations (LTO), where inter-procedural optimizations have greater opportunities but at a greater cost [10].…”
Section: Introduction
confidence: 99%
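A small C++ sketch, with hypothetical type and function names, of the redundancy that IR-level identical-function merging targets: the two functions differ only in a layout-identical parameter type, lower to structurally identical IR, and a merging pass can keep one body and redirect or alias the other to it.

```cpp
// Hypothetical illustration of identical-function merging at the IR level.
#include <cstddef>
#include <cstdint>
#include <cstdio>

struct Meters  { std::int64_t value; };
struct Seconds { std::int64_t value; };

// These two functions differ only in the (layout-identical) parameter type,
// so they lower to the same IR and machine code; an identical-function
// merging pass can fold them into a single body.
std::int64_t sum_meters(const Meters* xs, std::size_t n) {
    std::int64_t total = 0;
    for (std::size_t i = 0; i < n; ++i) total += xs[i].value;
    return total;
}

std::int64_t sum_seconds(const Seconds* xs, std::size_t n) {
    std::int64_t total = 0;
    for (std::size_t i = 0; i < n; ++i) total += xs[i].value;
    return total;
}

int main() {
    Meters  m[] = {{1}, {2}, {3}};
    Seconds s[] = {{4}, {5}};
    std::printf("%lld %lld\n",
                static_cast<long long>(sum_meters(m, 3)),
                static_cast<long long>(sum_seconds(s, 2)));
    return 0;
}
```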