2017
DOI: 10.14778/3149193.3149202

Interleaving with coroutines

Abstract: Index join performance is determined by the efficiency of the lookup operation on the involved index. Although database indexes are highly optimized to leverage processor caches, main memory accesses inevitably increase lookup runtime when the index outsizes the last-level cache; hence, index join performance drops. Still, robust index join performance becomes possible with instruction stream interleaving: given a group of lookups, we can hide cache misses in one lookup with instructions from other lookups by …

Cited by 29 publications (10 citation statements)
References 22 publications

Citation statements
“…Traditionally, interleaving has implied extensive code rewrites with techniques like group prefetching [11] and asynchronous memory access chaining [24], and has thus been avoided in production environments in favor of maintainability. Recent proposals [21,23,31,32] avoid the prohibitive code rewrites by encoding the independent tasks as coroutines, i.e., functions that suspend their execution at specified points and later resume from where they left off. Listing 1 hints at the changes required to enable interleaved execution through an example depicting a binary search implemented as a C++20 coroutine [6].…”
Section: Interleaved Execution and Coroutines
Mentioning confidence: 99%
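
The paper's Listing 1 is not reproduced on this page. Purely as an illustration of the idea the citation describes, the sketch below shows one way a binary search can be written as a C++20 coroutine: it suspends right after prefetching each probe target, so a scheduler could overlap the expected cache miss with work from other lookups. The task type, the prefetch_and_suspend awaitable, and the use of the GCC/Clang __builtin_prefetch builtin are assumptions made here; none of these names come from the cited work.

```cpp
#include <coroutine>
#include <cstddef>
#include <vector>

// A tiny eagerly started task that produces an int when it finishes.
struct task {
    struct promise_type {
        int value = 0;
        task get_return_object() {
            return task{std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_never initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_value(int v) { value = v; }
        void unhandled_exception() {}
    };
    explicit task(std::coroutine_handle<promise_type> h) : handle(h) {}
    task(task&& other) noexcept : handle(other.handle) { other.handle = nullptr; }
    task(const task&) = delete;
    ~task() { if (handle) handle.destroy(); }
    bool done() const { return handle.done(); }
    void resume() const { handle.resume(); }
    int result() const { return handle.promise().value; }
    std::coroutine_handle<promise_type> handle;
};

// Awaitable that issues a software prefetch for the next probe target and
// then suspends, so a scheduler could run another lookup while the cache
// line is being fetched (__builtin_prefetch is a GCC/Clang builtin).
struct prefetch_and_suspend {
    const void* addr;
    bool await_ready() const noexcept { return false; }
    void await_suspend(std::coroutine_handle<>) const noexcept {
        __builtin_prefetch(addr);
    }
    void await_resume() const noexcept {}
};

// Binary search over a sorted vector, suspending before each probe that is
// likely to miss in the cache; returns the matching index or -1.
task binary_search(const std::vector<int>& array, int value) {
    std::size_t lo = 0, hi = array.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        co_await prefetch_and_suspend{&array[mid]};   // hide the expected miss
        if (array[mid] < value)      lo = mid + 1;
        else if (value < array[mid]) hi = mid;
        else                         co_return static_cast<int>(mid);
    }
    co_return -1;   // not found
}

// Without interleaving, a caller would simply resume the task to completion.
int run_to_completion(task t) {
    while (!t.done()) t.resume();
    return t.result();
}
```

Resuming such a task on its own just runs the lookup to completion; the payoff comes only when a scheduler interleaves many of them, which is what the next citation statement describes.
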
“…Note that these suspensions are an essential part of converting call chain nodes from ordinary functions to coroutines, but have no direct connection to latency hiding. When the task<int> of binary_search suspends on a cache miss, execution control returns to for_each, which has a round-robin coroutine scheduler [31] managing a group of G coroutines running interleaved, where G is large enough to hide the memory latency [32]. After the evaluation of the co_await expression, the returned position is used to check whether array[position] equals value, in which case position is added to positions.…”
Section: Interleaved Execution and Coroutines
Mentioning confidence: 99%
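
Again as a hedged sketch rather than the cited implementation: the scheduler role that the statement attributes to for_each can be approximated by keeping a window of G suspended lookups and resuming them in round-robin order, admitting a new lookup whenever one finishes. The names lookup_job, lookup, round_robin, and for_each_interleaved below are invented for illustration, and the lookup coroutine simply yields after each prefetch instead of using a dedicated task<int> type.

```cpp
#include <coroutine>
#include <cstddef>
#include <vector>

// A tiny lazily started coroutine; the scheduler below owns and destroys
// the frames, so the wrapper only carries the handle out of the factory.
struct lookup_job {
    struct promise_type {
        lookup_job get_return_object() {
            return {std::coroutine_handle<promise_type>::from_promise(*this)};
        }
        std::suspend_always initial_suspend() noexcept { return {}; }
        std::suspend_always final_suspend() noexcept { return {}; }
        void return_void() {}
        void unhandled_exception() {}
    };
    std::coroutine_handle<promise_type> handle;
};

// One interleaved lookup: locate `value` in the sorted `array` and, if
// array[position] equals value, append the position to `positions`.
lookup_job lookup(const std::vector<int>& array, int value,
                  std::vector<std::size_t>& positions) {
    std::size_t lo = 0, hi = array.size();
    while (lo < hi) {
        std::size_t mid = lo + (hi - lo) / 2;
        __builtin_prefetch(&array[mid]);     // GCC/Clang builtin
        co_await std::suspend_always{};      // yield so another lookup can run
        if (array[mid] < value) lo = mid + 1; else hi = mid;
    }
    if (lo < array.size() && array[lo] == value)
        positions.push_back(lo);
}

// Round-robin scheduler: keep at most G coroutines in flight, resume them in
// turn, and admit the next pending lookup whenever one of them finishes.
void round_robin(std::vector<std::coroutine_handle<>> pending, std::size_t G) {
    std::vector<std::coroutine_handle<>> active;
    std::size_t next = 0;
    while (active.size() < G && next < pending.size())
        active.push_back(pending[next++]);   // fill the initial window

    std::size_t i = 0;
    while (!active.empty()) {
        active[i].resume();                  // run until the next suspension
        if (active[i].done()) {
            active[i].destroy();             // lookup finished; free its frame
            if (next < pending.size()) {
                active[i] = pending[next++]; // admit the next lookup
            } else {
                active[i] = active.back();   // shrink the window
                active.pop_back();
                if (active.empty()) break;
            }
        }
        i = (i + 1) % active.size();         // move on to the next lookup
    }
}

// A stand-in for the for_each role described above: start one coroutine per
// probe value and interleave them with a group size of G.
void for_each_interleaved(const std::vector<int>& array,
                          const std::vector<int>& values,
                          std::vector<std::size_t>& positions,
                          std::size_t G) {
    std::vector<std::coroutine_handle<>> handles;
    handles.reserve(values.size());
    for (int v : values)
        handles.push_back(lookup(array, v, positions).handle);
    round_robin(std::move(handles), G);
}
```

In line with the statement, G would be tuned so that the work of the other in-flight lookups roughly covers the latency of one outstanding miss.
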