2016
DOI: 10.1002/cpe.3876

A skip list for multicore

Abstract: In this paper, we introduce the Rotating skip list, the fastest concurrent skip list to date. Existing concurrent data structures experience limited scalability with the growing core count for two main reasons: threads contend while accessing the same shared data, and they require off-chip communication to synchronize. Our solution combines the rotation of a tree to maintain logarithmic complexity deterministically, a skip list structure to avoid the tree root bottleneck, and no locks to limit cache li…
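For readers unfamiliar with the underlying structure, the sketch below shows a plain, sequential probabilistic skip list: towers of forward pointers giving expected logarithmic search. It is only background for the abstract above, not the paper's Rotating skip list, which, per the abstract, maintains logarithmic complexity deterministically via rotations and avoids locks. All names and constants here are ours.

```java
import java.util.Random;

// Minimal sequential skip list sketch (illustrative only; not the Rotating skip list).
public class SimpleSkipList {
    static final int MAX_LEVEL = 16;

    static final class Node {
        final int key;
        final Node[] next;                       // forward pointers, one per level
        Node(int key, int height) { this.key = key; this.next = new Node[height]; }
    }

    private final Node head = new Node(Integer.MIN_VALUE, MAX_LEVEL);
    private final Random rnd = new Random();

    // Expected O(log n): drop one level whenever the next key overshoots.
    public boolean contains(int key) {
        Node cur = head;
        for (int lvl = MAX_LEVEL - 1; lvl >= 0; lvl--) {
            while (cur.next[lvl] != null && cur.next[lvl].key < key) cur = cur.next[lvl];
        }
        Node cand = cur.next[0];
        return cand != null && cand.key == key;
    }

    public void insert(int key) {
        Node[] preds = new Node[MAX_LEVEL];
        Node cur = head;
        for (int lvl = MAX_LEVEL - 1; lvl >= 0; lvl--) {
            while (cur.next[lvl] != null && cur.next[lvl].key < key) cur = cur.next[lvl];
            preds[lvl] = cur;                    // last node before `key` at this level
        }
        int height = 1;
        while (height < MAX_LEVEL && rnd.nextBoolean()) height++;   // geometric tower height
        Node node = new Node(key, height);
        for (int lvl = 0; lvl < height; lvl++) {
            node.next[lvl] = preds[lvl].next[lvl];
            preds[lvl].next[lvl] = node;
        }
    }
}
```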

Help me understand this report

Search citation statements

Order By: Relevance

Paper Sections

Select...
3
2

Citation Types

0
11
0

Year Published

2017
2017
2022
2022

Publication Types

Select...
6
1
1

Relationship

0
8

Authors

Journals

Cited by 15 publications (11 citation statements)
References 30 publications (58 reference statements)
“…To evaluate the skip vector, we integrated it into the Synchrobench suite [6]. Synchrobench includes a lock-free skip list based on Fraser's algorithm [5], and a number of other optimized versions, notably the "no hotspot" nonblocking skip list [1] and the "rotating" skip list [3]. It should be noted that out of all data structures tested, the skip vector alone is capable of reclaiming memory.…”
Section: Discussion
mentioning, confidence: 99%
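As a rough picture of the kind of workload such a suite drives, the Java sketch below runs several threads against one shared set with a fixed update ratio and reports total throughput. It is not Synchrobench's actual harness; the JDK's ConcurrentSkipListSet merely stands in for the skip lists compared above, and all parameters are arbitrary.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.LongAdder;

// Synchrobench-style microbenchmark loop in miniature (illustration only,
// not the real harness): a mix of contains/insert/remove on a shared set.
public class SkipListBench {
    public static void main(String[] args) throws InterruptedException {
        final int threads = 8, keyRange = 1 << 17, updatePercent = 4, durationMs = 2000;
        ConcurrentSkipListSet<Integer> set = new ConcurrentSkipListSet<>();
        // Prefill with random keys so reads have something to find.
        ThreadLocalRandom.current().ints(keyRange / 2, 0, keyRange).forEach(set::add);

        LongAdder ops = new LongAdder();
        long deadline = System.currentTimeMillis() + durationMs;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        for (int t = 0; t < threads; t++) {
            pool.execute(() -> {
                ThreadLocalRandom rnd = ThreadLocalRandom.current();
                while (System.currentTimeMillis() < deadline) {
                    int key = rnd.nextInt(keyRange), dice = rnd.nextInt(100);
                    if (dice < updatePercent / 2) set.add(key);        // ~2% inserts
                    else if (dice < updatePercent) set.remove(key);    // ~2% removes
                    else set.contains(key);                            // rest are reads
                    ops.increment();
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.MINUTES);
        System.out.printf("%d ops in %d ms (%d threads)%n", ops.sum(), durationMs, threads);
    }
}
```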
“…Over the years, there have been many proposals that make the skip list scale exceptionally well [1][2][3][4]. These techniques work best when nodes in the index and data layers each store a single element.…”
Section: Introduction
mentioning, confidence: 99%
“…Figures 2, 3, and 4 show write-heavy (WH) results for the HC, MC, and LC contention scenarios. Read-heavy (RH) results are presented in Appendix F. In our graphs, layered map {sg,ssg} refers to using C++ std::map as local structures, respectively over regular or sparse skip graphs as shared structures; lazy layered sg is the lazy variant of layered map sg; rotating is [13], nohotspot is [10], and numask is [11] as found in Synchrobench's GitHub (mid August 2019). These last three structures are the state-of-art maps we mainly intend to compare with.…”
Section: Discussion
mentioning, confidence: 99%
“…The local structures are used to "index" the dataset similarly to what is done in [11], "jumping" to positions in the shared structure where the operations take place and become visible to other threads. We implemented lazy and non-lazy variations of our technique, and, compared to previous state-of-art implementations [11,10,13], we operate at least 80% faster under high-contention settings (32% of operations being successful updates on a 2^10-sized structure), and at least 32% faster under low-contention/low-update settings (4% of operations being successful updates on a 2^17-sized structure) with 96 threads. Our partitioning scheme results in a 70% of reduction on the number of remote CAS operations, and a 41.4% increase in CAS success rate for 92 threads as compared to skip lists.…”
Section: Introduction
mentioning, confidence: 99%
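The "jump from a private index into a shared structure" idea quoted above can be pictured with the rough Java sketch below: each thread keeps a thread-local sorted map of hint nodes and uses it to start its traversal of a shared lock-free sorted list, where new nodes are published with a single CAS. This is our own illustration under simplifying assumptions (no removal, no memory reclamation), not the cited papers' code.

```java
import java.util.Map;
import java.util.TreeMap;
import java.util.concurrent.atomic.AtomicReference;

// Shared lock-free sorted list reached through per-thread hint indexes (sketch).
public class HintedLockFreeList {
    static final class Node {
        final int key;
        final AtomicReference<Node> next = new AtomicReference<>(null);
        Node(int key) { this.key = key; }
    }

    private final Node head = new Node(Integer.MIN_VALUE);   // sentinel
    // The per-thread "local structure": a private sorted index of nodes this
    // thread has already located in the shared list.
    private final ThreadLocal<TreeMap<Integer, Node>> hints =
            ThreadLocal.withInitial(TreeMap::new);

    private Node startFrom(int key) {
        Map.Entry<Integer, Node> e = hints.get().lowerEntry(key);  // hint strictly below key
        return e == null ? head : e.getValue();
    }

    public boolean contains(int key) {
        Node cur = startFrom(key);
        while (cur != null && cur.key < key) cur = cur.next.get();
        return cur != null && cur.key == key;
    }

    public boolean insert(int key) {
        while (true) {
            Node pred = startFrom(key), curr = pred.next.get();
            while (curr != null && curr.key < key) { pred = curr; curr = curr.next.get(); }
            if (curr != null && curr.key == key) return false;     // already present
            Node node = new Node(key);
            node.next.set(curr);
            if (pred.next.compareAndSet(curr, node)) {             // publish with a single CAS
                hints.get().put(key, node);                        // refresh the local index
                return true;
            }
            // CAS failed: another thread linked a node after pred; retry from a hint.
        }
    }
}
```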
“…Although there are many implementation flavors and variants of the skip list [16,18,25,29,38], those used in data store applications (i.e., databases and key-value stores) distinctively maintain data in an append-only manner [20,27,33]. This means that the skip list never overwrites existing data: instead, all mutations are processed as insertions with a different timestamp.…”
Section: Introduction
mentioning, confidence: 99%
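The append-only discipline described in this last statement can be sketched as follows: every mutation is inserted under a fresh (key, timestamp) composite key and reads return the newest version, so nothing is ever overwritten in place. The encoding and the use of the JDK's ConcurrentSkipListMap are illustrative assumptions, not the storage format of the cited systems.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentSkipListMap;
import java.util.concurrent.atomic.AtomicLong;

// Append-only versioned map over a skip list (sketch): updates are new entries.
public class AppendOnlyVersionedMap {
    // Orders entries by key, then by timestamp descending, so the newest version
    // of a key is the first entry at or after (key, Long.MAX_VALUE).
    record VersionedKey(String key, long ts) implements Comparable<VersionedKey> {
        public int compareTo(VersionedKey o) {
            int c = key.compareTo(o.key);
            return c != 0 ? c : Long.compare(o.ts, ts);   // newer timestamps sort first
        }
    }

    private final ConcurrentSkipListMap<VersionedKey, String> log = new ConcurrentSkipListMap<>();
    private final AtomicLong clock = new AtomicLong();

    // Every mutation is appended with a fresh timestamp; a delete would likewise
    // be appended as a tombstone value rather than removing anything.
    public void put(String key, String value) {
        log.put(new VersionedKey(key, clock.incrementAndGet()), value);
    }

    public String get(String key) {
        Map.Entry<VersionedKey, String> e =
                log.ceilingEntry(new VersionedKey(key, Long.MAX_VALUE));
        return (e != null && e.getKey().key().equals(key)) ? e.getValue() : null;
    }
}
```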