Proceedings of the 27th ACM Symposium on Parallelism in Algorithms and Architectures 2015
DOI: 10.1145/2755573.2755598
Transactional Acceleration of Concurrent Data Structures

Abstract: Concurrent data structures are a fundamental building block for scalable multi-threaded programs. While Transactional Memory (TM) was originally conceived as a mechanism for simplifying the creation of concurrent data structures, modern hardware TM systems lack the progress properties needed to completely obviate traditional techniques for designing concurrent data structures, especially those requiring nonblocking progress guarantees. In this paper, we introduce the Prefix Transaction Optimization (PTO) techni…

Cited by 4 publications (10 citation statements)
References 50 publications
“…To keep our usage examples of power transactions concrete, we focus on the use of locks as the alternative path, effectively enhancing the standard TLE technique [14,37]. We note, though, that power mode is equally helpful in reducing the use of any fallback path, such as the one implemented using software transactional memory (STM) [9,28], lock-free techniques [26], and so on.…”
Section: Related Work
confidence: 99%
“…First, the commit step in SCX2 is eliminated. Whereas in SCX2, we stored a pointer to the SCX-record in r.info for each r ∈ V at line 11, we store a tagged pointer to the SCX-record at line 29 in SCX3. A tagged pointer is simply a pointer that has its least significant bit set to one.…”
Section: Correctness and Progress
confidence: 99%
“…Consider an operation O that is implemented using the tree update template. One natural way to use HTM to accelerate O is to use the original operation as a fallback path, and then obtain an HTM-based fast path by wrapping O in a transaction, and performing optimizations to improve performance [23]. We call this the 2-path concurrent algorithm (2-path con).…”
Section: Introduction
confidence: 99%
“…Recent work has identified ways to use hardware transactional memory (HTM) to reduce descriptor allocation [91].…”
Section: Related Work
confidence: 99%
“…Current (and likely future) implementations of HTM offer no progress guarantees, so one must provide a lock-free fallback path to guarantee lock-free progress. The techniques in [91] accelerate the HTM-based fast path, but do nothing to reduce descriptor allocations on the fallback path. In some workloads, many operations run on the fallback path, so it is important for it to be efficient.…”
Section: Related Work
confidence: 99%