A Portable Lock-Free Bounded Queue (2016)
DOI: 10.1007/978-3-319-49583-5_4

Cited by 6 publications (3 citation statements)
References 19 publications
“…However, it appears that minimizing memory overhead in dynamic concurrent data structures has not been in the spotlight until recently. A standard way to implement a lock-free bounded queue, the major running example of this paper, is to use descriptors [15,18] or additional meta-information per element [4,16,17,19]. The overhead of the resulting solutions is proportional to the queue size: a descriptor carries additional Ω(1) data to distinguish it from a value, while per-element meta-information is, by definition, Ω(1) memory appended to each value.…”
Section: Related Work
confidence: 99%
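The "per-element meta-information" approach described in the excerpt above can be illustrated with a minimal sketch: each cell of the bounded buffer carries an atomic sequence number, which is the Ω(1) extra memory per element that the excerpt refers to. This is a generic sketch of the technique (in the style of well-known sequence-number bounded queues), not code from the cited paper; all names are illustrative.

```cpp
#include <atomic>
#include <cstddef>
#include <cstdint>
#include <vector>

// Bounded queue where every cell stores an atomic sequence number alongside
// the value. The sequence number tells threads whether the cell is free for
// the current lap (enqueue) or holds a published value (dequeue) -- this is
// the per-element Omega(1) metadata overhead.
template <typename T>
class MetaBoundedQueue {
  struct Cell {
    std::atomic<size_t> seq;  // per-element metadata
    T value;
  };
  std::vector<Cell> cells_;
  size_t mask_;
  std::atomic<size_t> head_{0}, tail_{0};

public:
  // capacity must be a power of two
  explicit MetaBoundedQueue(size_t capacity)
      : cells_(capacity), mask_(capacity - 1) {
    for (size_t i = 0; i < capacity; ++i)
      cells_[i].seq.store(i, std::memory_order_relaxed);
  }

  bool enqueue(const T& v) {
    size_t pos = tail_.load(std::memory_order_relaxed);
    for (;;) {
      Cell& c = cells_[pos & mask_];
      size_t seq = c.seq.load(std::memory_order_acquire);
      intptr_t diff = (intptr_t)seq - (intptr_t)pos;
      if (diff == 0) {  // cell is free for this lap; try to claim it
        if (tail_.compare_exchange_weak(pos, pos + 1,
                                        std::memory_order_relaxed)) {
          c.value = v;
          c.seq.store(pos + 1, std::memory_order_release);  // publish
          return true;
        }
      } else if (diff < 0) {
        return false;  // queue is full
      } else {
        pos = tail_.load(std::memory_order_relaxed);  // lost race; retry
      }
    }
  }

  bool dequeue(T& out) {
    size_t pos = head_.load(std::memory_order_relaxed);
    for (;;) {
      Cell& c = cells_[pos & mask_];
      size_t seq = c.seq.load(std::memory_order_acquire);
      intptr_t diff = (intptr_t)seq - (intptr_t)(pos + 1);
      if (diff == 0) {  // cell holds a published value; try to claim it
        if (head_.compare_exchange_weak(pos, pos + 1,
                                        std::memory_order_relaxed)) {
          out = c.value;
          // mark the cell free for the next lap
          c.seq.store(pos + mask_ + 1, std::memory_order_release);
          return true;
        }
      } else if (diff < 0) {
        return false;  // queue is empty
      } else {
        pos = head_.load(std::memory_order_relaxed);  // lost race; retry
      }
    }
  }
};
```

The descriptor-based alternative mentioned in the same excerpt instead stores a tagged pointer or record in place of the value, which likewise costs Ω(1) extra data per element to distinguish a descriptor from a plain value.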
“…In the previous section, we introduced the notion of a shared task pool. While several lock-free and wait-free data structures for the construction of multi-producer multi-consumer queues exist (Feldman and Dechev 2015; Michael and Scott 1996; Pirkelbauer et al. 2016; Yang and Mellor-Crummey 2016), we posit that many of these solutions incur a significant overhead (in the form of contention or the use of expensive atomic operations) even when a thread produces and consumes its own tasks. Thus, we opted for a design where the pool consists of n task queues where each of the n threads owns its own queue.…”
Section: Task Pool
confidence: 99%
“…The design guarantees that at least one element remains in the queue, thus head and tail will never be null. The inner bounded queue uses a lock-free single-producer and multi-consumer scheme similar to a hybrid circular queue (Pirkelbauer et al. 2016), except that we do not reuse buffer elements. Figure 5(a) shows the core interface of our task queue.…”
Section: Task Pool
confidence: 99%
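The single-producer multi-consumer scheme without buffer reuse described above can be sketched roughly as follows: the lone producer publishes elements by advancing `tail` with a plain release store (no CAS needed, since there is only one producer), while competing consumers claim elements by CAS-ing `head` forward. Because slots are written once and never reused, a consumer may safely read a slot before its claiming CAS succeeds. This is an illustrative sketch under those assumptions, not the cited implementation; the fixed capacity and all names are hypothetical.

```cpp
#include <atomic>
#include <cstddef>

// Single-producer multi-consumer bounded queue without slot reuse:
// once the buffer is exhausted, push fails (mirroring the "do not reuse
// buffer elements" choice in the excerpt above).
template <typename T, size_t N>
class SpmcQueue {
  T slots_[N];
  std::atomic<size_t> head_{0};  // next slot to consume
  std::atomic<size_t> tail_{0};  // one past the last published slot

public:
  // Called by the single producer only -- no CAS required on tail.
  bool push(const T& v) {
    size_t t = tail_.load(std::memory_order_relaxed);
    if (t == N) return false;                       // buffer exhausted
    slots_[t] = v;                                  // write the slot once
    tail_.store(t + 1, std::memory_order_release);  // publish it
    return true;
  }

  // Called by any consumer; consumers race on head via CAS.
  bool pop(T& out) {
    size_t h = head_.load(std::memory_order_acquire);
    for (;;) {
      if (h == tail_.load(std::memory_order_acquire))
        return false;          // empty (no published slot left to claim)
      // Safe to read before claiming: slots are never overwritten.
      out = slots_[h];
      if (head_.compare_exchange_weak(h, h + 1, std::memory_order_acq_rel))
        return true;           // this consumer claimed slot h
      // CAS failure reloaded h; loop re-reads the new slot.
    }
  }
};
```

Note that the "at least one element remains" invariant from the excerpt belongs to the outer linked structure of the cited design, not to this inner-queue sketch, which simply reports empty when `head` catches up with `tail`.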