2009
DOI: 10.1007/s00446-009-0078-4
Bounded-wait combining: constructing robust and high-throughput shared objects

Abstract: Shared counters are among the most basic coordination structures in distributed computing. Known implementations of shared counters are blocking, are non-linearizable, or have a sequential bottleneck. We present the first counter algorithm that is both linearizable and nonblocking and can provably achieve high throughput in k-synchronous executions, that is, executions in which process speeds vary by at most a constant factor k. The algorithm is based on a novel variation of the software combining paradigm that we call…
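The abstract refers to the software combining paradigm. Below is a minimal, hedged sketch of that general idea for a shared counter, written in a flat-combining style: threads publish requests, and a single combiner applies all pending increments in one pass. It is not the paper's bounded-wait algorithm; in particular, it uses a combiner lock and is therefore blocking, whereas the paper's construction is nonblocking. All class and field names are illustrative assumptions.

```java
// Sketch only: a lock-based combining counter, not the paper's BWC algorithm.
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.atomic.AtomicLong;

final class CombiningCounter {
    private static final int MAX_THREADS = 64;
    // Per-thread publication slot: -1 = empty, PENDING = waiting, >= 0 = result.
    private static final long PENDING = Long.MIN_VALUE;
    private final AtomicLong[] slots = new AtomicLong[MAX_THREADS];
    private final AtomicBoolean combinerLock = new AtomicBoolean(false);
    private long value = 0;                 // accessed only while holding combinerLock

    CombiningCounter() {
        for (int i = 0; i < MAX_THREADS; i++) slots[i] = new AtomicLong(-1);
    }

    /** fetch-and-increment for the calling thread (id in [0, MAX_THREADS)). */
    long fetchAndIncrement(int id) {
        slots[id].set(PENDING);             // publish the request
        while (true) {
            long r = slots[id].get();
            if (r != PENDING) return r;     // some combiner already served us
            if (combinerLock.compareAndSet(false, true)) {
                try {                       // we are the combiner: serve all pending requests
                    for (int i = 0; i < MAX_THREADS; i++) {
                        if (slots[i].get() == PENDING) {
                            slots[i].set(value++);
                        }
                    }
                } finally {
                    combinerLock.set(false);
                }
            }
        }
    }
}
```

The point of combining is that one thread applies many pending increments in a single pass over the publication slots, so contention on the shared counter itself does not grow with the number of requesting threads.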

Cited by 4 publications (3 citation statements)
References 23 publications (31 reference statements)
“…Colvin and Groves [5] presented a somewhat simplified version of our algorithm and proved its correctness by using the PVS [22] theorem prover. Recently, Hendler and Kutten [11] introduced bounded-wait combining, a technique by which asymptotically high-throughput lock-free linearizable implementations of objects that support combinable operations (such as counters, stacks, and queues) can be constructed.…”
Section: Discussion
mentioning, confidence: 99%
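The statement above relies on the notion of a combinable operation. As an illustrative aside (not from the cited papers), fetch-and-add is the standard example: two pending requests FAA(a) and FAA(b) can be merged into a single FAA(a + b) on the shared object, and both callers' return values can be reconstructed from the one result. The class and method names below are hypothetical.

```java
// Why fetch-and-add is combinable: one merged access serves two requests.
final class CombinableFaaExample {
    static long shared = 0;

    // Apply FAA(a) and FAA(b) as a single FAA(a + b), then split the results,
    // treating the first request as ordered before the second.
    static long[] combinedFetchAndAdd(long a, long b) {
        long base = shared;        // single access to the shared object
        shared += (a + b);         // merged operation
        return new long[] { base, base + a };
    }

    public static void main(String[] args) {
        long[] r = combinedFetchAndAdd(3, 5);
        System.out.println(r[0] + ", " + r[1] + ", shared = " + shared); // 0, 3, shared = 8
    }
}
```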
“…The collide function, presented in Figure 2-(d), implements the elimination-combining backoff algorithm performed after a multi-op fails on the central stack. It receives as its input a pointer to the first multiOp record in a multi-op list. A delegate thread executing the function first registers by writing to its entry in the location array (line 46) a pointer to its multiOp structure, thus advertising itself to other threads that may access the elimination-combining layer.…”
Section: Elimination-combining Layer Functions
mentioning, confidence: 99%
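The registration step quoted above can be pictured as follows. This is a hedged sketch modeled only on the wording of the quote; MultiOp, location, and collide are assumed names, and everything past the publication of the pointer is omitted.

```java
// Sketch of the delegate thread's registration in the elimination-combining layer.
import java.util.concurrent.atomic.AtomicReferenceArray;

final class EliminationCombiningLayer {
    static final class MultiOp {
        final boolean isPush;      // reverse-semantics operations can later eliminate
        final Object value;
        volatile MultiOp next;     // next record in the multi-op list
        MultiOp(boolean isPush, Object value) { this.isPush = isPush; this.value = value; }
    }

    private final AtomicReferenceArray<MultiOp> location;   // one slot per thread

    EliminationCombiningLayer(int maxThreads) {
        location = new AtomicReferenceArray<>(maxThreads);
    }

    /** Called by a delegate thread after its multi-op failed on the central stack. */
    void collide(int threadId, MultiOp myOps) {
        // Registration: publish a pointer to our first multiOp record so that other
        // threads entering the layer can find us and try to collide with us.
        location.set(threadId, myOps);
        // The rest of the backoff protocol (choosing a partner, eliminating or
        // combining, deregistering) is not sketched here.
    }
}
```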
“…Two key synchronization paradigms for the construction of scalable concurrent data-structures in general, and concurrent stacks in particular, are software combining [13,3,4] and elimination [1,10]. Elimination-based concurrent data-structures allow operations with reverse semantics (such as push and pop stack operations) to "collide" and exchange values without having to access a central location.…”
Section: Introduction
mentioning, confidence: 99%
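To make the elimination idea in the statement above concrete, here is a minimal sketch, under stated assumptions rather than the cited papers' code, of a single elimination slot in which a push and a pop with reverse semantics meet and swap a value directly, bypassing the central stack. The class and method names are hypothetical.

```java
// One elimination slot: a push and a pop exchange a value without touching the stack.
import java.util.concurrent.Exchanger;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

final class EliminationSlot<T> {
    private static final class Offer<V> {
        final boolean isPush; final V value;
        Offer(boolean isPush, V value) { this.isPush = isPush; this.value = value; }
    }

    private final Exchanger<Offer<T>> exchanger = new Exchanger<>();

    /** Returns true iff this push met a pop and handed over its value. */
    boolean tryEliminatePush(T value, long timeoutMillis) {
        try {
            Offer<T> partner = exchanger.exchange(new Offer<>(true, value),
                    timeoutMillis, TimeUnit.MILLISECONDS);
            return !partner.isPush;            // success only against a reverse-semantics op
        } catch (TimeoutException | InterruptedException e) {
            return false;                      // no partner: retry on the central stack
        }
    }

    /** Returns the eliminated value, or null if no push arrived in time. */
    T tryEliminatePop(long timeoutMillis) {
        try {
            Offer<T> partner = exchanger.exchange(new Offer<>(false, null),
                    timeoutMillis, TimeUnit.MILLISECONDS);
            return partner.isPush ? partner.value : null;
        } catch (TimeoutException | InterruptedException e) {
            return null;
        }
    }
}
```

If the exchange times out, or two operations with the same semantics meet, both parties fall back to the central data structure, which is the usual backoff behavior of elimination layers.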