1996
DOI: 10.1007/s004460050019

Linearizable counting networks

Abstract: An adding network is a distributed data structure that supports a concurrent, lock-free, low-contention implementation of a fetch&add counter; a counting network is an instance of an adding network that supports only fetch&increment. We present a lower bound showing that adding networks have inherently high latency. Any adding network powerful enough to support addition by at least two values a and b, where |a| > |b| > 0, has sequential executions in which each token traverses Ω(n/c) switching elements…
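As a concrete illustration of the distinction the abstract draws, the sketch below shows the two operation interfaces on a single atomic word standing in for the distributed network; the names fetchAndAdd and fetchAndIncrement are illustrative, not taken from the paper.

```cpp
#include <atomic>

// fetch&add: atomically adds an arbitrary value a and returns the previous value.
long fetchAndAdd(std::atomic<long>& counter, long a) {
    return counter.fetch_add(a);
}

// fetch&increment: the restricted special case (a = 1) that a counting network supports.
long fetchAndIncrement(std::atomic<long>& counter) {
    return counter.fetch_add(1);
}
```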

Cited by 42 publications (56 citation statements)
References 24 publications (9 reference statements)
“…Though they are wait-free and scalable, the counting networks of [1] are not linearizable. Herlihy, Shavit, and Waarts demonstrated that counting networks can be adapted to implement linearizable counters [16]. However, the first counting network they present is not lock-free, while the others are not scalable, since each operation has to access Ω(N) base objects.…”
Section: Related Work
confidence: 99%
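To make the Ω(N) access pattern concrete, here is a toy C++ sketch (our own illustration, not the construction from [16]): a read/increment counter built from one single-writer register per process. Increments touch only the caller's register, but every read must visit all N registers.

```cpp
#include <atomic>
#include <vector>

// Toy counter from N single-writer registers (illustrative; not the algorithm of [16]).
// increment(id) writes only the caller's register; read() must scan all N of them,
// which is the Omega(N) base-object cost mentioned in the excerpt.
class RegisterSumCounter {
public:
    explicit RegisterSumCounter(int n) : regs(n) {
        for (auto& r : regs) r.store(0, std::memory_order_relaxed);
    }

    // Process `id` owns regs[id]; increments by different processes never contend.
    void increment(int id) {
        regs[id].fetch_add(1, std::memory_order_release);
    }

    // A read has to collect every per-process register before summing.
    long read() const {
        long sum = 0;
        for (const auto& r : regs)
            sum += r.load(std::memory_order_acquire);
        return sum;
    }

private:
    std::vector<std::atomic<long>> regs;
};
```

Note that increment here returns nothing; a fetch&increment object, which must also return a value, is the harder problem that counting networks target.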
“…If the responses queue is not empty, the length of its first range is computed and stored in the respLen local variable (statements 11, 12). Then the length of the next requests entry is computed and stored in reqsLen (statements 13, 14). The minimum of these two values is stored in send (statement 15).…”
Section: The Combining Process
confidence: 99%
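A minimal C++ sketch of the step this excerpt describes, keeping the names respLen, reqsLen, and send from the quote; the queue representation and the Range type are assumptions, not taken from the cited paper.

```cpp
#include <algorithm>
#include <deque>

// Hypothetical range type: `count` consecutive counter values starting at `start`.
struct Range {
    long start;
    long count;
};

// Sketch of statements 11-15 as described above: take the length of the first
// responses range and the length of the next requests entry, and send the minimum.
long combineStep(const std::deque<Range>& responses, const std::deque<long>& requests) {
    if (responses.empty() || requests.empty())
        return 0;                                // nothing can be forwarded yet

    long respLen = responses.front().count;      // statements 11, 12
    long reqsLen = requests.front();             // statements 13, 14
    long send    = std::min(respLen, reqsLen);   // statement 15
    return send;
}
```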
“…Though they are wait-free and allow parallelism, the counting networks of [2] are non-linearizable. Herlihy et al. demonstrated that counting networks can be adapted to implement wait-free linearizable counters [13]. However, the first counting network they present is blocking while the others do not provide parallelism, as each operation has to access Ω(N) base objects.…”
Section: Introduction
confidence: 99%
“…Adve and Boehm [1] showed that the choice of memory model determines fundamental properties of a concurrent programming environment, and entails a fundamental tradeoff between scalability and ease of programming. Concurrent hardware and software can scale most effectively with less enforcement of memory ordering, while the simplest implementation techniques rely on stricter ordering models [1,4,30].…”
Section: Design Constraints and Assumptions
confidence: 99%
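One way to see the tradeoff in code (an illustration of the general point, not an example from the cited papers) is the choice of ordering constraint on a simple shared counter in C++:

```cpp
#include <atomic>

std::atomic<long> hits{0};

// Weaker ordering: the update is still atomic, but imposes no ordering on other
// memory operations, leaving hardware and compiler free to reorder around it.
void recordHitRelaxed() {
    hits.fetch_add(1, std::memory_order_relaxed);
}

// Stricter ordering: sequentially consistent by default, simplest to reason about,
// but it constrains reordering and is typically the more expensive discipline.
void recordHitSeqCst() {
    hits.fetch_add(1);  // memory_order_seq_cst
}
```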
“…However, this strict ordering incurs a high cost [30]. Mutual exclusion limits concurrency to at most one thread accessing any particular piece of shared data at a time, with other threads blocked.…”
Section: Introduction
confidence: 99%
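The contrast the excerpt draws can be sketched in C++ (our illustration, not code from the cited paper): a lock-based counter blocks every other updater, while a fetch&add counter lets updates proceed without blocking, though they still contend on one location.

```cpp
#include <atomic>
#include <mutex>

// Mutual exclusion: at most one thread touches `value` at a time; the others block on the lock.
struct LockedCounter {
    std::mutex m;
    long value = 0;
    long fetchAndIncrement() {
        std::lock_guard<std::mutex> guard(m);
        return value++;
    }
};

// Lock-free alternative: a hardware fetch&add never blocks other threads.
struct AtomicCounter {
    std::atomic<long> value{0};
    long fetchAndIncrement() {
        return value.fetch_add(1);
    }
};
```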