Proceedings of the Thirteenth ACM Symposium on Operating Systems Principles 1991
DOI: 10.1145/121132.286599

Empirical studies of competitive spinning for a shared-memory multiprocessor

Cited by 107 publications (65 citation statements). References 11 publications.

“…MCS-NB and CLH-NB are abortable queue-based locks with non-blocking timeout [27]. We also test spin-then-yield variants [13] of each lock in which threads yield the processors after exceeding a wait threshold. Finally, we test preemption-safe locks dependent on scheduling control APIs: CLH-CSP, TAS-CSP, Handshaking, CLH-PM, and SmartQ.…”
Section: Microbenchmark Results
confidence: 99%
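For concreteness, here is a minimal spin-then-yield sketch in C. It is not code from the cited lock implementations; the type and function names, the SPIN_LIMIT threshold, and the use of sched_yield() are assumptions chosen for illustration. A waiting thread spins on a test-and-set flag for a bounded number of iterations and then yields the processor, retrying once it is rescheduled.

```c
/* Minimal spin-then-yield lock sketch (assumed names; not code from the
 * cited papers): spin on a test-and-set flag for SPIN_LIMIT iterations,
 * then yield the processor and retry after being rescheduled. */
#include <sched.h>       /* sched_yield() */
#include <stdatomic.h>   /* atomic_flag */

#define SPIN_LIMIT 1000  /* assumed wait threshold, in spin iterations */

typedef struct {
    atomic_flag held;
} sty_lock_t;

sty_lock_t lock = { .held = ATOMIC_FLAG_INIT };

void sty_acquire(sty_lock_t *l)
{
    for (;;) {
        for (int i = 0; i < SPIN_LIMIT; i++) {
            if (!atomic_flag_test_and_set_explicit(&l->held,
                                                   memory_order_acquire))
                return;          /* acquired the lock while spinning */
        }
        sched_yield();           /* threshold exceeded: give up the processor */
    }
}

void sty_release(sty_lock_t *l)
{
    atomic_flag_clear_explicit(&l->held, memory_order_release);
}

/* Usage: sty_acquire(&lock);  ...critical section...  sty_release(&lock); */
```

The competitive-waiting argument of the paper under discussion suggests setting the threshold near the cost of a yield or context switch, so that total waiting cost stays within roughly a factor of two of an optimal offline choice; the cutoff can equally be expressed as a time budget, as in the 50 µs threshold used in the next excerpt.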
“…One might expect a spin-then-yield policy [13] to allow other locks to match TP locks in preemption adaptivity. In Figure 9 we test this hypothesis with a 50 µs spinning time threshold and a 2 cache line critical section.…”
Section: Comparison To User-level Locks
confidence: 99%
“…The three events that trigger page migrations require at most as many page migrations as the size of the memory affinity set of a thread which is migrated, preempted, or resumed. Clearly, if the thread that claims a page keeps executing on node k for less than t_mas, there is no time to amortize the cost of migrating the pages to k. If the thread keeps executing on k for more than t_mas, the pages will move to k in time to reduce some of the latency of memory accesses issued by processors in k. Since the algorithm does not know in advance how long each thread will keep running on the same node, it chooses to competitively wait for t_thr before migrating pages to k [13].…”
Section: Page Migration Criterion
confidence: 99%
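To make the competitive-waiting rule concrete, the sketch below shows one way the "wait t_thr before migrating" idea could look in C. It is not the cited system's code: page_claim_t, maybe_migrate(), migrate_page(), and T_THR_USEC are hypothetical names invented for the example. A page claimed by a thread on node k is migrated only after the thread has remained on k for at least t_thr, so the migration cost is paid only when it is likely to be amortized.

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define T_THR_USEC 50000ULL   /* assumed competitive-waiting threshold t_thr */

/* Monotonic clock in microseconds. */
uint64_t now_usec(void)
{
    struct timespec ts;
    clock_gettime(CLOCK_MONOTONIC, &ts);
    return (uint64_t)ts.tv_sec * 1000000ULL + (uint64_t)ts.tv_nsec / 1000ULL;
}

/* Book-keeping for one page claimed by a thread on some node. */
typedef struct {
    int      claimed_node;      /* node k whose thread currently claims the page */
    uint64_t claim_time_usec;   /* when that claim was established */
    bool     migrated;
} page_claim_t;

/* Stand-in for the real (unspecified) page-migration primitive. */
void migrate_page(void *page, int node)
{
    printf("migrating page %p to node %d\n", page, node);
}

/* Called periodically (or on a remote access) for a claimed page: migrate
 * only after the claiming thread has stayed on the same node for t_thr. */
void maybe_migrate(void *page, page_claim_t *c, int thread_node)
{
    if (c->migrated)
        return;
    if (thread_node != c->claimed_node) {
        c->claimed_node    = thread_node;   /* thread moved: restart the wait */
        c->claim_time_usec = now_usec();
        return;
    }
    if (now_usec() - c->claim_time_usec >= T_THR_USEC) {
        migrate_page(page, c->claimed_node);
        c->migrated = true;
    }
}
```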
“…On a failed synchronization, the trap handler is responsible for implementing the waiting algorithm that decides whether to poll or to block the thread. Karlin et al. [11] and Lim and Agarwal [14] investigate the performance of various waiting algorithms. They show polling for some length of time before blocking can lead to better performance, and investigate various methods for determining how long to poll before blocking.…”
Section: Handling Failed Synchronizations In Software
confidence: 99%
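As an illustration of the poll-then-block waiting the excerpt refers to, the sketch below uses POSIX threads at user level; it is an assumed analogue, not the trap-handler code of the cited system. POLL_LIMIT and the waiter_t layout are invented for the example: a waiter polls a flag for a bounded number of iterations and, if the event has not yet occurred, blocks on a condition variable until signalled.

```c
/* Two-phase ("poll then block") waiting sketch using POSIX threads.
 * An assumed user-level analogue, not the trap-handler code described
 * above; POLL_LIMIT and waiter_t are invented for the example. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdbool.h>

#define POLL_LIMIT 1000            /* assumed polling budget before blocking */

typedef struct {
    atomic_bool     ready;         /* set once the awaited event has happened */
    pthread_mutex_t mtx;
    pthread_cond_t  cv;
} waiter_t;

void waiter_init(waiter_t *w)
{
    atomic_init(&w->ready, false);
    pthread_mutex_init(&w->mtx, NULL);
    pthread_cond_init(&w->cv, NULL);
}

/* Phase 1: poll for a while.  Phase 2: block until signalled. */
void waiter_wait(waiter_t *w)
{
    for (int i = 0; i < POLL_LIMIT; i++)
        if (atomic_load_explicit(&w->ready, memory_order_acquire))
            return;                /* the event arrived while polling */

    pthread_mutex_lock(&w->mtx);
    while (!atomic_load_explicit(&w->ready, memory_order_acquire))
        pthread_cond_wait(&w->cv, &w->mtx);
    pthread_mutex_unlock(&w->mtx);
}

/* Called by the thread that completes the awaited operation. */
void waiter_signal(waiter_t *w)
{
    pthread_mutex_lock(&w->mtx);
    atomic_store_explicit(&w->ready, true, memory_order_release);
    pthread_cond_broadcast(&w->cv);
    pthread_mutex_unlock(&w->mtx);
}
```

The studies cited above suggest sizing the polling budget near the cost of blocking and later resuming the thread, which keeps total waiting cost within roughly twice that of the best fixed choice.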