2013
DOI: 10.2200/s00499ed1v01y201304cac023
Shared-Memory Synchronization

Abstract: Synthesis Lectures on Computer Architecture publishes 50- to 100-page publications on topics pertaining to the science and art of designing, analyzing, selecting, and interconnecting hardware components to create computers that meet functional, performance, and cost goals. The scope will largely follow the purview of premier computer architecture conferences, such as ISCA, HPCA, MICRO, and ASPLOS. Since the advent of time sharing in the 1960s, designers of concurrent and parallel systems have needed to syn…


Cited by 42 publications (53 citation statements)
References 221 publications (217 reference statements)
“…The growing size of multicore machines is likely to shift the design space in the NUMA and CC-NUMA direction, requiring a significant rehash of existing concurrent algorithms and synchronization mechanisms [David et al. 2013; Eyerman and Eeckhout 2010; Johnson et al. 2010; Scott 2013]. This article tackles the most basic of the multicore synchronization algorithms, the lock, presenting a simple new lock design approach, lock cohorting, fit for NUMA machines.…”
Section: Results (mentioning)
confidence: 99%
“…As such, they are NUMA-oblivious. Scott [2013] notes that MCS may be preferable to CLH in some NUMA environments as the queue "node" structures on which threads busy-wait will migrate between threads under CLH but do not circulate under MCS.…”
Section: Empirical Evaluation (mentioning)
confidence: 99%
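Scott's observation above is about where threads spin: under MCS each thread busy-waits on a flag in its own queue node, and that node stays with the thread across acquisitions. A minimal sketch in C11 atomics (structure and names are illustrative, not taken from the lecture):

```c
#include <stdatomic.h>
#include <stdbool.h>
#include <stddef.h>

/* Minimal MCS queue lock sketch. Each thread passes its OWN node, spins on
   that node's `locked` flag, and reuses the node for later acquisitions --
   the property that distinguishes MCS from CLH, where nodes migrate. */
typedef struct mcs_node {
    _Atomic(struct mcs_node *) next;
    atomic_bool locked;
} mcs_node;

typedef _Atomic(mcs_node *) mcs_lock;   /* tail of the waiter queue */

void mcs_acquire(mcs_lock *lock, mcs_node *me) {
    atomic_store(&me->next, NULL);
    atomic_store(&me->locked, true);
    mcs_node *pred = atomic_exchange(lock, me);   /* swap self in as tail */
    if (pred != NULL) {
        atomic_store(&pred->next, me);            /* link behind predecessor */
        while (atomic_load(&me->locked))          /* spin on OUR node only */
            ;
    }
}

void mcs_release(mcs_lock *lock, mcs_node *me) {
    mcs_node *succ = atomic_load(&me->next);
    if (succ == NULL) {
        mcs_node *expected = me;
        if (atomic_compare_exchange_strong(lock, &expected, NULL))
            return;                               /* no waiter: lock is free */
        while ((succ = atomic_load(&me->next)) == NULL)
            ;                                     /* successor mid-enqueue */
    }
    atomic_store(&succ->locked, false);           /* hand the lock over */
}
```

Under CLH, by contrast, a thread spins on its predecessor's node and adopts that node for its next acquisition, so node structures circulate among threads — the behavior Scott flags as potentially unfriendly on NUMA machines.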
“…Because of the overhead of maintaining lock ownership, implementers can decide against reentrant locks, which also typically use a counter field. [15] The rationale behind optimistic spinning is that if the thread that owns the lock is running, then it is likely to release the lock soon. In practice, a Linux kernel mutex or rw_semaphore (reader-writer semaphore), the two most commonly used locks throughout the system, can follow up to three possible paths when acquiring the lock, depending on its current state [12]: Fastpath.…”
Section: Contributing Factors to Poor Lock Scaling (mentioning)
confidence: 99%
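The three acquisition paths mentioned above can be modeled schematically. The sketch below is a toy model under stated assumptions — the fields and the `owner_running` flag are hypothetical stand-ins, not the kernel's actual data structures:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Toy model of a three-path mutex acquisition (illustrative only). */
typedef struct {
    atomic_int state;            /* 0 = unlocked, 1 = locked */
    atomic_bool owner_running;   /* stand-in for "owner is on a CPU" */
} toy_mutex;

typedef enum { FASTPATH, MIDPATH, SLOWPATH } acquire_path;

acquire_path toy_mutex_lock(toy_mutex *m) {
    int expected = 0;
    /* Fastpath: a single uncontended compare-and-swap. */
    if (atomic_compare_exchange_strong(&m->state, &expected, 1))
        return FASTPATH;
    /* Midpath (optimistic spinning): while the owner is running it is
       likely to release soon, so spin rather than sleep. */
    while (atomic_load(&m->owner_running)) {
        expected = 0;
        if (atomic_compare_exchange_strong(&m->state, &expected, 1))
            return MIDPATH;
    }
    /* Slowpath: enqueue and sleep in the real kernel; modeled here
       as a blocking retry loop. */
    expected = 0;
    while (!atomic_compare_exchange_weak(&m->state, &expected, 1))
        expected = 0;
    return SLOWPATH;
}
```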
“…Thus, if the average time a thread expects to wait is less than twice the context-switch time, then spinning will actually be faster than blocking. [15] The quality-of-service guarantee is another factor to consider when choosing between spinning and sleeping locks, particularly in real-time systems. Blocking on larger NUMA systems can ultimately starve the system of resources.…”
Section: Contributing Factors to Poor Lock Scaling (mentioning)
confidence: 99%
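The rule of thumb above — spin when the expected wait is under roughly two context-switch times, since blocking costs one switch out and one back in — can be sketched as a bounded spin that gives up and signals the caller to block. The spin bound here is an illustrative constant, not a measured value:

```c
#include <stdatomic.h>
#include <stdbool.h>

/* Illustrative bound: chosen so that spinning this long costs roughly as
   much as the two context switches a block/unblock pair would incur. */
enum { SPIN_LIMIT = 4096 };

/* Try to acquire a simple test-and-set lock by spinning. Returns true if
   acquired; false means the caller should fall back to a blocking wait. */
bool try_spin(atomic_bool *held) {
    for (int i = 0; i < SPIN_LIMIT; i++) {
        bool expected = false;
        if (atomic_compare_exchange_weak(held, &expected, true))
            return true;   /* got the lock while spinning */
    }
    return false;          /* spun past the break-even point: block instead */
}
```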
“…For shared variables, two memory operations are conflicting if both operations access the same variable and at least one of them is a write access. Conflicting synchronization operations can be defined in a similar way [24]. The authors of [25] presented a hierarchy of interleaving coverage criteria for concurrent programs, based on different concurrency fault models, but did not consider input-directed coverage criteria in this context.…”
Section: Related Work (mentioning)
confidence: 99%
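The conflict definition above is mechanical enough to state directly in code. The helper below is a hypothetical illustration, not taken from the cited papers:

```c
#include <stdbool.h>

/* One memory operation: which variable it touches and whether it writes. */
typedef struct {
    const void *addr;
    bool is_write;
} mem_op;

/* Two operations conflict iff they access the same variable
   and at least one of them is a write. */
bool conflicting(mem_op a, mem_op b) {
    return a.addr == b.addr && (a.is_write || b.is_write);
}
```

Two reads of the same variable never conflict, and any two accesses to distinct variables never conflict — only write/write and read/write pairs on the same location do.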