1995
DOI: 10.1016/0141-9331(95)93086-x

Fast context switches: compiler and architectural support for preemptive scheduling

Cited by 28 publications (13 citation statements) | References 6 publications
“…Others have developed techniques to reduce context switch overheads but have failed to synergistically exploit the property of idempotence. Snyder et al. describe a compiler technique where each instruction is accompanied by a bit that indicates whether that instruction is a "fast context switch point" [37]. Zhou and Petrov also demonstrate how to pick low-overhead context switch points where there are few live registers [43].…”
Section: Related Work (mentioning)
confidence: 99%
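Neither of the cited schemes is reproduced here, but the underlying idea of picking preemption points by register liveness can be sketched. The C fragment below (hypothetical instruction stream, register masks, and threshold; not the code of [37] or [43]) runs a standard backward liveness pass over a straight-line sequence and flags boundaries with few live registers as cheap places to switch context.

```c
/*
 * Illustrative sketch only (not code from [37] or [43]): choose cheap
 * preemption points in a toy straight-line instruction sequence by counting
 * how many registers are live at each instruction boundary.
 */
#include <stdio.h>
#include <stdint.h>

typedef struct {
    const char *text;   /* mnemonic, for display only */
    uint32_t    def;    /* bitmask of registers written */
    uint32_t    use;    /* bitmask of registers read */
} Insn;

/* number of set bits = number of live registers */
static int popcount32(uint32_t x) {
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

int main(void) {
    /* hypothetical program over registers r0..r3 */
    Insn prog[] = {
        { "load  r0, [a]",    1u << 0, 0 },
        { "load  r1, [b]",    1u << 1, 0 },
        { "add   r2, r0, r1", 1u << 2, (1u << 0) | (1u << 1) },
        { "store [c], r2",    0,       1u << 2 },
        { "load  r3, [d]",    1u << 3, 0 },
        { "store [e], r3",    0,       1u << 3 },
    };
    const int n = (int)(sizeof prog / sizeof prog[0]);
    const int threshold = 1;          /* arbitrary "few live registers" cutoff */
    uint32_t live[16] = { 0 };        /* live[i] = registers live before insn i */

    /* backward liveness: live_in(i) = use(i) | (live_in(i+1) & ~def(i)) */
    for (int i = n - 1; i >= 0; i--)
        live[i] = prog[i].use | (live[i + 1] & ~prog[i].def);

    /* a switch taken just before insn i only needs to save live[i] registers */
    for (int i = 0; i < n; i++) {
        int cost = popcount32(live[i]);
        printf("%-20s | %d live%s\n", prog[i].text, cost,
               cost <= threshold ? "   <-- fast context switch point" : "");
    }
    return 0;
}
```

A per-instruction bit as in [37] would simply record the result of such an analysis, so that the hardware can delay preemption until the next marked point instead of saving the full register file immediately.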
“…Table 3: Prior work on exception and speculation recovery.
Restart markers [18] | Loop analysis; not generalizable
Idempotent processors [9] | No liveness or SIMD analysis
Fast switch points [37,43] [14] | H/W and performance overheads
State snapshotting [28] | Exposes microarchitecture details…”
Section: Approach Weakness (mentioning)
confidence: 99%
“…4) Specific hardware services: Integrating new hardware services in a platform to improve the system predictability and reduce overheads is also possible [8]-[11]. These services are effective at reducing overheads and are often added at the processor level.…”
Section: Related Work (mentioning)
confidence: 99%
“…This approach, however, usually entails a large degree of pessimism [4], [5], and a task set that is actually schedulable may end up being deemed unschedulable by the modified schedulability test [7]. The second approach consists in either using properties of the architecture or integrating new hardware services in the platform to improve system predictability and reduce overheads [8]-[11]; these services usually require processor modifications. Finally, designing a specific architecture with high predictability and a specific design flow is also an option [12].…”
Section: Introduction (mentioning)
confidence: 99%
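A small worked example (task parameters and switch cost invented, not taken from [4], [5] or [7]) makes that pessimism concrete: under standard fixed-priority response-time analysis, the task set below passes with its raw WCETs but fails once every WCET is inflated by a fixed context-switch charge.

```c
/*
 * Illustrative sketch only (task parameters and switch cost are invented):
 * standard fixed-priority response-time analysis, run once on raw WCETs and
 * once with every WCET inflated by a fixed context-switch charge.
 */
#include <stdio.h>
#include <math.h>

#define NTASKS 3

/* index 0 = highest priority; C = WCET, T = period = deadline */
static const double C[NTASKS] = { 2.0, 4.0, 6.0 };
static const double T[NTASKS] = { 10.0, 15.0, 20.0 };

/* R_i = C_i + sum over higher-priority j of ceil(R_i / T_j) * C_j */
static int schedulable(const double *wcet, const double *period, int n) {
    for (int i = 0; i < n; i++) {
        double r = wcet[i], prev = 0.0;
        while (r != prev && r <= period[i]) {
            prev = r;
            r = wcet[i];
            for (int j = 0; j < i; j++)
                r += ceil(prev / period[j]) * wcet[j];
        }
        printf("  task %d: R = %4.1f, D = %4.1f%s\n", i, r, period[i],
               r > period[i] ? "  (deadline miss)" : "");
        if (r > period[i])
            return 0;
    }
    return 1;
}

int main(void) {
    const double cs = 1.0;            /* assumed cost of one context switch */
    double inflated[NTASKS];

    printf("raw WCETs:\n");
    printf("=> %s\n\n", schedulable(C, T, NTASKS) ? "schedulable" : "NOT schedulable");

    /* pessimistic accounting: every job pays one switch in and one switch out */
    for (int i = 0; i < NTASKS; i++)
        inflated[i] = C[i] + 2.0 * cs;

    printf("WCETs inflated by 2 x %.1f:\n", cs);
    printf("=> %s\n", schedulable(inflated, T, NTASKS) ? "schedulable" : "NOT schedulable");
    return 0;
}
```

In this toy set the lowest-priority task's response time grows from 14 to 28 against a deadline of 20, so the inflated test rejects a set that the raw test accepts.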
“…Several interrupt handling schemes that reduce the size of the context to be switched, and thereby the interrupt latency, for VLIW and DSP processors are presented in [15], [9], [22]. All these mechanisms require support from the corresponding compiler and, in some cases, even from the processor architecture.…”
Section: Related Work (mentioning)
confidence: 99%
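The cited VLIW/DSP mechanisms are not reproduced here; the C sketch below only illustrates the shared idea of a reduced (partial) context save, where an interrupt handler stores just the registers that compiler-generated metadata reports live at the interrupted point. The register-file array, mask encoding, and function names are all hypothetical.

```c
/*
 * Sketch of a reduced (partial) context save: only registers that the
 * compiler reports live at the interrupted point are copied out.  A plain
 * array stands in for the hardware register file; in a real scheme the live
 * mask would come from compiler-generated metadata.  All names hypothetical.
 */
#include <stdint.h>
#include <stdio.h>

#define NREGS 32

typedef struct {
    uint32_t live_mask;        /* which entries of regs[] are valid */
    uint32_t regs[NREGS];      /* saved values, sparse w.r.t. live_mask */
} PartialContext;

/* save only the live registers; returns how many were stored */
static int save_partial(PartialContext *ctx, const uint32_t *regfile,
                        uint32_t live_mask) {
    int saved = 0;
    ctx->live_mask = live_mask;
    for (int r = 0; r < NREGS; r++) {
        if (live_mask & (1u << r)) {
            ctx->regs[r] = regfile[r];
            saved++;
        }
    }
    return saved;
}

/* restore only the registers that were saved */
static void restore_partial(const PartialContext *ctx, uint32_t *regfile) {
    for (int r = 0; r < NREGS; r++)
        if (ctx->live_mask & (1u << r))
            regfile[r] = ctx->regs[r];
}

int main(void) {
    uint32_t regfile[NREGS] = { 0 };
    regfile[3] = 0xdead;
    regfile[7] = 0xbeef;

    PartialContext ctx;
    /* pretend the compiler says only r3 and r7 are live at this point */
    int n = save_partial(&ctx, regfile, (1u << 3) | (1u << 7));
    printf("saved %d of %d registers\n", n, NREGS);

    regfile[3] = 0;                    /* registers clobbered meanwhile */
    regfile[7] = 0;
    restore_partial(&ctx, regfile);
    printf("r3=0x%x r7=0x%x\n", (unsigned)regfile[3], (unsigned)regfile[7]);
    return 0;
}
```

Saving two registers instead of thirty-two is where the latency reduction comes from; as the quoted statement notes, the compiler (and sometimes the processor architecture) must guarantee that the liveness information is available and correct at every point where an interrupt can be taken.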