Proceedings of the 2008 International Workshop on Software Engineering in East and South Europe
DOI: 10.1145/1370868.1370881
Teaching operating systems

Cited by 2 publications (11 citation statements)
References 2 publications
“…The difference in access latency is expressed as cache miss penalty and depends largely on the cache level where the cache miss occurs. Both vendor documentation and experimental measurements give ranges of values valid for particular platforms and workloads; a general rule is to expect access times in units of processor cycles for first level cache, tens of cycles for second level cache, and hundreds of cycles for memory [7,4]. For an example of how a cache miss penalty can depend on the amount of data accessed and therefore the cache level involved, see Figure 1 (measured on an Intel Xeon processor in [7]).…”
Section: Caches: Not Just Size (mentioning)
confidence: 99%
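To make the quoted rule of thumb concrete, the following is a minimal pointer-chasing sketch in C. It is an illustration only, not the measurement code behind Figure 1 of [7]: walking a randomly linked chain defeats the hardware prefetcher, so the time per hop roughly tracks the latency of whichever cache level the working set currently fits into, and the reported nanoseconds per access typically jump as the working set outgrows the L1, L2, and last-level caches.

/*
 * Pointer-chasing latency sketch (illustration only).  Each working set is
 * turned into a randomly ordered cyclic chain of pointers; walking it
 * performs one dependent load per hop, so the average time per hop
 * approximates the access latency of the cache level the set fits into.
 */
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

static double ns_per_access(size_t entries, size_t hops)
{
    void **chain = malloc(entries * sizeof *chain);
    size_t *perm = malloc(entries * sizeof *perm);
    for (size_t i = 0; i < entries; i++)
        perm[i] = i;
    /* Fisher-Yates shuffle: a random traversal order defeats prefetching. */
    for (size_t i = entries - 1; i > 0; i--) {
        size_t j = (size_t)rand() % (i + 1);
        size_t t = perm[i]; perm[i] = perm[j]; perm[j] = t;
    }
    /* Link every entry to its successor in the shuffled order. */
    for (size_t i = 0; i < entries; i++)
        chain[perm[i]] = &chain[perm[(i + 1) % entries]];

    struct timespec t0, t1;
    void **p = &chain[perm[0]];
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < hops; i++)
        p = *p;                          /* one dependent load per hop */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    free(chain);
    free(perm);
    /* Using p in the result keeps the loop from being optimized away. */
    return p != NULL ? ns / (double)hops : 0.0;
}

int main(void)
{
    /* Working sets from 16 KB (fits in L1) up to 64 MB (spills to memory). */
    for (size_t kb = 16; kb <= 64 * 1024; kb *= 2)
        printf("%8zu KB: %.1f ns per access\n",
               kb, ns_per_access(kb * 1024 / sizeof(void *), 10 * 1000 * 1000));
    return 0;
}

On a typical x86 machine the smallest working sets come out at a few nanoseconds per access and the largest at roughly a hundred, which is consistent with the units/tens/hundreds-of-cycles rule quoted above; the exact values depend on the platform, as the excerpt notes.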
“…The choice of the technical details is based mostly on our performance evaluation work [7,8,4,5], where we have analyzed the reasons behind numerous surprising performance anomalies on recent computer architectures of the x86 family. Other articles treat particular issues in more depth and for more platforms; we provide references as appropriate.…”
Section: Introduction (mentioning)
confidence: 99%
“…The motivation for using the linear approximation stems from some past observations that have often revealed a roughly linear dependency between some parameters of the workload, such as the cache miss rate or the range of traversed addresses, and the operation durations [4,1]. The choice of system utilization as the argument of the linear function is based on the assumption that higher utilization means more system activity, which in turn means more opportunities for generating cache misses, traversing addresses, or other forms of resource sharing.…”
Section: Queueing Petri Net Models (mentioning)
confidence: 99%
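The linear approximation described in this excerpt can be written as d(u) = d0 + k * u, where u is system utilization and the coefficients come from an ordinary least-squares fit over measured (utilization, duration) pairs. The sketch below is an illustration under that assumption; the sample data and the helper name fit_linear are made up, not taken from the cited models.

/*
 * Least-squares fit of operation duration as a linear function of system
 * utilization: d(u) = d0 + k * u.  Sample data is invented for illustration.
 */
#include <stdio.h>

static void fit_linear(const double *u, const double *d, int n,
                       double *d0, double *k)
{
    double su = 0, sd = 0, suu = 0, sud = 0;
    for (int i = 0; i < n; i++) {
        su  += u[i];
        sd  += d[i];
        suu += u[i] * u[i];
        sud += u[i] * d[i];
    }
    *k  = (n * sud - su * sd) / (n * suu - su * su);
    *d0 = (sd - *k * su) / n;
}

int main(void)
{
    /* Hypothetical measurements: utilization (0..1) vs. duration in microseconds. */
    double util[] = { 0.10, 0.25, 0.40, 0.60, 0.80 };
    double dur[]  = { 1.05, 1.18, 1.34, 1.52, 1.73 };
    int n = (int)(sizeof util / sizeof util[0]);

    double d0, k;
    fit_linear(util, dur, n, &d0, &k);
    printf("d(u) ~= %.3f + %.3f * u\n", d0, k);

    /* Predict the duration at 70% utilization with the fitted model. */
    printf("predicted d(0.70) = %.3f us\n", d0 + k * 0.70);
    return 0;
}

In a queueing Petri net model along the lines the excerpt describes, such a fitted d(u) would presumably replace a constant service demand, so that modeled operations slow down as overall system activity, and with it the opportunity for cache misses and other resource sharing, grows.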
“…Although these effects have been well documented by measurement [6,1,14,21], they are typically ignored in software performance modeling. And although functional models of memory caches do exist [10,20,25], their complex inputs and other features make application in software performance modeling difficult [5,4].…”
Section: Introduction (mentioning)
confidence: 99%