1985
DOI: 10.21236/ada611686
A Trace-Driven Analysis of the UNIX 4.2BSD File System

Abstract: We analyzed the UNIX 4.2 BSD file system by recording activity in trace files and writing programs to analyze the traces. The trace analysis shows that the average file system bandwidth needed per user is low (a few hundred bytes per second). Most of the files accessed are short, are open a short time, and are accessed sequentially. Most new information is deleted or overwritten within a few minutes of its creation. We wrote a simulator that uses the traces to predict the performance of caches for disk blo…
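The trace-driven methodology the abstract describes can be illustrated with a minimal sketch: replay a recorded sequence of block accesses through an LRU cache of fixed size and measure the hit ratio. The function name, the synthetic trace, and the cache size below are illustrative assumptions, not the paper's actual simulator or data.

```python
from collections import OrderedDict

def simulate_cache(trace, cache_blocks):
    """Replay a trace of block accesses through an LRU cache and
    return the hit ratio. Hypothetical sketch of trace-driven
    simulation, not the 4.2BSD study's simulator."""
    cache = OrderedDict()  # block id -> None, ordered oldest-to-newest
    hits = 0
    for block in trace:
        if block in cache:
            hits += 1
            cache.move_to_end(block)  # mark as most recently used
        else:
            if len(cache) >= cache_blocks:
                cache.popitem(last=False)  # evict least recently used
            cache[block] = None
    return hits / len(trace) if trace else 0.0

# A short synthetic trace with strong locality of reference:
trace = [1, 2, 1, 3, 1, 2, 4, 1, 2, 1]
print(simulate_cache(trace, cache_blocks=3))  # → 0.6
```

Running the same trace through several cache sizes is how such a simulator produces hit-ratio-versus-cache-size curves.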

Cited by 21 publications (15 citation statements)
References 7 publications
“…There are three major tracing studies of general file systems: the BSD and Sprite studies [1,14], which were closely related and examined an academic environment. The third study examined in detail the file usage under VMS at a number of commercial sites [15].…”
Section: Related Work
confidence: 99%
“…There is an extensive body of literature on usage patterns for file systems [1,5,9,11,14], and it has helped shape file system designs [8,13,17] that perform quite well. However, the world of computing has undergone major changes since the last usage study was performed in 1991; not only have computing and network capabilities increased beyond expectations, but the integration of computing in all aspects of professional life has produced new generations of systems and applications that no longer resemble the computer operations of the late eighties.…”
Section: Introduction
confidence: 99%
“…This is especially true when we refer to scaling properties in networks and network traffic. We can talk about the power law scaling inherent in file sizes on servers [11,16] or in the duration of user sessions [5] but we can also mean the scaling behavior of the traffic rate on a link (measured in the number of packets per time unit passing an observation point). Underlying these different definitions is the intuitive notion that the object we are studying (whether it is a process, measure, or function) has no inherent characteristic scale; it enjoys scale invariance.…”
Section: What Does Scaling Mean?
confidence: 99%
“…Any file that was both created and deleted between the same pair of snapshots will not appear in any snapshot. Trace-based file system studies [1] [12] have shown that most files live for less than the twenty-four hours between successive snapshots. These files may have a significant effect on the state of the longer lived files on the file system.…”
Section: Generating a Workload
confidence: 99%
“…Ousterhout et al performed such a study on the 4.2BSD UNIX file system [12]. In a follow-up study, Baker et al examined changes in file access patterns six years later, and also investigated issues of file sharing in the Sprite distributed operating system [1].…”
Section: Related Work
confidence: 99%