Proceedings of the 2005 ACM SIGMETRICS International Conference on Measurement and Modeling of Computer Systems (2005)
DOI: 10.1145/1064212.1064241

Fundamental bounds on the accuracy of network performance measurements

Abstract: This paper considers the basic problem of "how accurate can we make Internet performance measurements?" The answer is somewhat counter-intuitive: there are bounds on the accuracy of such measurements no matter how many probes we use in a given time interval, giving rise to a type of Heisenberg inequality that describes the limits on our knowledge of the performance of a network. The results stem from the fact that we cannot make independent measurements of a system's performance: all such measures are…
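The abstract's central claim, that closely spaced probes see correlated values of the same underlying queueing process and so cannot drive estimation error to zero within a fixed interval, can be illustrated with a small simulation. The sketch below is not the paper's model: it uses a hypothetical stationary Gauss-Markov "delay" process with correlation time tau as a stand-in for the queue, and compares the empirical variance of the mean-delay estimate against the 1/n decay that independent probes would give.

```python
import numpy as np

def sample_mean_variance(n_probes, interval=100.0, tau=5.0, sigma=1.0,
                         trials=3000, seed=0):
    """Empirical variance of the mean 'delay' estimated from n_probes
    equally spaced probes inside a fixed measurement interval.

    The delay process is a stationary Gauss-Markov process whose correlation
    between samples dt apart is exp(-dt / tau): a stand-in for the correlation
    probes of the same queue would see, not the paper's model.
    """
    rng = np.random.default_rng(seed)
    dt = interval / n_probes
    rho = np.exp(-dt / tau)                     # correlation of adjacent probes
    x = np.empty((trials, n_probes))
    x[:, 0] = rng.normal(scale=sigma, size=trials)
    for t in range(1, n_probes):
        x[:, t] = rho * x[:, t - 1] + rng.normal(
            scale=sigma * np.sqrt(1.0 - rho**2), size=trials)
    return x.mean(axis=1).var()

if __name__ == "__main__":
    for n in (5, 20, 100, 500):
        v = sample_mean_variance(n)
        print(f"probes={n:>4}  var(mean)={v:.4f}  i.i.d. prediction={1.0 / n:.4f}")
    # var(mean) flattens out near 2*sigma^2*tau/interval instead of shrinking
    # like 1/n: packing more probes into the same interval cannot push the
    # estimation error below that floor.
```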

Cited by 34 publications (39 citation statements); references 37 publications (60 reference statements). Citing publications span 2007 to 2015.
“…While active measurement-based compliance monitoring has received some attention in the past, e.g., [18], there has been little validation in realistic environments where a reliable basis for comparison can be established. There has been limited work addressing the accuracy of some active measurement approaches; exceptions are found in [9], [32], [37]. Since the guarantee of performance metrics in SLAs is often explicitly tied to the collection of revenue from customers, statistical validity and accuracy of measurements as well as more practical issues such as loss of measurement data are of critical importance [34].…”
Section: Related Work (mentioning; confidence: 99%)
“…Although passive measurements (e.g., via SNMP) may provide high accuracy for a metric such as loss on a link-by-link basis, they may be insufficient for estimating the performance of customer traffic, since it is not possible with standard SNMP data to evaluate per-flow performance. Thus, although there are situations where active measurements may be too heavyweight or yield inaccurate results [9], [32], [37], they nonetheless remain a key mechanism for SLA compliance monitoring.…”
Section: Introduction (mentioning; confidence: 99%)
“…Choosing the optimal duration S requires complete knowledge of the link loss process at small time-scales and of the effect of active probes on the network. These questions have been partially addressed in the literature [25,27] and are outside the scope of this paper. Instead, we use a heuristic that chooses S = 1000 in our simulations and experiments.…”
Section: Performance Model (mentioning; confidence: 99%)
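The statement above leaves the window length S as a heuristic choice. The sketch below is a generic illustration of the trade-off behind that choice, not the cited paper's method: it assumes S counts probes per estimation window and uses a hypothetical Bernoulli loss process whose rate jumps halfway through the probe stream, showing that small windows react quickly but are noisy while large windows are smooth but slow.

```python
import numpy as np

def windowed_loss_estimates(losses, S):
    """Loss-rate estimates over consecutive, non-overlapping windows of S probes."""
    n_windows = len(losses) // S
    return losses[: n_windows * S].reshape(n_windows, S).mean(axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_half = 50_000
    # Hypothetical probe loss indicators: the loss rate jumps from 1% to 5%
    # halfway through the probe stream.
    losses = np.concatenate([rng.random(n_half) < 0.01,
                             rng.random(n_half) < 0.05]).astype(float)
    for S in (100, 1_000, 10_000):
        est = windowed_loss_estimates(losses, S)
        noise = est[: n_half // S].std()        # variability while the rate is stable
        after = est[n_half // S:]
        # Probes elapsed until a window estimate exceeds 3% after the change.
        detect = S * (1 + int(np.argmax(after > 0.03)))
        print(f"S={S:>6}  estimator std={noise:.4f}  probes to flag the change≈{detect}")
    # Small S reacts quickly but is noisy; large S is stable but slow to reflect
    # a change, which is why S ends up being chosen heuristically.
```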
“…In such cases, the router queue may fill up and result in a loss before the sender receives and responds to this congestion feedback. End-host-based measurements of the state of the queue essentially entail sampling the queue at certain times, and the fundamental limits of such measurements have recently been highlighted [26,27]. Problems of oversampling the queue lengths in router-based RED mechanisms have been studied in [16].…”
Section: Overview of Related Work (mentioning; confidence: 99%)
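The quoted statement describes end-host measurement as sampling the queue at certain times. A minimal sketch of that limitation, assuming a toy discrete-time queue with Poisson arrivals (not the model of the cited papers), is given below: congestion episodes shorter than the probe spacing are simply never observed, even though the probe-based time-fraction estimate is roughly unbiased.

```python
import numpy as np

def simulate_queue(n_slots, load=0.9, seed=2):
    """Toy discrete-time FIFO queue: Poisson arrivals, one departure per slot.
    A generic illustration, not the model used in the cited papers."""
    rng = np.random.default_rng(seed)
    arrivals = rng.poisson(load, n_slots)
    q = np.empty(n_slots, dtype=int)
    backlog = 0
    for t in range(n_slots):
        backlog = max(backlog + arrivals[t] - 1, 0)
        q[t] = backlog
    return q

def episodes_above(q, threshold):
    """(start, end) slot ranges during which the queue is at or above threshold."""
    above = q >= threshold
    edges = np.diff(above.astype(int))
    starts = np.flatnonzero(edges == 1) + 1
    ends = np.flatnonzero(edges == -1) + 1
    if above[0]:
        starts = np.r_[0, starts]
    if above[-1]:
        ends = np.r_[ends, len(q)]
    return list(zip(starts, ends))

if __name__ == "__main__":
    q = simulate_queue(200_000)
    threshold, spacing = 20, 200            # probe the queue once every 200 slots
    probes = np.arange(0, len(q), spacing)
    eps = episodes_above(q, threshold)
    seen = sum(any(s <= p < e for p in probes) for s, e in eps)
    print(f"time fraction above threshold:  {np.mean(q >= threshold):.4f}")
    print(f"probe fraction above threshold: {np.mean(q[probes] >= threshold):.4f}")
    print(f"congestion episodes: {len(eps)}, episodes observed by a probe: {seen}")
    # Episodes shorter than the probe spacing are routinely missed: one face of
    # the sampling limits the quoted statement refers to.
```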
“…In order to evaluate the impact of higher sampling, we computed the instantaneous RTT upon the receipt of each acknowledgment and used a simple fixed threshold for determining that the flow is in the high-congestion state. Taking RTT samples on each packet addresses some of the concerns raised about end-host measurements [27] and reduces sampling errors. Surprisingly, as shown in the graph, the prediction efficiency of this signal was higher than that of Vegas for the six test cases of the traffic that we have considered.…”
Section: Improving Congestion Prediction (mentioning; confidence: 99%)
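The quoted approach, computing an instantaneous RTT on each acknowledgment and comparing it against a fixed threshold, can be sketched as follows. The class name, the 150 ms threshold, and the timestamp bookkeeping are illustrative assumptions, not the cited paper's implementation.

```python
import time

class RttCongestionDetector:
    """Per-ACK RTT sampling with a fixed threshold, in the spirit of the quoted
    statement. The threshold value and bookkeeping are illustrative assumptions."""

    def __init__(self, threshold_s=0.150):
        self.threshold_s = threshold_s      # fixed RTT threshold (assumed 150 ms)
        self.sent = {}                      # seq -> send timestamp

    def on_send(self, seq, now=None):
        self.sent[seq] = time.monotonic() if now is None else now

    def on_ack(self, seq, now=None):
        """Return (rtt, high_congestion) for this ACK, or None for an unknown seq."""
        sent_at = self.sent.pop(seq, None)
        if sent_at is None:
            return None                     # duplicate or unexpected ACK
        now = time.monotonic() if now is None else now
        rtt = now - sent_at
        return rtt, rtt > self.threshold_s

if __name__ == "__main__":
    det = RttCongestionDetector()
    # Replay a hypothetical trace of (event, seq, timestamp-in-seconds).
    trace = [("send", 1, 0.000), ("send", 2, 0.010),
             ("ack", 1, 0.080), ("ack", 2, 0.210)]
    for event, seq, ts in trace:
        if event == "send":
            det.on_send(seq, now=ts)
        else:
            rtt, congested = det.on_ack(seq, now=ts)
            print(f"ack {seq}: rtt={rtt * 1000:.0f} ms  high_congestion={congested}")
```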