Sifter
Proceedings of the ACM Symposium on Cloud Computing 2019
DOI: 10.1145/3357223.3362736

Cited by 26 publications (9 citation statements)
References 9 publications

“…To ensure the captured data is useful, sampling decisions are coherent per request - a trace is either sampled in its entirety, capturing the full end-to-end execution, or not at all. Sampling effectively reduces computational overheads; these overheads are only paid if a trace is sampled, so they can be easily reduced by reducing the sampling probability (see [22][23][24][25] with references therein).…”
Section: Related Work (mentioning)
confidence: 99%
“…This approach, known as head-based sampling, avoids the runtime costs of generating trace data as it occurs uniformly at random, and the resulting data is simply a random subset of requests. In practice, sampling rates can be as low as 0.1% (see [22]).…”
Section: Related Work (mentioning)
confidence: 99%
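
The two excerpts above describe head-based sampling, where a single decision taken at the start of a request governs the whole trace. The sketch below illustrates that idea in Python; the names head_sample and SAMPLE_RATE, the 0.1% rate, and the trace-id hashing scheme are illustrative assumptions, not details taken from Sifter or the citing papers.

import hashlib

SAMPLE_RATE = 0.001  # e.g. 0.1%, the low end mentioned in the excerpt above

def head_sample(trace_id: str, rate: float = SAMPLE_RATE) -> bool:
    """Decide once, at the head of a request, whether to record its trace.

    Hashing the trace id keeps the decision deterministic, so every
    service that sees the same trace id reaches the same verdict and the
    trace is captured end to end or not at all.
    """
    digest = hashlib.sha256(trace_id.encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    return bucket < rate

# Example: the root service decides; downstream spans inherit the verdict.
print(head_sample("4bf92f3577b34da6a3ce929d0e0e4736"))
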
“…For a review of tracing tools for large distributed environments, the reader is referred to the survey of Las-Casas et al. [34]. Numerous projects are available to achieve this goal, most notably Zipkin [35] and Opentelemetry [36].…”
Section: Figure (mentioning)
confidence: 99%
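
Since the excerpt above points to Zipkin and OpenTelemetry, the snippet below is a small configuration sketch assuming the Python OpenTelemetry SDK (the opentelemetry-api and opentelemetry-sdk packages). It installs a parent-respecting, ratio-based sampler of the head-based kind discussed earlier; the 0.001 rate and the span name "handle_request" are illustrative choices, not values taken from the cited works.

from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.sampling import ParentBased, TraceIdRatioBased

# Sample roughly 0.1% of root traces; child spans follow the parent's
# decision, which keeps each trace coherent end to end.
provider = TracerProvider(sampler=ParentBased(TraceIdRatioBased(0.001)))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)
with tracer.start_as_current_span("handle_request"):
    pass  # application work; spans are recorded only if the trace was sampled
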
“…The data used for observation is collected from multiple sources, meaning they must be aligned carefully to reconstruct the entire sequence of events that led to the failure. Logs, metrics, and traces offer comprehensive visibility into the behavior of the different systems, but correlating traces manually is difficult for humans [3]. Uniting fragments of the process across services is hard to do without a proper automated approach.…”
Section: Problem Statement (mentioning)
confidence: 99%