Proceedings of the Third ACM Symposium on Cloud Computing 2012
DOI: 10.1145/2391229.2391235
How consistent is your cloud application?

Abstract: Current cloud datastores usually trade consistency for performance and availability. However, it is often not clear how an application is affected when it runs under a low level of consistency. In fact, current application designers have basically no tools that would help them to get a feeling of which and how many inconsistencies actually occur for their particular application. In this paper, we propose a generalized approach for detecting consistency anomalies for arbitrary cloud applications accessing vario…

Cited by 35 publications (18 citation statements) · References 23 publications (17 reference statements)
“…For instance, the de facto standard YCSB [29] and its extensions [30], [31] introduced database benchmarking based on CRUD interfaces, which are more compatible with modern NoSQL stores like Apache Cassandra [32]. Other approaches such as OLTPBench [33] or BenchFoundry [34], [35] aim to build comprehensive multi-quality benchmarking platforms that also include measurement approaches for qualities beyond performance, e.g., data consistency [10], [36]- [39] or elastic scalability [11], [12]. Beyond this, there are a number of approaches studying performance impacts of TLS on NoSQL datastores [40]- [42], web services [43], and web servers [44].…”
Section: Benchmarking
confidence: 99%
“…Wada et al. evaluated the staleness of Amazon's SimpleDB using end-user request tracing [81], while Bermbach and Tai evaluated Amazon S3 [22], each quantifying various forms of non-serializable behavior. Golab et al. provide algorithms for verifying the linearizability and sequential consistency of arbitrary data stores [51], and Zellag and Kemme provide algorithms for verifying their serializability [85] and other cycle-based isolation anomalies [86]. Probabilistically Bounded Staleness provides time- and version-based staleness predictions for eventually consistent data stores [18].…”
Section: Related Work
confidence: 99%
“…Zellag and Kemme [31] also present an alternative approach for counting consistency anomalies in arbitrary cloud-based applications on transactional and non-transactional datastores. At runtime, their approach builds a dependency graph and detects cycles, i.e., consistency violations.…”
Section: Related Work
confidence: 99%
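The last citation statement summarizes the core idea of the paper: model operations or transactions as nodes in a dependency graph and report a consistency anomaly whenever that graph contains a cycle. The following is a minimal sketch of that cycle-detection step, not Zellag and Kemme's actual runtime implementation; the dictionary-based graph encoding and the transaction names are illustrative assumptions.

```python
# Illustrative sketch only: flag a consistency anomaly when the
# operation/transaction dependency graph contains a cycle.
# (The real system builds this graph at runtime from datastore accesses.)

def has_cycle(graph):
    """Return True if the directed graph (dict: node -> list of
    successors) contains a cycle, using iterative DFS with
    three-color marking (white = unvisited, gray = on the current
    DFS path, black = fully explored)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in graph}
    for start in graph:
        if color[start] != WHITE:
            continue
        color[start] = GRAY
        stack = [(start, iter(graph[start]))]
        while stack:
            node, successors = stack[-1]
            advanced = False
            for nxt in successors:
                if color.get(nxt, WHITE) == GRAY:
                    return True  # back edge: cycle, i.e., an anomaly
                if color.get(nxt, WHITE) == WHITE:
                    color[nxt] = GRAY
                    stack.append((nxt, iter(graph.get(nxt, []))))
                    advanced = True
                    break
            if not advanced:
                color[node] = BLACK  # all successors explored
                stack.pop()
    return False

# Two transactions that each depend on the other form a cycle:
print(has_cycle({"T1": ["T2"], "T2": ["T1"]}))  # True  -> anomaly
print(has_cycle({"T1": ["T2"], "T2": []}))      # False -> consistent
```

The hypothetical example graphs use transaction IDs as nodes; in the cited approach the edges would correspond to observed read/write dependencies between operations.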