Transactions can simplify distributed applications by hiding data distribution, concurrency, and failures from the application developer. Ideally the developer would see the abstraction of a single large machine that runs transactions sequentially and never fails. This requires the transactional subsystem to provide opacity (strict serializability for both committed and aborted transactions), as well as transparent fault tolerance with high availability. As even the best abstractions are unlikely to be used if they perform poorly, the system must also provide high performance. Existing distributed transactional designs either weaken this abstraction or are not designed for the best performance within a data center. This paper extends the design of FaRM, which provides strict serializability only for committed transactions, to provide opacity while maintaining FaRM's high throughput, low latency, and high availability within a modern data center. It uses timestamp ordering based on real time with clocks synchronized to within tens of microseconds across a cluster, and a failover protocol to ensure correctness across clock master failures. FaRM with opacity can commit 5.4 million new-order transactions per second when running the TPC-C transaction mix on 90 machines with 3-way replication.
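The abstract does not spell out the protocol; the sketch below is a minimal illustration of the general technique of real-time timestamp ordering under bounded clock uncertainty, similar in spirit to commit-wait schemes. All names and the 50 µs bound are assumptions, not details of FaRM's actual design.

```python
import time

# Assumed bound: clocks synchronized to within tens of microseconds.
CLOCK_UNCERTAINTY_US = 50

def now_us():
    """Current real time in microseconds (assumes a synchronized clock source)."""
    return time.time_ns() // 1_000

def assign_commit_timestamp(last_committed_ts_us):
    """Pick a commit timestamp from real time, never going backwards.

    After choosing the timestamp, busy-wait until every machine's local
    clock is guaranteed to have passed it ("commit wait"), so no machine
    can still read at a local time earlier than the commit timestamp.
    """
    ts = max(now_us(), last_committed_ts_us + 1)
    while now_us() < ts + CLOCK_UNCERTAINTY_US:
        pass
    return ts
```

With tighter clock synchronization the wait shrinks, which is why microsecond-level synchronization matters for keeping latency low.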
Observational indications support the hypothesis that many large earthquakes are preceded by accelerating-decelerating seismic release rates, which are described by a power-law time-to-failure relation. In the present work, a unified theoretical framework is discussed based on the ideas of non-extensive statistical physics along with fundamental principles of physics such as energy conservation in a faulted crustal volume undergoing stress loading. We define a generalized Benioff strain function Ω_ξ(t) = Σ_{i=1}^{n(t)} E_i^ξ, where E_i is the earthquake energy and 0 ≤ ξ ≤ 1, and derive a time-to-failure power law of Ω_ξ(t) for a fault system that obeys a hierarchical distribution law extracted from Tsallis entropy. In the time-to-failure power law followed by Ω_ξ(t), the existence of a common exponent m_ξ, which is a function of the non-extensive entropic parameter q, is demonstrated. An analytic expression that connects m_ξ with the Tsallis entropic parameter q and the b value of the Gutenberg–Richter law is derived. In addition, the range of q and b values that could drive the system into an accelerating stage and to failure is discussed, along with precursory variations of m_ξ resulting from the precursory b-value anomaly. Finally, our calculations based on Tsallis entropy and energy conservation give a new view of the empirical laws derived in the literature, relating the average generalized Benioff strain rate during the accelerating period to the background rate and connecting the model parameters with the expected magnitude of the main shock.
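The generalized Benioff strain defined above is straightforward to compute from an earthquake catalog. A minimal sketch follows; the time-to-failure form Ω(t) = A + B(t_f − t)^m is the widely used expression from the accelerating-seismic-release literature, included for context rather than quoted from this abstract.

```python
import numpy as np

def generalized_benioff_strain(energies, xi=0.5):
    """Cumulative generalized Benioff strain Omega_xi(t) = sum_i E_i^xi,
    computed over events in time order. xi=0.5 gives the classic Benioff
    strain (square root of energy); xi must lie in [0, 1]."""
    return np.cumsum(np.asarray(energies, dtype=float) ** xi)

def time_to_failure_model(t, A, B, tf, m):
    """Common power-law time-to-failure form: Omega(t) = A + B*(tf - t)**m,
    where tf is the failure time and m the critical exponent."""
    return A + B * (tf - t) ** m
```

For an accelerating sequence, B is negative and 0 < m < 1, so the cumulative strain curves upward as t approaches t_f.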
A priori, locking seems easy: to protect shared data from concurrent accesses, it is sufficient to lock before accessing the data and unlock after. Nevertheless, making locking efficient requires fine-tuning (a) the granularity of locks and (b) the locking strategy for each lock and possibly each workload. As a result, locking can become very complicated to design and debug. We present GLS, a middleware that makes lock-based programming simple and effective. GLS offers the classic lock-unlock interface of locks. However, in contrast to classic lock libraries, GLS does not require any effort from the programmer for allocating and initializing locks, nor for selecting the appropriate locking strategy. With GLS, all these intricacies of locking are hidden from the programmer. GLS is based on GLK, a generic lock algorithm that dynamically adapts to the contention level on the lock object. GLK is able to deliver the best performance among simple spinlocks, scalable queue-based locks, and blocking locks. Furthermore, GLS offers several debugging options for easily detecting various lock-related issues, such as deadlocks. We evaluate GLS and GLK on two modern hardware platforms, using several software systems (i.e., HamsterDB, Kyoto Cabinet, Memcached, MySQL, SQLite) and show how GLK improves their performance by 23% on average, compared to their default locking strategies. We illustrate the simplicity of using GLS and its debugging facilities by rewriting the synchronization code for Memcached and detecting two potential correctness issues.
CCS Concepts: • Computing methodologies → Shared memory algorithms; Concurrent algorithms; • Computer systems organization → Multicore architectures.
Keywords: Locking; Adaptive Locking; Locking Middleware; Locking Runtime; Synchronization; Multi-cores; Performance
* Work done while the author was at EPFL. Currently at Google.
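As a rough illustration of what "dynamically adapts to the contention level" can mean (this is not GLK's actual algorithm, and the spin count is made up), a lock can spin optimistically while uncontended and fall back to blocking when spinning fails:

```python
import threading

class AdaptiveLock:
    """Toy sketch of spin-then-block adaptation: cheap busy-waiting when the
    lock is mostly free, OS-level blocking under sustained contention."""

    SPIN_TRIES = 1000  # hypothetical threshold; a real system would tune this

    def __init__(self):
        self._lock = threading.Lock()

    def acquire(self):
        # Optimistic spinning: succeeds quickly under low contention
        # without paying the cost of descheduling the thread.
        for _ in range(self.SPIN_TRIES):
            if self._lock.acquire(blocking=False):
                return
        # Contended: block so the OS can run the lock holder instead.
        self._lock.acquire()

    def release(self):
        self._lock.release()
```

A real adaptive lock such as GLK also switches to scalable queue-based locking at intermediate contention levels and tracks contention statistics over time; this sketch only shows the two extremes.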
† Author names appear in alphabetical order.
On 27 September 2021, a shallow earthquake with a focal depth of 10 km and a moment magnitude of Mw 6.0 occurred onshore in central Crete (Greece). The evolution of possible preseismic patterns in the area of central Crete before the Mw 6.0 event was investigated by applying the method of multiresolution wavelet analysis (MRWA), along with that of natural time (NT). The monitoring of preseismic patterns by critical parameters defined by NT analysis, integrated with the results of MRWA as the initiation point for the NT analysis, forms a promising framework that may lead to new universal principles describing the evolution patterns before strong earthquakes. Initially, we apply MRWA to the interevent time series of the successive regional earthquakes in order to investigate the approach of the regional seismicity towards critical stages and to define the starting point of the natural time domain. Then, using the results of MRWA, we apply the NT analysis, showing that the regional seismicity approached criticality for a prolonged period of ~40 days before the occurrence of the Mw 6.0 earthquake, when the κ1 natural time parameter reached the critical value of κ1 = 0.070, as suggested by the NT method.
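The κ1 parameter referred to above is, in the natural time literature, the variance of "natural time" χ_k = k/N weighted by the normalized earthquake energies. A minimal sketch of the computation:

```python
import numpy as np

def kappa1(energies):
    """Natural time kappa_1: variance of chi_k = k/N under the weights
    p_k = E_k / sum(E), i.e. <chi^2> - <chi>^2, for N events in time order."""
    E = np.asarray(energies, dtype=float)
    N = len(E)
    chi = np.arange(1, N + 1) / N      # natural time of the k-th event
    p = E / E.sum()                    # energy-normalized weights
    return float(np.sum(p * chi ** 2) - np.sum(p * chi) ** 2)
```

For equal-energy events κ1 approaches 1/12 ≈ 0.083 as N grows; the NT method interprets a drop to κ1 = 0.070 as a signature of criticality.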
A widely felt strong shallow earthquake of magnitude Mw 6.3 occurred in Thessaly (Central Greece) on March 3, 2021. This recent strong event prompted us to apply and evaluate the capabilities of the Accelerating Deformation method. Based on the recently proposed generalized Benioff strain idea, which can be justified in terms of Non-Extensive Statistical Physics (NESP), the common critical exponent was calculated in order to define the critical stage before a strong event. The present analysis comprised a complex spatiotemporal iterative procedure to examine the possible seismicity patterns over a broad region and identify the one best associated with the preparation process before the strong event. The starting time of the accelerating period and the size and location of the critical area are unknown parameters to be determined. Furthermore, although the time of failure is already known, it was not set as a fixed value in the algorithm used to define the other unknown parameters; instead, different catalogue ending dates were tried so that it could be determined in an objective way. The broad region under investigation was divided by a square mesh, and events around each grid point were searched within circular and elliptical areas of varying size. Among the obtained results, we present the solution exhibiting the most dominant scaling-law behavior, as well as the one with the smallest spatial area that still exhibits dominant scaling-law behavior.
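One common way in the accelerating-deformation literature to score how "dominant" the scaling-law behavior of a candidate region is (an assumption here; the paper's exact selection criterion may differ) is a curvature parameter: the ratio of the residual of a power-law fit to that of a straight-line fit of the cumulative strain, with values well below 1 indicating acceleration.

```python
import numpy as np

def curvature_parameter(t, omega, tf):
    """Ratio of power-law to linear fit RMS residuals for cumulative strain
    omega(t); C << 1 indicates accelerating (power-law) behavior.
    The power law Omega = A + B*(tf - t)**m is fit by a grid search over m."""
    t = np.asarray(t, dtype=float)
    omega = np.asarray(omega, dtype=float)
    # Straight-line least-squares fit.
    lin = np.polyval(np.polyfit(t, omega, 1), t)
    rms_lin = np.sqrt(np.mean((omega - lin) ** 2))
    # Power-law fit: for each trial exponent m, A and B are linear parameters.
    best = np.inf
    for m in np.linspace(0.1, 0.9, 17):
        x = (tf - t) ** m
        res = omega - np.polyval(np.polyfit(x, omega, 1), x)
        best = min(best, np.sqrt(np.mean(res ** 2)))
    return best / rms_lin
```

An iterative search like the one described in the abstract would evaluate this score for every grid point, search-area size and shape, and catalogue ending date, keeping the candidates with the lowest values.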
Greece exhibits the highest seismic activity in Europe, manifested in intense seismicity with large magnitude events and frequent earthquake swarms. In the present work, we analyzed the spatiotemporal properties of recent earthquake swarms that occurred in the broader area of Greece using the Non-Extensive Statistical Physics (NESP) framework, which appears suitable for studying complex systems. The behavior of complex systems, where multifractality and strong correlations among the elements of the system exist, as in tectonic and volcanic environments, can adequately be described by Tsallis entropy (Sq), introducing the q-exponential function and the entropic parameter q that expresses the degree of non-additivity of the system. Herein, we focus the analysis on the 2007 Trichonis Lake, the 2016 Western Crete, the 2021–2022 Nisyros, the 2021–2022 Thiva and the 2022 Pagasetic Gulf earthquake swarms. Using the seismicity catalogs for each swarm, we investigate the inter-event time (T) and distance (D) distributions with the q-exponential function, providing the qT and qD entropic parameters. The results show that qT varies from 1.44 to 1.58, whereas qD ranges from 0.46 to 0.75 for the inter-event time and distance distributions, respectively. Furthermore, we describe the frequency–magnitude distributions with the Gutenberg–Richter scaling relation and the fragment–asperity model of earthquake interactions derived within the NESP framework. The results of the analysis indicate that the statistical properties of earthquake swarms can be successfully reproduced by means of NESP and confirm the complexity and non-additivity of the spatiotemporal evolution of seismicity. Finally, the superstatistics approach, which is closely connected to NESP and is based on a superposition of ordinary local equilibrium statistical mechanics, is further used to discuss the temporal patterns of the earthquake evolution during the swarms.
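The q-exponential function underlying these fits is e_q(x) = [1 + (1 − q)x]^{1/(1−q)}, which reduces to the ordinary exponential as q → 1. A minimal sketch; the survival-function parameterization below is an assumption for illustration, and the papers' exact fitting form may differ.

```python
import numpy as np

def q_exponential(x, q):
    """Tsallis q-exponential e_q(x) = [1 + (1-q)x]^(1/(1-q));
    defined as 0 where the bracket is non-positive, and -> exp(x) as q -> 1."""
    x = np.asarray(x, dtype=float)
    if abs(q - 1.0) < 1e-9:
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    out = np.zeros_like(base)
    mask = base > 0
    out[mask] = base[mask] ** (1.0 / (1.0 - q))
    return out

def survival_probability(times, tau, q):
    """Hypothetical model P(>T) for inter-event times as a q-exponential
    decay with characteristic time tau; q > 1 gives a heavy tail."""
    return q_exponential(-np.asarray(times, dtype=float) / tau, q)
```

Fitting the observed inter-event time (or distance) survival distribution with this form yields the qT (or qD) values reported in the abstract.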
Distributed transactions on modern RDMA clusters promise high throughput and low latency for scale-out workloads. As such, they can be particularly beneficial to large OLTP workloads, which require both. However, achieving good performance requires tuning the physical layout of the data store to the application and the characteristics of the underlying hardware. Manually tuning the physical design is error-prone, as well as time-consuming, and it needs to be repeated when the workload or the hardware change. In this paper we present SPADE, a physical design tuner for OLTP workloads in FaRM, a main memory distributed computing platform that leverages modern networks with RDMA capabilities. SPADE automatically decides on the partitioning of data, tunes the index and storage parameters, and selects the right mix of direct remote data accesses and function shipping to maximize performance. To achieve this, SPADE combines information derived from the workload and the schema with low-level hardware and network performance characteristics gathered through micro-benchmarks. Using SPADE, the tuned physical design achieves significant throughput and latency improvements over a manual design for two widely used OLTP benchmarks, TATP and TPC-C, sometimes using counter-intuitive tuning decisions.
CCS Concepts: • Information systems → Relational parallel and distributed DBMSs; Distributed storage; Record and block layout; • Networks → Network performance modeling.
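As a toy illustration of the trade-off between direct remote accesses and function shipping that a tuner like SPADE navigates (the cost model, parameter names, and numbers below are hypothetical, not SPADE's actual design):

```python
def choose_access_method(num_remote_reads, rdma_read_us, rpc_us, server_exec_us):
    """Pick between fetching each record with a one-sided RDMA read and
    shipping the whole operation to the data's server as a single RPC.

    Direct access pays one RDMA round trip per record; function shipping
    pays one RPC round trip plus server-side execution time. The latency
    inputs would come from micro-benchmarks on the actual hardware.
    """
    direct_cost = num_remote_reads * rdma_read_us
    shipping_cost = rpc_us + server_exec_us
    return "direct" if direct_cost <= shipping_cost else "ship"
```

Even this crude model shows why the right answer depends on measured hardware characteristics: with fast RDMA reads, single-record lookups favor direct access, while multi-record operations amortize the RPC cost and favor shipping.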