We introduce a new method to determine the magnitude of completeness Mc and its uncertainty. Our method models the entire magnitude range (EMR method), consisting of the self-similar complete part of the frequency-magnitude distribution and the incomplete portion, thus providing a comprehensive seismicity model. We compare the EMR method with three existing techniques and find that EMR performs best when applied to synthetic test cases or to real data from regional and global earthquake catalogues, although it is also the most computationally intensive. Accurate knowledge of Mc is essential for many seismicity-based studies, particularly for mapping seismicity parameters such as the b-value of the Gutenberg-Richter relationship. By explicitly computing the uncertainties in Mc with a bootstrap approach, we show that uncertainties in b-values are larger than traditionally assumed, especially for small sample sizes. As an example, we investigate temporal variations of Mc for the 1992 Landers aftershock sequence and find that former techniques underestimated it on average by 0.2. Mapping Mc on a global scale reveals considerable spatial variations for the Harvard Centroid Moment Tensor (CMT) catalogue (5.3 ≤ Mc ≤ 6.0) and the International Seismological Centre (ISC) catalogue (4.3 ≤ Mc ≤ 5.0).
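The bootstrap idea for quantifying b-value uncertainty can be sketched as follows. This is an illustrative Python sketch using the simple Aki maximum-likelihood estimator on a synthetic catalogue, not the EMR fit itself; all parameter values are made up for the example:

```python
import numpy as np

def b_value_mle(mags, mc):
    """Aki (1965) maximum-likelihood b-value for magnitudes >= mc
    (continuous magnitudes; for binned catalogues use mc - dm/2)."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - mc)

def bootstrap_b(mags, mc, n_boot=1000, seed=0):
    """Resample the catalogue with replacement to estimate the
    mean and standard deviation of the b-value."""
    rng = np.random.default_rng(seed)
    bs = [b_value_mle(rng.choice(mags, size=mags.size, replace=True), mc)
          for _ in range(n_boot)]
    return float(np.mean(bs)), float(np.std(bs))

# Synthetic Gutenberg-Richter catalogue: true b = 1.0, complete above Mc = 2.0
rng = np.random.default_rng(42)
mags = 2.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=500)
b_mean, b_std = bootstrap_b(mags, mc=2.0)
```

With only 500 events, the bootstrap spread of the b-value is of the order b/√n ≈ 0.045, which illustrates the abstract's point that b-value uncertainties are easily underestimated for small samples.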
The 2013 European Seismic Hazard Model (ESHM13) results from a community-based probabilistic seismic hazard assessment supported by the EU-FP7 project "Seismic Hazard Harmonization in Europe" (SHARE, 2009-2013). The ESHM13 is a consistent seismic hazard model for Europe and Turkey that overcomes the limitation of national borders and includes a thorough quantification of the uncertainties. It is the first completed regional effort contributing to the "Global Earthquake Model" initiative. It may serve as a reference model for various applications, from earthquake preparedness to earthquake risk mitigation strategies, including the update of the European seismic regulations for building design (Eurocode 8), and is thus useful for future safety assessment and improvement of private and public buildings. Although its results constitute a reference for Europe, they do not replace the existing national design regulations that are in place for seismic design and construction of buildings. The ESHM13 represents a significant improvement over previous efforts, as it is based on (1) the compilation of updated and harmonised versions of the databases required for probabilistic seismic hazard assessment, (2) the adoption of standard procedures and robust methods, especially for expert elicitation and consensus building among hundreds of European experts, (3) multi-disciplinary input from all branches of earthquake science and engineering, (4) the direct involvement of the CEN/TC250/SC8 committee in defining output specifications relevant for Eurocode 8, and (5)
During an Enhanced Geothermal System (EGS) experiment, fluid is injected at high pressure into crystalline rock to enhance its permeability and thus create a reservoir from which geothermal heat can be extracted. The fracturing of the basement caused by these high pore pressures is associated with microseismicity. However, the relationship between the magnitudes of these induced seismic events and the applied fluid injection rates, and thus pore pressure, is unknown. Here we show how pore pressure can be linked to the seismic frequency-magnitude distribution, described by its slope, the b-value. We evaluate the dataset of an EGS in Basel, Switzerland, and compare the observed event-size distribution with the outcome of a minimalistic model of pore-pressure evolution that relates event sizes to the differential stress σD. We observe that the decrease of b-values with increasing distance from the injection point is likely caused by a decrease in pore pressure. This leads to an increase in the probability of a large-magnitude event with distance and time.
The Southern California Seismic Network (SCSN) has produced the SCSN earthquake catalog from 1932 to the present, a period of more than 77 yrs. This catalog consists of phase picks, hypocenters, and magnitudes. We present the history of the SCSN and the evolution of the catalog, to facilitate user understanding of its limitations and strengths. Hypocenters and magnitudes have improved in quality with time, as the number of stations has increased gradually from 7 to ∼400 and the data acquisition and measuring procedures have become more sophisticated. The magnitude of completeness (Mc) of the network has improved from Mc ∼ 3.25 in the early years to Mc ∼ 1.8 at present, or better in the most densely instrumented areas. Mainshock-aftershock and swarm sequences and scattered individual background earthquakes characterize the seismicity of more than 470,000 events. The earthquake frequency-size distribution has an average b-value of ∼1.0, with M ≥ 6.0 events occurring approximately every 3 yrs. The three largest earthquakes recorded were 1952 Mw 7.5 Kern County, 1992 Mw 7.3 Landers, and 1999 Mw 7
We present a new method for estimating earthquake detection probabilities that avoids assumptions about earthquake occurrence, such as the event-size distribution, and uses only empirical data: phase data, station information, and network-specific attenuation relations. First, we determine the detection probability for each station as a function of magnitude and hypocentral distance, using data from past earthquakes. Second, we combine the detection probabilities of the stations using a basic combinatoric procedure to determine the probability that a hypothetical earthquake of a given size and location could escape detection. Finally, we synthesize detection-probability maps for earthquakes of particular magnitudes, as well as probability-based completeness maps. Because the method relies only on the detection probabilities of stations, it can also be used to evaluate hypothetical additions or deletions of stations, as well as scenario computations of a network crisis. The new approach has several advantages: completeness is analyzed as a function of network properties instead of earthquake samples, so no event-size distribution is assumed, and estimating completeness becomes possible in regions of sparse data where methods based on parametric earthquake catalogs fail. We find that the catalog of the Southern California Seismic Network (SCSN) has, for most of the region, a lower magnitude of completeness than that computed using traditional techniques, although in some places traditional techniques provide lower estimates. The network reliably records earthquakes smaller than magnitude 1.0 in some places and 1.0 in the seismically active regions. However, it does not achieve the desired completeness of magnitude ML 1.8 everywhere in its authoritative region. Complete detection is achieved at ML 3.4 in the entire authoritative region; thus, at the boundaries, earthquakes as large as ML 3.3 might escape detection.
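The combinatoric step of combining per-station detection probabilities can be illustrated with a Poisson-binomial computation. The probabilities and the four-pick requirement below are hypothetical numbers for illustration, not values from the SCSN study:

```python
import numpy as np

def prob_at_least_k(p_station, k):
    """Probability that at least k stations detect an event, given
    independent per-station detection probabilities
    (Poisson-binomial distribution via dynamic programming)."""
    n = len(p_station)
    dist = np.zeros(n + 1)   # dist[j] = P(exactly j stations detect)
    dist[0] = 1.0
    for p in p_station:
        dist[1:] = dist[1:] * (1.0 - p) + dist[:-1] * p
        dist[0] *= (1.0 - p)
    return float(dist[k:].sum())

# Hypothetical detection probabilities of six stations for one
# magnitude/location; assume a location requires >= 4 picks.
p = [0.95, 0.9, 0.8, 0.6, 0.3, 0.1]
p_detect = prob_at_least_k(p, 4)
p_escape = 1.0 - p_detect   # probability the event escapes detection
```

Because the map depends only on the station curves, the same routine can score a hypothetical station addition or outage by editing the `p` list.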
SUMMARY Geothermal energy is becoming an important clean energy source; however, the stimulation of a reservoir for an Enhanced Geothermal System (EGS) is associated with seismic risk due to induced seismicity. Seismicity occurring due to water injection at depth has to be well recorded and monitored. To mitigate the seismic risk of a damaging event, an appropriate alarm system needs to be in place for each individual experiment. In recent experiments, the so-called traffic-light alarm system, based on public response, local magnitude and peak ground velocity, was used. We aim to improve on the pre-defined alarm system by introducing a probability-based approach: we retrospectively model the ongoing seismicity in real time with multiple statistical forecast models and then translate the forecasts into seismic hazard in terms of probabilities of exceeding a ground motion intensity level. One class of models accounts for the water injection rate, the main parameter that can be controlled by the operators during an experiment. By translating the models into time-varying probabilities of exceeding various intensity levels, we provide tools that are well understood by decision makers and can be used to define non-exceedance thresholds during a reservoir stimulation; this, however, remains an entrepreneurial or political decision of the responsible project coordinators. We introduce forecast models based on the dataset of an EGS experiment in the city of Basel. Between 2006 December 2 and 8, approximately 11,500 m³ of water was injected into a 5-km-deep well at high pressures. A six-sensor borehole array was installed by the company Geothermal Explorers Limited (GEL) at depths between 300 and 2700 m around the well to monitor the induced seismicity. The network recorded approximately 11,200 events during the injection phase, more than 3500 of which were located.
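The translation of a forecast rate into a time-varying exceedance probability is typically done under a Poisson assumption. A minimal sketch, where the forecast rate and the decision horizon are made-up numbers, not values from the Basel study:

```python
import math

def poisson_exceedance_prob(rate_per_day, horizon_days):
    """Probability of at least one event exceeding a given intensity
    level within the horizon, assuming the exceeding events follow a
    Poisson process with the forecast rate."""
    return 1.0 - math.exp(-rate_per_day * horizon_days)

# Hypothetical forecast: 0.05 exceeding events/day, 6-hour horizon
p = poisson_exceedance_prob(0.05, 0.25)
```

A decision maker can then compare `p` against a pre-agreed threshold instead of reasoning about raw rates, which is the sense in which the probabilities are "well understood" tools.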
With the traffic-light system, actions were implemented after an ML 2.7 event: the water injection was reduced and then stopped after another ML 2.5 event. A few hours later, an earthquake of ML 3.4, felt within the city, occurred, which led to a bleed-off of the well. A risk study was later issued with the outcome that the experiment could not be resumed. We analyse the statistical features of the sequence and show that it is well modelled by the Omori-Utsu law following the termination of water injection. Based on this model, the sequence will take 31 (+29/−14) years to decay to the background level. We introduce statistical models based on the Reasenberg and Jones and Epidemic Type Aftershock Sequence (ETAS) models, commonly used to model aftershock sequences. We compare and test different model setups to simulate the sequences, varying the number of fixed and free parameters. For one class of ETAS models, we account for the flow rate at the injection borehole. We test the models against the observed data with standard likelihood tests and find that the ETAS model accounting for the flow rate performs best. Such a model may in future serve as...
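The Omori-Utsu decay behind the sequence-duration estimate can be sketched in a few lines. The parameter values below are illustrative placeholders, not the fitted Basel values:

```python
def omori_rate(t, K, c, p):
    """Omori-Utsu aftershock rate (events per day) at time t (days):
    rate(t) = K / (c + t)**p."""
    return K / (c + t) ** p

def time_to_background(K, c, p, mu):
    """Time (days) until the Omori-Utsu rate decays to the assumed
    background rate mu, obtained by inverting rate(t) = mu."""
    return (K / mu) ** (1.0 / p) - c

# Illustrative parameters: productivity K, onset delay c, exponent p
K, c, p = 200.0, 0.1, 1.1
mu = 0.01                          # assumed background rate, events/day
t_bg_days = time_to_background(K, c, p, mu)
t_bg_years = t_bg_days / 365.25
```

The decades-long tail in the abstract arises because for p near 1 the rate decays roughly as 1/t, so reaching a low background rate takes a time of order (K/mu)^(1/p).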
Constraints on the recurrence times of subduction zone earthquakes are important for seismic hazard assessment and mitigation. Models of such megathrust earthquakes often assume that subduction zones are segmented and that earthquakes occur quasi-periodically owing to constant tectonic loading. Here we analyse the occurrence of small earthquakes relative to larger ones (the b-value) on a 1,000-km-long section of the subducting Pacific Plate beneath central and northern Japan since 1998. We find that the b-values vary spatially and mirror the tectonic regime. For example, high b-values, indicative of low stress, occur in locations characterized by deep magma chambers, and low b-values, or high stress, occur where the subducting and overriding plates are strongly coupled. There is no significant variation in the low b-values to suggest that the plate interface is segmented in a way that might limit potential ruptures. Parts of the plate interface that ruptured during the 2011 Tohoku-oki earthquake were highly stressed in the years leading up to the earthquake. Although the stress was largely released during the 2011 rupture, we find that stress levels quickly recovered to pre-quake levels within just a few years. We conclude that large earthquakes may not have a characteristic location, size or recurrence interval, and might therefore occur more randomly distributed in time.

Elastic rebound theory, first introduced by Reid following the 1906 San Francisco earthquake (ref. 1), is one of the foundations of earthquake science and explains how tectonic forces load faults. It states that tectonic stresses build up on a fault over decades, to be released within a major earthquake in seconds. However, it is still unknown whether this release is complete and followed by a period of gradual reloading (and thus relative safety), or whether sufficient energy remains in the system to allow similar-size events more or less immediately.
To explore this question, we use a fundamental observation in seismology, the exponential relationship between the frequency and magnitude of earthquakes, known as the Gutenberg-Richter law (ref. 2): log10(N) = a − bM, where N is the number of events with magnitude equal to or above M, and a and b are constants. This relationship is commonly used to infer the occurrence rates of infrequent large and hazardous events from the productivity level (a-value) and size distribution (b-value) of abundant small-to-moderate-magnitude seismicity. Although the global average is b ≈ 1, local b-values show substantial spatial variations: in some volumes the proportion of larger magnitudes is higher (b < 1), in others the proportion of small magnitudes exceeds the average expectation (b > 1). Evidence from laboratory experiments (refs 3,4), numerical modelling (ref. 5) and natural seismicity (refs 6-8) indicates that b-values are negatively correlated with differential stress. Fault patches with high stress accumulation determined in this way have been observed to coincide with the locations of subsequent large earthquakes (refs 9,10). Low differential stress conditions, for exampl...
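The Gutenberg-Richter law and the effect of the b-value on the expected proportion of large events can be made concrete in a few lines; the a- and b-values used here are illustrative:

```python
def gr_count(m, a, b):
    """Gutenberg-Richter law: expected number of events with
    magnitude >= m, N(m) = 10**(a - b*m)."""
    return 10.0 ** (a - b * m)

# With a = 5 and b = 1, each unit of magnitude costs a factor of 10:
n3 = gr_count(3.0, a=5.0, b=1.0)   # 100 events with M >= 3
n5 = gr_count(5.0, a=5.0, b=1.0)   # 1 event with M >= 5

# A low b-value (high stress) predicts far more large events than a
# high b-value (low stress) for the same productivity a:
n6_low_b = gr_count(6.0, a=5.0, b=0.8)
n6_high_b = gr_count(6.0, a=5.0, b=1.2)
```

This factor-of-hundreds difference at M ≥ 6 is why mapping spatial b-value variations, as in the abstract above, matters for hazard.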
We discuss the impact of uncertainties in computed coseismic stress perturbations on the seismicity rate changes forecast through a rate- and state-dependent frictional model. We aim to understand how the variability of Coulomb stress changes affects the correlation between predicted and observed changes in the rate of earthquake production. We use the aftershock activity following the 1992 M7.3 Landers (California) earthquake as a case study. To accomplish these tasks, we first analyze the variability of stress changes resulting from the use of different published slip distributions. We find that the standard deviation of the uncertainty is of the same size as the absolute stress change and that their ratio, the coefficient of variation (CV), is approximately constant in space. This uncertainty has a strong impact on the forecast aftershock activity if a rate-and-state frictional model is considered. We use the early aftershocks to invert for the friction parameters and the coefficient of variation by means of the maximum-likelihood method. We show that, when the uncertainties are properly taken into account, the inversion yields stable results, which fit the spatiotemporal aftershock sequence. The analysis of the 1992 Landers sequence demonstrates that accounting for realistic uncertainties in stress changes strongly improves the correlation between modeled and observed seismicity rate changes. For this sequence, we measure a friction parameter Aσn ≈ 0.017 MPa and a coefficient of stress variation CV = 0.95.
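The seismicity-rate response to a coseismic Coulomb stress step follows Dieterich's (1994) closed-form rate-and-state solution. In the sketch below, only Aσn is of the order reported in the abstract; the stress step, relaxation time and background rate are assumed values for illustration:

```python
import numpy as np

def dieterich_rate(t, dcfs, r_bg, a_sigma, t_a):
    """Dieterich (1994) seismicity rate R(t) after a coseismic Coulomb
    stress step dcfs (MPa), relative to a background rate r_bg.
    a_sigma is the friction parameter A*sigma_n (MPa) and t_a the
    aftershock relaxation time (same time unit as t)."""
    gamma = (np.exp(-dcfs / a_sigma) - 1.0) * np.exp(-t / t_a)
    return r_bg / (1.0 + gamma)

a_sigma = 0.017   # MPa, of the order inverted for Landers
t_a = 10.0        # years, assumed relaxation time
r_bg = 1.0        # background rate, events/yr, assumed

# A positive 0.1 MPa step boosts the rate sharply at t = 0 and
# the rate then decays back towards the background level.
r0 = dieterich_rate(0.0, dcfs=0.1, r_bg=r_bg, a_sigma=a_sigma, t_a=t_a)
r_late = dieterich_rate(50.0, dcfs=0.1, r_bg=r_bg, a_sigma=a_sigma, t_a=t_a)
```

Because the initial rate jump scales as exp(dcfs/a_sigma), the stress-change uncertainties discussed in the abstract propagate exponentially into the forecast rates, which is why modeling them explicitly matters.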