Understanding and forecasting earthquake occurrences is presumably linked to understanding the stress distribution in the Earth's crust, which cannot be measured instrumentally with useful coverage. However, the size distribution of earthquakes, quantified by the Gutenberg‐Richter b value, is possibly a proxy for differential stress conditions and could thereby act as a crude stress meter wherever seismicity is observed. In this study, we improve the methodology of b value imaging for application to a high‐resolution 3‐D analysis of a complex fault network. In particular, we develop a distance‐dependent sampling algorithm and introduce a linearity measure to restrict our output to those regions where the magnitude distribution strictly follows a power law. We assess the catalog completeness along the fault traces using the Bayesian Magnitude of Completeness method and systematically image b values for 243 major fault segments in California. We identify and report b value structures, revisiting previously published features, e.g., the Parkfield asperity, and documenting additional anomalies, e.g., along the San Andreas and Northridge faults. Combining local b values with local earthquake productivity rates, we derive probability maps for the annual potential of one or more M6 events as indicated by the microseismicity of the last three decades. We present a physical concept of how different stressing conditions along a fault surface may lead to b value variation and explain nonlinear frequency‐magnitude distributions. Detailed spatial b value information and its physical interpretation can advance our understanding of earthquake occurrence and ideally lead to improved forecasting ability.
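As a minimal sketch of the core calculation behind such b value imaging, the snippet below estimates a local b value with the standard maximum-likelihood formula (Aki 1965, with Utsu's correction for binned magnitudes) and extrapolates the local Gutenberg-Richter fit to an annual probability of one or more M ≥ 6 events. The minimum sample size and function names are illustrative assumptions, not the paper's actual pipeline, which additionally involves distance-dependent sampling and a linearity check.

```python
import numpy as np

def b_value_ml(mags, mc, dm=0.1):
    """Maximum-likelihood b value (Aki 1965) with Utsu's correction
    for magnitudes binned in steps of dm; uses events with m >= mc."""
    m = np.asarray(mags)
    m = m[m >= mc]
    if m.size < 50:                       # illustrative minimum sample size
        return np.nan
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def annual_prob_m6(mags, mc, years, dm=0.1):
    """Poisson probability of >= 1 event with M >= 6 per year, obtained by
    extrapolating the local Gutenberg-Richter fit log10 N(>=m) = a - b*m."""
    b = b_value_ml(mags, mc, dm)
    n_obs = np.sum(np.asarray(mags) >= mc)
    a = np.log10(n_obs / years) + b * mc  # annual a value referenced to mc
    rate_m6 = 10.0 ** (a - b * 6.0)
    return 1.0 - np.exp(-rate_m6)
```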
Assessing the completeness magnitude Mc of earthquake catalogs is an essential prerequisite for any seismicity analysis. We employ a simple model to compute Mc in space based on the proximity to seismic stations in a network. We show that a relationship of the form $M_c^{pred}(d) = a d^{b} + c$, with d the distance to the kth nearest seismic station, fits the observations well, where k depends on the minimum number of stations required to trigger an event declaration in a catalog. We then propose a new Mc mapping approach, the Bayesian magnitude of completeness (BMC) method, based on a two-step procedure: (1) a spatial resolution optimization to minimize spatial heterogeneities and uncertainties in Mc estimates and (2) a Bayesian approach that merges prior information about Mc, based on the proximity to seismic stations, with locally observed values weighted by their respective uncertainties. Contrary to current Mc mapping procedures, the radius that defines which earthquakes to include in the local magnitude distribution is chosen according to an objective criterion, and there are no gaps in the spatial estimation of Mc. The method solely requires the coordinates of the seismic stations. Here, we investigate the Taiwan Central Weather Bureau (CWB) seismic network and earthquake catalog over the period 1994-2010.
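A minimal sketch of the two BMC ingredients described above, assuming projected (km-based) coordinates and hypothetical calibration coefficients a, b, c (the defaults below are placeholders, not values calibrated for the CWB network): the distance to the kth nearest station feeds the prior $M_c^{pred}(d) = a d^{b} + c$, and that prior is merged with the locally observed Mc by inverse-variance weighting.

```python
import numpy as np
from scipy.spatial import cKDTree

def kth_nearest_distance(grid_points, stations, k=4):
    """Distance from each grid point to its k-th nearest seismic station;
    coordinates are assumed already projected onto a km-based plane."""
    d, _ = cKDTree(stations).query(grid_points, k=k)
    return d[:, -1]

def predicted_mc(d_km, a=1.5, b=0.25, c=-1.0):
    """Prior Mc from station proximity: Mc_pred(d) = a * d**b + c.
    The coefficients here are hypothetical placeholders."""
    return a * np.asarray(d_km) ** b + c

def bayesian_merge(mc_prior, sig_prior, mc_obs, sig_obs):
    """Posterior Mc as the inverse-variance weighted mean of the prior
    and the locally observed value, as in the BMC two-step procedure."""
    w_p, w_o = 1.0 / sig_prior**2, 1.0 / sig_obs**2
    mc_post = (w_p * mc_prior + w_o * mc_obs) / (w_p + w_o)
    return mc_post, np.sqrt(1.0 / (w_p + w_o))
```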
The Epidemic‐Type Aftershock Sequence (ETAS) model is widely used to describe the occurrence of earthquakes in space and time, but there has been little discussion dedicated to the limits of, and influences on, its estimation. Among the possible influences we emphasize in this article the effect of the cutoff magnitude, Mcut, above which parameters are estimated; the finite length of earthquake catalogs; and missing data (e.g., during vigorous aftershock sequences). We analyze catalogs from Southern California and Italy and find that some parameters vary as a function of Mcut due to changing sample size (which affects, e.g., Omori's c constant) or an intrinsic dependence on Mcut (as Mcut increases, absolute productivity and background rate decrease). We also explore the influence of another form of truncation, the finite catalog length, which can bias estimators of the branching ratio. The bias also depends on Omori's p value: the true branching ratio is underestimated by 45% to 5% for 1.05 < p < 1.2. Finite sample size affects the variation of the branching ratio estimates. Moreover, we investigate the effect of missing aftershocks and find that the ETAS productivity parameters (α and K0) and Omori's c and p values are significantly changed for Mcut < 3.5. We further find that conventional estimation errors for these parameters, inferred from simulations that do not account for aftershock incompleteness, are underestimated by, on average, a factor of 8.
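For concreteness, here is a small sketch of how the branching ratio discussed above follows from the ETAS parameters, assuming the common parameterization with productivity K0·exp(α(m − Mcut)), an unnormalized Omori kernel 1/(t + c)^p, and a Gutenberg-Richter magnitude density with β = b·ln(10); conventions differ between ETAS implementations, so the formula below is tied to this particular choice.

```python
import numpy as np

def branching_ratio(K0, alpha, c, p, b=1.0):
    """Expected number of direct aftershocks per event (branching ratio n)
    for productivity K0*exp(alpha*(m - Mcut)), Omori kernel 1/(t + c)**p,
    and Gutenberg-Richter exponent beta = b*ln(10). The time and magnitude
    integrals only converge for p > 1 and beta > alpha."""
    beta = b * np.log(10.0)
    if p <= 1.0 or beta <= alpha:
        raise ValueError("n diverges for p <= 1 or beta <= alpha")
    time_integral = c ** (1.0 - p) / (p - 1.0)
    mag_integral = beta / (beta - alpha)
    return K0 * time_integral * mag_integral
```

Note that the time integral blows up as p approaches 1, which is one way to see why a finite catalog length biases branching ratio estimates most strongly at low p.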
The hypothesis that earthquake foreshocks have a prognostic value is challenged by simulations of the normal behaviour of seismicity, in which no distinction between foreshocks, mainshocks, and aftershocks can be made. In the former view, foreshocks are passive tracers of a tectonic preparatory process that yields the mainshock (i.e., loading by aseismic slip), while in the latter, a foreshock is any earthquake that triggers a larger one. Although both processes can coexist, earthquake prediction is plausible in the first case but virtually impossible in the second. Here I present a meta-analysis of 37 foreshock studies published between 1982 and 2013 to show that the justification of one hypothesis or the other depends on the selected magnitude interval between the minimum foreshock magnitude mmin and the mainshock magnitude M. From this literature survey, anomalous foreshocks are found to emerge when mmin < M − 3.0, suggesting that a deviation from the normal behaviour of seismicity may be observed only when microseismicity is considered. These results are to be taken with caution since the 37 studies do not all show the same level of reliability. They should nonetheless encourage new research in earthquake predictability with a focus on the potential role of microseismicity.
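As a toy illustration of the survey's selection criterion (not of its actual data), the snippet below classifies hypothetical study records by whether their magnitude interval reaches mmin < M − 3.0; all values are invented.

```python
# Hypothetical (study, mmin, M) records, invented purely for illustration.
studies = [("A", 2.0, 6.0), ("B", 4.5, 6.5), ("C", 1.5, 7.0)]

for label, mmin, M in studies:
    anomalous = mmin < M - 3.0   # survey criterion: microseismicity reached
    regime = "anomalous foreshocks expected" if anomalous else "normal triggering regime"
    print(f"study {label}: mmin={mmin}, M={M} -> {regime}")
```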
The rise in the frequency of anthropogenic earthquakes due to deep fluid injections is posing serious economic, societal, and legal challenges to many geo-energy and waste-disposal projects. Existing tools to assess such problems are still inherently heuristic and mostly based on expert elicitation (so-called clinical judgment). We propose, as a complementary approach, an adaptive traffic light system (ATLS) that is a function of a statistical model of induced seismicity. It offers an actuarial judgment of the risk, based on a mapping between earthquake magnitude and risk. Using data from six underground reservoir stimulation experiments, mostly from Enhanced Geothermal Systems, we illustrate how such a data-driven adaptive forecasting system could guarantee a risk-based safety target. The proposed model, which includes a linear relationship between seismicity rate and flow rate, as well as a normal diffusion process for the post-injection phase, is first confirmed to be representative of the data. Being integrable, the model yields a closed-form ATLS solution that is both transparent and robust. Although simulations verify that the safety target is consistently ensured when the ATLS is applied, the model from which the simulations are generated is validated on a limited dataset and therefore still requires further tests in additional fluid-injection environments.
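The closed-form ATLS logic can be sketched as follows, assuming a seismicity rate proportional to the flow rate, so that the expected number of events of magnitude ≥ m scales with cumulative injected volume V as N(≥ m) = 10^(afb − b·m)·V, with afb an activation-feedback parameter. The post-injection relaxation term is omitted, and all names and defaults are assumptions rather than the paper's calibrated values.

```python
import numpy as np

def expected_events(m, a_fb, b, volume_m3):
    """Expected number of induced events with magnitude >= m, for a rate
    proportional to flow rate: N(>=m) = 10**(a_fb - b*m) * V."""
    return 10.0 ** (a_fb - b * m) * volume_m3

def prob_exceedance(m, a_fb, b, volume_m3):
    """Poisson probability of at least one event with magnitude >= m."""
    return 1.0 - np.exp(-expected_events(m, a_fb, b, volume_m3))

def max_allowed_volume(m_safety, p_target, a_fb, b):
    """Closed-form traffic-light bound: the largest injected volume that
    keeps P(>= 1 event with m >= m_safety) below p_target."""
    return -np.log(1.0 - p_target) / 10.0 ** (a_fb - b * m_safety)
```

Inverting the Poisson exceedance probability for V is what makes such a traffic light adaptive: as a_fb and b are re-estimated during injection, the allowed remaining volume is updated in closed form.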
Dynamic risk processes, which involve interactions at the hazard and risk levels, have yet to be clearly understood and properly integrated into probabilistic risk assessment. While much attention has been given to this aspect lately, most studies remain limited to a small number of site-specific multi-risk scenarios. We present a generic probabilistic framework based on the sequential Monte Carlo method to implement coinciding events and triggered chains of events (using a variant of a Markov chain), as well as time-variant vulnerability and exposure. We consider generic perils based on analogies with real ones, natural and man-made. Each simulated time series corresponds to one risk scenario, and the analysis of multiple time series allows for the probabilistic assessment of losses and for the recognition of more or less probable risk paths, including extremes or low-probability-high-consequences chains of events. We find that extreme events can be captured by adding more knowledge on potential interaction processes in a brick-by-brick approach. We introduce the concept of a risk migration matrix to evaluate how multi-risk contributes to the emergence of extremes, and we show that risk migration (i.e., clustering of losses) and risk amplification (i.e., loss amplification at higher losses) are the two main causes of their occurrence.
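A minimal sketch of such a sequential Monte Carlo experiment, with invented generic perils, triggering probabilities, and losses (every number below is illustrative, and time-variant vulnerability and exposure are omitted): each simulated year draws coinciding events from Poisson base rates, propagates triggered chains through a Markov-style transition matrix, and accumulates losses, so that percentiles over many runs expose the low-probability-high-consequences scenarios.

```python
import numpy as np

rng = np.random.default_rng(42)

perils = ["quake", "landslide", "flood"]          # generic, invented perils
P_trigger = np.array([                            # one-step triggering matrix
    [0.00, 0.20, 0.05],                           # quake -> landslide, flood
    [0.00, 0.00, 0.10],                           # landslide -> flood
    [0.00, 0.00, 0.00],                           # flood triggers nothing here
])
base_rate = np.array([0.10, 0.05, 0.08])          # events per year (Poisson)
mean_loss = np.array([10.0, 2.0, 5.0])            # loss units per event

def simulate_year():
    """One simulated time series = one multi-risk scenario."""
    loss, queue = 0.0, []
    for i in range(len(perils)):                  # coinciding background events
        queue += [i] * rng.poisson(base_rate[i])
    while queue:                                  # triggered chains of events
        ev = queue.pop()
        loss += mean_loss[ev]
        for j in range(len(perils)):
            if rng.random() < P_trigger[ev, j]:
                queue.append(j)
    return loss

losses = np.array([simulate_year() for _ in range(100_000)])
print("mean annual loss:", losses.mean())
print("99.9th percentile (extreme scenarios):", np.percentile(losses, 99.9))
```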