A team of earthquake geologists, seismologists, and engineering seismologists has collectively produced an update of the national probabilistic seismic hazard (PSH) model for New Zealand (National Seismic Hazard Model, or NSHM). The new NSHM supersedes the earlier NSHM published in 2002 and used as the hazard basis for the New Zealand Loadings Standard and numerous other end-user applications. The new NSHM incorporates a fault source model that has been updated with over 200 new onshore and offshore fault sources and uses new New Zealand-based and international scaling relationships for the parameterization of the faults. The distributed seismicity model has also been updated to include post-1997 seismicity data, a new seismicity regionalization, and an improved methodology for calculating the seismicity parameters. Probabilistic seismic hazard maps produced from the new NSHM show a pattern of hazard similar to the earlier model at the national scale, but there are some significant reductions and increases in hazard at the regional scale. The national-scale differences between the new and earlier NSHM appear smaller than those seen between much earlier national models, indicating that some degree of consistency has been achieved in the national-scale pattern of hazard estimates, at least for return periods of 475 years and greater.

Online Material: Table of fault source parameters for the 2010 national seismic hazard model.
Despite a lack of reliable deterministic earthquake precursors, seismologists have significant predictive information about earthquake activity from an increasingly accurate understanding of the clustering properties of earthquakes. In the past 15 years, time-dependent earthquake probabilities based on a generic short-term clustering model have been made publicly available in near-real time during major earthquake sequences. These forecasts describe the probability and number of events that are, on average, likely to occur following a mainshock of a given magnitude, but are not tailored to the particular sequence at hand and contain no information about the likely locations of the aftershocks. Our model builds upon the basic principles of this generic forecast model in two ways: it recasts the forecast in terms of the probability of strong ground shaking, and it combines an existing time-independent earthquake occurrence model based on fault data and historical earthquakes with increasingly complex models describing the local time-dependent earthquake clustering. The result is a time-dependent map showing the probability of strong shaking anywhere in California within the next 24 hours. The seismic hazard modelling approach we describe provides a better understanding of time-dependent earthquake hazard, and increases its usefulness for the public, emergency planners and the media.
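Generic short-term clustering forecasts of the kind described above are commonly built on a Reasenberg-Jones-type aftershock rate. The following is a minimal illustrative sketch, not the operational California model; the parameter values are generic placeholders rather than values fitted to any particular sequence:

```python
import math

def expected_aftershocks(mainshock_mag, min_mag, t_start, t_end,
                         a=-1.67, b=0.91, c=0.05, p=1.08):
    """Expected number of aftershocks with magnitude >= min_mag between
    t_start and t_end days after a mainshock of magnitude mainshock_mag,
    integrating the Reasenberg-Jones rate
        lambda(t, m) = 10**(a + b*(M - m)) * (t + c)**(-p).
    The default parameter values are illustrative generic values only.
    """
    productivity = 10 ** (a + b * (mainshock_mag - min_mag))
    if abs(p - 1.0) < 1e-12:
        # Special case p = 1: the time integral is logarithmic.
        time_integral = math.log((t_end + c) / (t_start + c))
    else:
        time_integral = ((t_end + c) ** (1 - p)
                         - (t_start + c) ** (1 - p)) / (1 - p)
    return productivity * time_integral

def prob_at_least_one(expected):
    """Poisson probability of one or more events given the expected count."""
    return 1.0 - math.exp(-expected)
```

Recasting such an event-count forecast as a shaking forecast, as the abstract describes, would additionally require a ground motion model; this sketch covers only the clustering component.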
The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced both to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but here we focus on statistical methods using future earthquake data only. We envision two evaluations: a self-consistency test, and a comparison of every pair of models for relative consistency. Both tests are based on the likelihood ratio method, and both would be fully prospective (that is, the models are not adjusted to fit the test data). To be tested, each model must assign a probability or probability density to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time, and in some cases focal mechanism limits.

Introduction

To predict the behavior of a system is the desired proof of a model of that system. Seismology cannot predict earthquake occurrence; however, it should seek the best possible models to forecast earthquake occurrence as precisely as possible. This paper describes the rules of an experiment to examine, or test, earthquake forecasts in a statistical way. The primary purposes of the tests described below are to evaluate physical models for earthquakes, to assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and to provide quantitative measures by which the models might be assigned weights in a future consensus model or be judged suitable for particular areas. To test models against one another, we require that forecasts based on them can be expressed numerically in a standard format.
That format is the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, magnitude, and time. For some source models, the rates are further subdivided by focal mechanism.
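Given forecasts in this bin format, the natural consistency measure underlying the likelihood ratio method is the joint Poisson likelihood of the observed bin counts. A minimal sketch, assuming independent Poisson-distributed counts in each bin and strictly positive forecast rates:

```python
import math

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint Poisson log-likelihood of observed earthquake counts given
    forecast rates, summed over bins:
        sum_i [ -lambda_i + n_i * ln(lambda_i) - ln(n_i!) ].
    Assumes every forecast rate lambda_i is strictly positive; a zero-rate
    bin containing an observed event would make the likelihood zero.
    """
    total = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        # math.lgamma(n + 1) == ln(n!), stable for large counts.
        total += -lam + n * math.log(lam) - math.lgamma(n + 1)
    return total
```

Comparing two models on the same observed catalog then reduces to comparing these log-likelihoods: a forecast whose rates better match the observed counts scores higher.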
The five-year experiment of the Regional Earthquake Likelihood Models (RELM) working group was designed to compare several prospective forecasts of earthquake rates in latitude-longitude-magnitude bins in and around California. This forecast format is being used as a blueprint for many other earthquake predictability experiments around the world, and therefore it is important to consider how to evaluate the performance of such forecasts. Two tests that are currently used are based on the likelihood of the observed distribution of earthquakes given a forecast; one test compares the binned space-rate-magnitude observation and forecast, and the other compares only the rate forecast and the number of observed earthquakes. In this article, we discuss a subtle flaw in the current test of rate forecasts, and we propose two new tests that isolate the spatial and magnitude component, respectively, of a space-rate-magnitude forecast. For illustration, we consider the RELM forecasts and the distribution of earthquakes observed during the first half of the ongoing RELM experiment. We show that a space-rate-magnitude forecast may appear to be consistent with the distribution of observed earthquakes despite the spatial forecast being inconsistent with the spatial distribution of observed earthquakes, and we suggest that these new tests should be used to provide increased detail in earthquake forecast evaluation. We also discuss the statistical power of each of the likelihood-based tests and the stability (with respect to earthquake catalog uncertainties) of results from the likelihood-based tests.
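The number test and the proposed spatial test can be sketched as follows. This is a simplified illustration of a two-sided Poisson number test and a rescaled spatial likelihood, not the official testing-center implementation:

```python
import math

def number_test(total_forecast_rate, n_observed):
    """Two-sided Poisson number (N-) test: returns the tail probabilities
    P(N >= n_observed) and P(N <= n_observed) under the forecast's total
    rate. The forecast is rejected if either tail is very small."""
    def poisson_cdf(k, lam):
        return sum(math.exp(-lam) * lam ** i / math.factorial(i)
                   for i in range(k + 1))
    delta1 = 1.0 - poisson_cdf(n_observed - 1, total_forecast_rate)
    delta2 = poisson_cdf(n_observed, total_forecast_rate)
    return delta1, delta2

def spatial_log_likelihood(forecast_rates, observed_counts):
    """Spatial (S-) test statistic sketch: rescale the forecast so its
    total matches the observed event count, removing the rate component,
    then compute the joint Poisson log-likelihood of the observed
    spatial distribution alone."""
    n_obs = sum(observed_counts)
    scale = n_obs / sum(forecast_rates)
    ll = 0.0
    for lam, n in zip(forecast_rates, observed_counts):
        lam_s = lam * scale
        ll += -lam_s + (n * math.log(lam_s) if n else 0.0) - math.lgamma(n + 1)
    return ll
```

Because the spatial statistic is computed after rescaling, a model can pass the joint space-rate-magnitude comparison yet still score poorly here, which is the decomposition the abstract argues for.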
Coseismic canyon flushing reveals how earthquakes drive canyon development and deep-sea sediment dispersal on active margins.
We perform a retrospective forecast experiment on the 1992 Landers sequence, comparing the predictive power of commonly used model frameworks for short-term earthquake forecasting. We compare a modified short-term earthquake probability (STEP) model, six realizations of the epidemic-type aftershock sequence (ETAS) model, and four models that combine Coulomb stress change calculations with rate-and-state theory to generate seismicity rates (CRS models). We perform the experiment under the premise of a controlled environment, with predefined conditions for the testing region and data for all modelers. We evaluate the forecasts with likelihood tests to analyze spatial consistency and the total number of forecasted events versus observed data. We find that (1) 9 of the 11 models perform better than a simple reference model, (2) ETAS models forecast the spatial evolution of seismicity best and perform best in the entire test suite, (3) the modified STEP model best matches the total number of events, (4) CRS models can only compete with empirical statistical models when stochasticity is introduced by considering uncertainties in the finite-fault source model, and (5) resolving Coulomb stress changes on 3-D optimally oriented planes is more adequate for forecasting purposes than using the specified receiver fault concept. We conclude that statistical models generally perform better than the tested physics-based models, and that updating parameter values using the occurrence of aftershocks generally improves predictive power in space and time, in particular for the purely statistical models.
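The temporal core of the ETAS framework used by several of these models is a conditional intensity that adds an Omori-Utsu contribution from every past event to a constant background rate. A minimal temporal-only sketch with illustrative, unfitted parameter values (the full models are spatiotemporal):

```python
def etas_rate(t, catalog, mu=0.1, K=0.02, alpha=1.0, c=0.01, p=1.2, m0=3.0):
    """Temporal ETAS conditional intensity at time t (days):
        lambda(t) = mu + sum over past events (t_i, M_i) of
                    K * 10**(alpha * (M_i - m0)) * (t - t_i + c)**(-p),
    where mu is the background rate and m0 the catalog cutoff magnitude.
    Parameter values are illustrative placeholders, not fitted values.
    catalog is a list of (event_time_days, magnitude) pairs.
    """
    rate = mu
    for t_i, m_i in catalog:
        if t_i < t:
            # Each past event triggers its own Omori-Utsu decaying cluster,
            # with productivity growing exponentially in magnitude.
            rate += K * 10 ** (alpha * (m_i - m0)) * (t - t_i + c) ** (-p)
    return rate
```

The "realizations" compared in the experiment correspond, among other things, to different choices of these parameters; updating them as aftershocks occur is the parameter-value updating the conclusion refers to.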
Seismic hazard modeling is a multidisciplinary science that aims to forecast earthquake occurrence and its resultant ground shaking. Such models consist of a probabilistic framework that quantifies uncertainty across a complex system; typically, this includes at least two model components developed from Earth science: seismic source and ground motion models. Although there is no scientific prescription for the forecast length, the most common probabilistic seismic hazard analyses consider forecasting windows of 30 to 50 years, which are typically an engineering demand for building code purposes. These types of analyses are the topic of this review paper. Although the core methods and assumptions of seismic hazard modeling have largely remained unchanged for more than 50 years, we review the most recent initiatives, which face the difficult task of both meeting the increasingly sophisticated demands of society and keeping pace with advances in scientific understanding. A need for more accurate and spatially precise hazard forecasting must be balanced with increased quantification of uncertainty and new challenges such as moving from time-independent hazard to forecasts that are time dependent and specific to the time period of interest. Meeting these challenges requires the development of science-driven models that integrate all available information, the adoption of proper mathematical frameworks to quantify the different types of uncertainties in the hazard model, and the development of a proper testing phase of the model to quantify its consistency and skill. We review the state of the art of national seismic hazard modeling and how the most innovative approaches try to address future challenges.
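At its core, the time-independent hazard computation reviewed here sums exceedance rates over sources and converts the total annual rate into a probability for the forecast window under a Poisson assumption. A minimal sketch; the per-event exceedance probabilities below are supplied directly as numbers and stand in for a ground motion model evaluated at a fixed shaking threshold:

```python
import math

def annual_exceedance_rate(sources):
    """Total annual rate of ground motion exceeding a fixed threshold,
    summed over sources given as (annual_event_rate,
    prob_exceed_given_event) pairs. In a real hazard model the second
    number would come from a ground motion model integrated over
    magnitude and distance."""
    return sum(rate * p_exc for rate, p_exc in sources)

def exceedance_probability(annual_rate, years):
    """Poisson probability of at least one exceedance in `years` years:
    1 - exp(-rate * years)."""
    return 1.0 - math.exp(-annual_rate * years)
```

As a consistency check on the conventions mentioned above, an annual rate of 1/475 gives roughly a 10% probability of exceedance in a 50-year window, which is the familiar 475-year return period used in building codes.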