The 2014 Working Group on California Earthquake Probabilities (WGCEP14) presents the time-independent component of the Uniform California Earthquake Rupture Forecast, Version 3 (UCERF3), which provides authoritative estimates of the magnitude, location, and time-averaged frequency of potentially damaging earthquakes in California. The primary achievements have been to relax fault segmentation and include multifault ruptures, both limitations of UCERF2. The rates of all earthquakes are solved for simultaneously and from a broader range of data, using a system-level inversion that is both conceptually simple and extensible. The inverse problem is large and underdetermined, so a range of models is sampled using an efficient simulated annealing algorithm. The approach is more derivative than prescriptive (e.g., magnitude-frequency distributions are no longer assumed), so new analysis tools were developed for exploring solutions. Epistemic uncertainties were also accounted for using 1440 alternative logic-tree branches, necessitating access to supercomputers. The most influential uncertainties include alternative deformation models (fault slip rates), a new smoothed seismicity algorithm, alternative values for the total rate of Mw ≥ 5 events, and different scaling relationships, virtually all of which are new. As a notable first, three deformation models are based on kinematically consistent inversions of geodetic and geologic data, also providing slip-rate constraints on faults previously excluded due to lack of geologic data. The grand inversion constitutes a system-level framework for testing hypotheses and balancing the influence of different experts. For example, we demonstrate serious challenges with the Gutenberg-Richter hypothesis for individual faults. UCERF3 is still an approximation of the system, however, and the range of models is limited (e.g., constrained to stay close to UCERF2).
Nevertheless, UCERF3 removes the apparent UCERF2 overprediction of M 6.5-7 earthquake rates and also includes types of multifault ruptures seen in nature. Although UCERF3 fits the data better than UCERF2 overall, there may be areas that warrant further site-specific investigation. Supporting products may be of general interest, and we list key assumptions and avenues for future model improvements.
The majority of earthquakes are aftershocks, yet aftershock physics is not well understood. Many studies suggest that static stress changes trigger aftershocks, but recent work suggests that shaking (dynamic stresses) may also play a role. Here we measure the decay of aftershocks as a function of distance from magnitude 2-6 mainshocks in order to clarify the aftershock triggering process. We find that for short times after the mainshock, when low background seismicity rates allow for good aftershock detection, the decay is well fitted by a single inverse power law over distances of 0.2-50 km. The consistency of the trend indicates that the same triggering mechanism is working over the entire range. As static stress changes at the more distant aftershocks are negligible, this suggests that dynamic stresses may be triggering all of these aftershocks. We infer that the observed aftershock density is consistent with the probability of triggering aftershocks being nearly proportional to seismic wave amplitude. The data are not fitted well by models that combine static stress change with the evolution of frictionally locked faults.
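The single inverse power-law fit over 0.2-50 km described above can be sketched as a log-log linear regression of aftershock density against mainshock distance. The data and fitting routine below are illustrative stand-ins (synthetic densities with a commonly cited decay exponent of about 1.35), not the study's actual catalog or estimation procedure:

```python
import math

def fit_power_law(distances_km, densities):
    """Least-squares fit of density ~ c * r**(-n) in log-log space.

    Returns (c, n). A minimal sketch; the study's actual fitting
    method may differ (e.g., maximum likelihood on event counts).
    """
    xs = [math.log10(r) for r in distances_km]
    ys = [math.log10(d) for d in densities]
    n_pts = len(xs)
    mx = sum(xs) / n_pts
    my = sum(ys) / n_pts
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    intercept = my - slope * mx
    return 10 ** intercept, -slope  # amplitude c, decay exponent n

# Synthetic example: density falling off as r**-1.35 over 0.2-50 km
rs = [0.2, 0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 50.0]
ds = [3.0 * r ** -1.35 for r in rs]
c, n = fit_power_law(rs, ds)
```

Because the synthetic densities follow the power law exactly, the regression recovers the exponent and amplitude used to generate them.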
There is strong observational evidence that the 1999 Mw 7.1 Hector Mine earthquake in the Mojave Desert, California, was triggered by the nearby 1992 Mw 7.3 Landers earthquake. Many authors have proposed that the Landers earthquake directly stressed the Hector Mine fault. Our model of the Landers aftershock sequence, however, suggests there is an 85% chance that the Hector Mine hypocenter was actually triggered by a chain of smaller earthquakes that was initiated by the Landers main shock. We perform our model simulations using the Monte Carlo method based on the Gutenberg-Richter relationship, Omori's law, Båth's law, and assumptions that all earthquakes, including aftershocks, are capable of producing aftershocks and that aftershocks produce their own aftershocks at the same rate that other earthquakes do. In general, our simulations show that if it has been more than several days since an M ≥ 7 main shock, most new aftershocks will be the result of secondary triggering. These secondary aftershocks are not physically constrained to occur where the original main shock increased stress. This may explain the significant fraction of aftershocks that have been found to occur in main shock stress shadows in static Coulomb stress triggering studies.
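The cascade simulation described above (Gutenberg-Richter magnitudes, Omori-law delays, and every event triggering its own aftershocks) can be sketched as follows. All parameter values, the magnitude cap, and the productivity form are illustrative assumptions, not the study's calibrated model:

```python
import math
import random

random.seed(0)

B = 1.0          # Gutenberg-Richter b-value (illustrative)
ALPHA = 1.0      # productivity exponent (illustrative)
M_MIN = 3.0      # smallest simulated magnitude
K = 0.02         # productivity constant (illustrative)
C, P = 0.05, 1.1 # Omori parameters, times in days (illustrative)
T_MAX = 100.0    # length of the simulated sequence, days

def gr_magnitude():
    # Inverse-CDF sample from the GR distribution above M_MIN,
    # capped to keep this illustrative cascade finite
    u = 1.0 - random.random()  # in (0, 1]
    return min(M_MIN - math.log10(u) / B, 6.5)

def omori_time():
    # Inverse-CDF sample of a delay from an Omori rate ~ (t + c)**-p
    u = random.random()
    return C * ((1.0 - u) ** (1.0 / (1.0 - P)) - 1.0)

def cascade(mag, t0, generation, events):
    # Each event triggers K * 10**(ALPHA * (mag - M_MIN)) aftershocks,
    # and those aftershocks trigger their own in turn
    n_children = int(K * 10 ** (ALPHA * (mag - M_MIN)))
    for _ in range(n_children):
        t = t0 + omori_time()
        if t < T_MAX:
            m = gr_magnitude()
            events.append((t, generation))
            cascade(m, t, generation + 1, events)

events = []
cascade(7.0, 0.0, 1, events)  # M 7 main shock at t = 0
late = [g for t, g in events if t > 7.0]  # events after one week
frac_secondary = sum(1 for g in late if g > 1) / max(len(late), 1)
```

With these toy parameters, `frac_secondary` gives the fraction of week-plus aftershocks that are secondary (generation 2 or later); the paper's quantitative conclusion rests on its own calibrated rates, not these placeholder values.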
We demonstrate that the statistics of earthquake data in the global Centroid Moment Tensor (CMT) and National Earthquake Information Center (NEIC) catalogs and local California Council of the National Seismic System (CNSS) catalog are consistent with the idea that a single physical triggering mechanism is responsible for the occurrence of aftershocks, foreshocks, and multiplets. Specifically, we test the hypothesis that tectonic earthquakes usually show clustering only as a result of an initial earthquake triggering subsequent ones and that the magnitude of each triggered earthquake is entirely independent of the magnitude of the triggering earthquake. Therefore a certain percentage of the time, as determined by the Gutenberg-Richter magnitude-frequency relationship, an earthquake should by chance be larger than or comparable in size to the earthquake that triggered it. This hypothesis predicts that the number of times foreshocks or multiplets are observed should be a fixed fraction of the number of aftershock observations. We find that this is indeed the case in the global CMT and NEIC catalogs; the average ratios between foreshock, aftershock, and multiplet rates are consistent with what would be predicted by the Gutenberg-Richter relationship with b = 1. We give special attention to the Solomon Islands, where it has been claimed that unique fault structures lead to unusually high numbers of multiplets. We use Monte Carlo trials to demonstrate that the Solomon Islands multiplets may be explained simply by a high regional aftershock rate and earthquake density.
We also verify our foreshock results from the more complete recordings of small earthquakes available in the California catalog and find that foreshock rates for a wide range of foreshock and mainshock magnitudes can be projected from aftershock rates using the Gutenberg-Richter relationship with b = 1 and the relationship that the number of earthquakes triggered varies with triggering earthquake magnitude M as c10^(αM), where c is a productivity constant and α is equal to 1. Finally, we test an alternative model that proposes that foreshocks do not trigger their mainshocks but are instead triggered by the mainshock nucleation phase. In this model, the nucleation phase varies with mainshock magnitude, so we would expect mainshock magnitude to be correlated with the magnitude, number, or spatial extent of the foreshocks. We find no evidence for any of these correlations.
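The fixed foreshock-to-aftershock ratio implied above follows directly from combining the productivity law c10^(αM) with the Gutenberg-Richter fraction of triggered events at or above a target magnitude. The sketch below uses made-up values of c and the minimum magnitude; only the b = α = 1 structure comes from the text:

```python
def n_triggered_above(m_trigger, m_target, c=0.05, b=1.0, alpha=1.0, m_min=2.0):
    """Expected number of events >= m_target triggered by an m_trigger event.

    Productivity c * 10**(alpha * (m_trigger - m_min)) gives all triggered
    events above m_min; the GR factor 10**(-b * (m_target - m_min)) keeps
    the fraction at or above m_target. c and m_min are illustrative.
    """
    total = c * 10 ** (alpha * (m_trigger - m_min))
    return total * 10 ** (-b * (m_target - m_min))

# A "foreshock" is a triggered event as large as its trigger. With
# b = alpha = 1 the two exponentials cancel, so the expected count is
# the same at every trigger magnitude -- a fixed fraction of aftershocks.
r1 = n_triggered_above(3.0, 3.0)
r2 = n_triggered_above(5.0, 5.0)
```

The cancellation (10^(αM) growth exactly offset by the 10^(−bM) GR fraction) is what makes the predicted foreshock rate scale-independent when b = α = 1.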
The 2007 Working Group on California Earthquake Probabilities (WGCEP, 2007) presents the Uniform California Earthquake Rupture Forecast, Version 2 (UCERF 2). This model comprises a time-independent (Poisson-process) earthquake rate model, developed jointly with the National Seismic Hazard Mapping Program, and a time-dependent earthquake-probability model, based on recent earthquake rates and stress-renewal statistics conditioned on the date of last event. The models were developed from updated statewide earthquake catalogs and fault deformation databases using a uniform methodology across all regions and implemented in the modular, extensible Open Seismic Hazard Analysis framework. The rate model satisfies integrating measures of deformation across the plate-boundary zone and is consistent with historical seismicity data. An overprediction of earthquake rates found at intermediate magnitudes (6.5 ≤ M ≤ 7.0) in previous models has been reduced to within the 95% confidence bounds of the historical earthquake catalog. A logic tree with 480 branches represents the epistemic uncertainties of the full time-dependent model. The mean UCERF 2 time-dependent probability of one or more M ≥ 6.7 earthquakes in the California region during the next 30 yr is 99.7%; this probability decreases to 46% for M ≥ 7.5 and to 4.5% for M ≥ 8.0. These probabilities do not include the Cascadia subduction zone, largely north of California, for which the estimated 30 yr, M ≥ 8.0 time-dependent probability is 10%. The M ≥ 6.7 probabilities on major strike-slip faults are consistent with the WGCEP (2003) study in the San Francisco Bay Area and the WGCEP (1995) study in southern California, except for significantly lower estimates along the San Jacinto and Elsinore faults, owing to provisions for larger multisegment ruptures. Important model limitations are discussed.
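For the time-independent (Poisson-process) component mentioned above, a rate converts to a window probability via P = 1 − exp(−λT). The annual rate below is a made-up illustration chosen so the 30-yr probability lands near 99.7%, not a value taken from UCERF 2:

```python
import math

def poisson_prob(rate_per_year, years=30.0):
    """Probability of one or more events in a window under a Poisson model."""
    return 1.0 - math.exp(-rate_per_year * years)

# Illustrative only: a 30-yr probability near 99.7% corresponds to a
# regional rate of about -ln(1 - 0.997) / 30, roughly 0.19 events/yr.
p = poisson_prob(0.19, 30.0)
```

The quoted UCERF 2 probabilities themselves come from the time-dependent model; this identity applies only to its Poisson-process rate component.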
On June 2, 1994, a large subduction thrust earthquake (Ms 7.2) produced a devastating tsunami on the island of Java. This earthquake had a number of unusual characteristics. It was the first recorded large thrust earthquake on the Java subduction zone. All of the aftershock mechanisms exhibit normal faulting; no mechanisms are similar to the main shock. Also, the large tsunami and the relatively low energy radiated by the main shock have led to suggestions that this earthquake might have involved slow, shallow rupture near the trench, similar to the 1992 Nicaragua earthquake. We first relocate the main shock and the aftershocks. We then invert long-period surface waves and broadband body waves to determine the depth and spatial distribution of the main shock slip. A dip of 12°, hypocenter depth of 16 km, and moment of 3.5×10^20 N m (Mw 7.6) give the best fit to the combined seismic data and are consistent with the plate interface geometry. The source spectrum obtained from both body and surface waves has a single corner frequency (between 10 and 20 mHz), implying a stress drop of ~0.3 MPa. The main energy release was preceded by a small subevent lasting ~12 s. The main slip occurred at ~20 km depth, downdip and to the NW of the hypocenter. This area of slip is collocated with a prominent high in the bathymetry that has been identified as a subducting seamount. We interpret the Java earthquake as slip over this subducting seamount, which is a locked patch in an otherwise decoupled subduction zone. We find no evidence for slow, shallow rupture. No thrust aftershocks are expected if the entire locked zone slipped during the main shock, but extension of the subducting plate behind the seamount would promote normal faulting as observed. It seems probable that such a source model could also explain the size and timing of the observed tsunami.
The 2014 Working Group on California Earthquake Probabilities (WGCEP 2014) presents time-dependent earthquake probabilities for the third Uniform California Earthquake Rupture Forecast (UCERF3). Building on the UCERF3 time-independent model published previously, renewal models are utilized to represent elastic-rebound-implied probabilities. A new methodology has been developed that solves applicability issues in the previous approach for unsegmented models. The new methodology also supports magnitude-dependent aperiodicity and accounts for the historic open interval on faults that lack a date-of-last-event constraint. Epistemic uncertainties are represented with a logic tree, producing 5760 different forecasts. Results for a variety of evaluation metrics are presented, including logic-tree sensitivity analyses and comparisons to the previous model (UCERF2). For 30 yr M ≥ 6.7 probabilities, the most significant changes from UCERF2 are a threefold increase on the Calaveras fault and a threefold decrease on the San Jacinto fault. Such changes are due mostly to differences in the time-independent models (e.g., fault-slip rates), with relaxation of segmentation and inclusion of multifault ruptures being particularly influential. In fact, some UCERF2 faults were simply too long to produce M 6.7-size events given the segmentation assumptions in that study. Probability model differences are also influential, with the implied gains (relative to a Poisson model) being generally higher in UCERF3. Accounting for the historic open interval is one reason. Another is an effective 27% increase in the total elastic-rebound-model weight. The exact factors influencing differences between UCERF2 and UCERF3, as well as the relative importance of logic-tree branches, vary throughout the region and depend on the evaluation metric of interest. For example, M ≥ 6.7 probabilities may not be a good proxy for other hazard or loss measures.
This sensitivity, coupled with the approximate nature of the model and known limitations, means the applicability of UCERF3 should be evaluated on a case-by-case basis.
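The renewal-model gain relative to a Poisson model described above can be sketched with a Brownian Passage Time (inverse Gaussian) distribution, conditioning on the open interval since the last event. The recurrence mean, aperiodicity, and open interval below are made-up numbers for illustration, not UCERF3 fault parameters:

```python
import math

def _phi(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bpt_cdf(t, mean, aperiodicity):
    """CDF of the Brownian Passage Time (inverse Gaussian) distribution."""
    lam = mean / aperiodicity ** 2  # inverse-Gaussian shape parameter
    a = math.sqrt(lam / t)
    return _phi(a * (t / mean - 1.0)) + \
        math.exp(2.0 * lam / mean) * _phi(-a * (t / mean + 1.0))

def conditional_prob(open_interval, window, mean, aperiodicity):
    """P(event in the next `window` yr | none in the last `open_interval` yr)."""
    f0 = bpt_cdf(open_interval, mean, aperiodicity)
    f1 = bpt_cdf(open_interval + window, mean, aperiodicity)
    return (f1 - f0) / (1.0 - f0)

# Illustrative numbers only: 150-yr mean recurrence, aperiodicity 0.5,
# 140 yr since the last event, 30-yr forecast window.
p_renewal = conditional_prob(140.0, 30.0, 150.0, 0.5)
p_poisson = 1.0 - math.exp(-30.0 / 150.0)
gain = p_renewal / p_poisson  # elastic-rebound gain relative to Poisson
```

Late in the recurrence cycle the conditional renewal probability exceeds the Poisson probability, giving a gain above 1, which is the sense in which the abstract's "implied gains" are quoted.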