The earthquake size distribution follows, in most instances, a power law, with the slope of this power law, the 'b value', commonly used to describe the relative occurrence of large and small events (a high b value indicates a larger proportion of small earthquakes, and vice versa). Statistically significant variations of b values have been measured in laboratory experiments, mines and various tectonic regimes, such as subducting slabs, near magma chambers, along fault zones and in aftershock zones. However, it has remained uncertain whether these differences are due to differing stress regimes, as it was questionable whether samples from small volumes (such as laboratory specimens, mines and the shallow crust) are representative of earthquakes in general. Given the lack of physical understanding of these differences, the observation that b values approach the constant 1 when large volumes are sampled was interpreted to indicate that b = 1 is a universal constant for earthquakes in general. Here we show that the b value varies systematically for different styles of faulting. We find that normal faulting events have the highest b values, thrust events the lowest, and strike-slip events intermediate values. Given that thrust faults tend to be under higher stress than normal faults, we infer that the b value acts as a stress meter that depends inversely on differential stress.
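The b value is the slope of the Gutenberg-Richter frequency-magnitude relation, log10 N(>=M) = a - bM. As a minimal illustration of how such values are commonly measured, the following Python sketch implements the standard Aki-Utsu maximum-likelihood estimator; this is general practice in the field rather than necessarily the exact procedure of the study above:

```python
import numpy as np

def estimate_b_value(magnitudes, mc, dm=0.1):
    """Aki-Utsu maximum-likelihood estimate of the Gutenberg-Richter b value.

    magnitudes : array of event magnitudes
    mc         : magnitude of completeness (smallest reliably recorded magnitude)
    dm         : magnitude bin width; the Utsu term dm/2 corrects for binned
                 magnitudes (use dm=0 for continuous magnitudes)
    """
    m = np.asarray(magnitudes)
    m = m[m >= mc]                       # use only the complete part of the catalog
    b = np.log10(np.e) / (m.mean() - (mc - dm / 2.0))
    b_err = b / np.sqrt(len(m))          # first-order standard error (Aki 1965)
    return b, b_err

# Example: synthetic catalog drawn from Gutenberg-Richter with b = 1 above mc = 1.0
# (magnitudes above mc are exponentially distributed with scale log10(e)/b)
rng = np.random.default_rng(42)
mags = 1.0 + rng.exponential(scale=np.log10(np.e), size=5000)
print(estimate_b_value(mags, mc=1.0, dm=0.0))    # ~ (1.0, 0.014)
```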
The Regional Earthquake Likelihood Models (RELM) project aims to produce and evaluate alternate models of earthquake potential (probability per unit volume, magnitude, and time) for California. Based on differing assumptions, these models are produced both to test the validity of their assumptions and to explore which models should be incorporated in seismic hazard and risk evaluation. Tests based on physical and geological criteria are useful, but here we focus on statistical methods using future earthquake data only. We envision two evaluations: a self-consistency test, and a comparison of every pair of models for relative consistency. Both tests are based on the likelihood ratio method, and both would be fully prospective (that is, the models are not adjusted to fit the test data). To be tested, each model must assign a probability or probability density to any possible event within a specified region of space, time, and magnitude. For our tests the models must use a common format: earthquake rates in specified "bins" with location, magnitude, time and, in some cases, focal-mechanism limits.

Introduction

Predicting the behavior of a system is the ultimate test of a model of that system. Seismology cannot predict earthquake occurrence; it should, however, seek the best possible models for forecasting earthquake occurrence as precisely as possible. This paper describes the rules of an experiment to statistically test earthquake forecasts. The primary purposes of the tests described below are to evaluate physical models for earthquakes, to assure that source models used in seismic hazard and risk studies are consistent with earthquake data, and to provide quantitative measures by which the models might be assigned weights in a future consensus model or be judged suitable for particular areas. To test models against one another, we require that forecasts based on them can be expressed numerically in a standard format. That format is the average rate of earthquake occurrence within pre-specified limits of hypocentral latitude, longitude, magnitude, and time. For some source models there will also be focal-mechanism limits.
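Because both tests rest on the likelihood ratio method, their core computation is simple to state: assuming earthquake counts in each forecast bin follow independent Poisson distributions (the standard assumption in RELM-style testing), the joint log-likelihood of an observed catalog under a binned forecast, and the likelihood ratio between two competing forecasts, can be computed as in the sketch below. Function names and structure are ours, for illustration only, not the project's reference implementation:

```python
import numpy as np
from scipy.special import gammaln

def poisson_log_likelihood(forecast_rates, observed_counts):
    """Joint log-likelihood of observed earthquake counts given a binned
    forecast, with an independent Poisson distribution per bin.

    forecast_rates  : expected number of events per bin (lambda_i); must be
                      strictly positive in every bin
    observed_counts : observed number of events per bin (omega_i)
    """
    lam = np.asarray(forecast_rates, dtype=float)
    obs = np.asarray(observed_counts, dtype=float)
    # sum over bins of: -lambda_i + omega_i * log(lambda_i) - log(omega_i!)
    return np.sum(-lam + obs * np.log(lam) - gammaln(obs + 1.0))

def log_likelihood_ratio(rates_a, rates_b, observed):
    """Relative consistency of two forecasts on the same observation:
    a positive value favors model A over model B."""
    return (poisson_log_likelihood(rates_a, observed)
            - poisson_log_likelihood(rates_b, observed))
```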
The statistics of large earthquakes commonly involve large uncertainties due to the lack of long-term, robust earthquake recordings. Small-scale seismic events are abundant and can be used to examine variations in fault structure and stress. We report on the connection between stress and microseismic event statistics for possibly the smallest earthquakes: those generated in the laboratory. We investigate variations in the seismic b value of acoustic emission events during stress buildup and release on laboratory-created fault zones. We show that b values mirror the periodic stress changes that occur during series of stick-slip events, and are correlated with stress over many seismic cycles. Moreover, the amount of b value increase associated with slip events indicates the extent of the corresponding stress drop. Consequently, b value variations can be used to approximate the stress state on a fault: a possible tool for the advancement of time-dependent seismic hazard assessment.
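A common way to follow such stress-dependent changes is to estimate the b value in a moving window over the event catalog. A minimal sketch, using the same Aki maximum-likelihood estimator shown earlier; the window length, step, and completeness threshold are illustrative choices, not the study's actual parameters:

```python
import numpy as np

def b_value_series(magnitudes, mc, window=200, step=50):
    """b value in overlapping windows of `window` consecutive events,
    advanced by `step` events, so changes through the seismic cycle
    (e.g., across stick-slip events) become visible as a time series.
    """
    m = np.asarray(magnitudes)
    b_vals = []
    for start in range(0, len(m) - window + 1, step):
        w = m[start:start + window]
        w = w[w >= mc]                                    # keep only complete events
        b_vals.append(np.log10(np.e) / (w.mean() - mc))   # Aki (1965) estimator
    return np.array(b_vals)
```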
We present a new method for estimating earthquake detection probabilities that avoids assumptions about earthquake occurrence, for example, the event-size distribution, and uses only empirical data: phase data, station information, and network-specific attenuation relations. First, we determine the detection probability for each station as a function of magnitude and hypocentral distance, using data from past earthquakes. Second, we combine the detection probabilities of stations using a basic combinatoric procedure to determine the probability that a hypothetical earthquake of a given size and location could escape detection. Finally, we synthesize detection-probability maps for earthquakes of particular magnitudes and probability-based completeness maps. Because the method relies only on detection probabilities of stations, it can also be used to evaluate hypothetical additions or deletions of stations, as well as scenario computations of a network crisis. The new approach has several advantages: completeness is analyzed as a function of network properties instead of earthquake samples, so no event-size distribution is assumed, and estimating completeness becomes possible in regions of sparse data where methods based on parametric earthquake catalogs fail. We find that the catalog of the Southern California Seismic Network (SCSN) has, for most of the region, a lower magnitude of completeness than that computed using traditional techniques, although in some places traditional techniques provide lower estimates. The network reliably records earthquakes smaller than magnitude 1.0 in some places, and magnitude 1.0 events throughout the seismically active regions. However, it does not achieve the desired completeness of M_L 1.8 everywhere in its authoritative region. Complete detection is achieved at M_L 3.4 in the entire authoritative region; thus, at the boundaries, earthquakes as large as M_L 3.3 might escape detection.
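The combinatoric step amounts to computing, from per-station detection probabilities (which in the method above are empirical functions of magnitude and hypocentral distance), the chance that enough stations record an event to locate it. The sketch below computes this exactly via the Poisson-binomial recursion, assuming stations detect independently; the four-station threshold is a typical minimum for locating an event and is assumed here for illustration:

```python
import numpy as np

def prob_detected(p_stations, min_stations=4):
    """Probability that a hypothetical earthquake is detected by the network,
    given each station's independent detection probability p_i. An event
    counts as detected if at least `min_stations` stations record it.
    """
    # dist[k] = probability that exactly k stations detect the event
    dist = np.zeros(len(p_stations) + 1)
    dist[0] = 1.0
    for p in p_stations:
        dist[1:] = dist[1:] * (1 - p) + dist[:-1] * p   # add one station at a time
        dist[0] *= (1 - p)
    return dist[min_stations:].sum()

# Example: five nearby stations with hypothetical per-station probabilities
print(prob_detected([0.9, 0.8, 0.7, 0.4, 0.2], min_stations=4))
```

The complement, 1 - prob_detected(...), is the probability that the event escapes detection, which is what the completeness maps are built from.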
Seismicity clusters within fault zones can be connected to the structure, geometric complexity and size of asperities, which perturb and intensify the stress field in their periphery. To gain further insight into fault-mechanical processes, we study stick-slip sequences in an analog, laboratory setting. Analysis of small-scale fracture processes expressed by acoustic emissions (AEs) provides the possibility to investigate how microseismicity is linked to fault heterogeneities and the occurrence of dynamic slip events. The present work connects X-ray computer tomography (CT) scans of faulted rock samples with spatial maps of b values (the slope of the frequency-magnitude distribution), seismic moments and event densities. Our experimental setup facilitates the creation of a series of stick-slips on one fault plane, allowing us to document how individual stick-slips change the characteristics of AE event populations in connection with the evolution of the fault structure. We found that geometric asperities identified in CT scan images were connected to regions of low b values, increased event densities and moment release over multiple stick-slip cycles. Our experiments underline several parallels between laboratory findings and studies of crustal seismicity, for example, that asperity regions in the laboratory and in the field are connected to spatial b value anomalies. These regions appear to play an important role in controlling the nucleation spots of dynamic slip events and crustal earthquakes.
Slowly compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the Earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior in their slip-size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied by the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tunability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress independence. A simple mean-field model of avalanches of slipping weak spots explains the agreement across scales: it predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
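In the notation standard for mean-field avalanche models (our notation for illustration; the paper's exact parameterization may differ), the stress-tuned distribution described above takes the form

$$ D(S) \;\propto\; S^{-\tau}\, e^{-S/S_{\max}}, \qquad S_{\max} \propto (\sigma_c - \sigma)^{-1/\sigma'}, $$

where $D(S)$ is the probability density of slip sizes $S$, $\sigma$ is the applied stress, and $\sigma_c$ is the failure stress; mean-field theory gives $\tau = 3/2$ and $1/\sigma' = 2$. Because $S_{\max}$ diverges only as $\sigma \to \sigma_c$, the cutoff is tuned by the applied stress rather than fixed, which is what distinguishes this "tuned critical" behavior from self-organized criticality.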
On 28 September 2004 there was an earthquake of magnitude 6.0 at Parkfield, California. Here we show that the size distribution of the micro-earthquakes recorded in the decades before the main shock allowed an accurate forecast of its eventual rupture area. Applying this approach to other well-monitored faults should improve earthquake hazard assessment in the future.