“…The reference volume is defined as a discretized volume of the Earth's lithosphere with increments in (longitude, latitude, depth), . For the convolution process, the geodetic coordinates are transformed into earth-centered rectilinear coordinates 17 . Each observed EQ's moment magnitude is assumed to reside at , following the “point source” concept.…”
Section: Results
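The geodetic-to-rectilinear conversion mentioned above can be sketched with the standard geodetic-to-ECEF formulas. This is an illustrative assumption: the excerpt does not name the datum or the exact transform, so WGS-84 constants are used here, and an event's depth below the surface is treated as a negative ellipsoidal height.

```python
import math

# WGS-84 ellipsoid constants (assumed; the excerpt does not state the datum).
A = 6378137.0            # semi-major axis [m]
F = 1.0 / 298.257223563  # flattening
E2 = F * (2.0 - F)       # first eccentricity squared

def geodetic_to_ecef(lon_deg, lat_deg, depth_m):
    """Convert (longitude, latitude, depth) into earth-centered
    rectilinear (ECEF) coordinates in meters."""
    lon = math.radians(lon_deg)
    lat = math.radians(lat_deg)
    h = -depth_m  # depth below the surface -> negative ellipsoidal height
    # Prime-vertical radius of curvature at this latitude.
    n = A / math.sqrt(1.0 - E2 * math.sin(lat) ** 2)
    x = (n + h) * math.cos(lat) * math.cos(lon)
    y = (n + h) * math.cos(lat) * math.sin(lon)
    z = (n * (1.0 - E2) + h) * math.sin(lat)
    return x, y, z
```

Each cataloged event's point source can then be placed at a single (x, y, z) before the convolution step.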
“…The second data transformation converts the spatio-temporal IIs into pseudo physics quantities. Amongst many physics quantities, the best-so-far set of pseudo physics quantities is identified as {released energy, power, vorticity, Laplacian} 17 . To remain purely data-driven, no pre-defined statistical or empirical laws are used.…”
Section: Results
“…Any mathematical form can be used as the LF, and a simple yet general exponential form of the LF works well 17 , i.e., for the pseudo released energy = where . The pseudo “vorticity” is generated by where corresponds to the pseudo “power”, and the pseudo “Laplacian” is calculated as where denotes the spatial gradient with respect to the geodetic coordinate system ( ).…”
Section: Results
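Because the equations are elided in the excerpt, a minimal sketch of an exponential influence function spreading each event's energy proxy over the reference grid might look as follows. The magnitude-to-energy scaling, the length scale, and the function name are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pseudo_released_energy(grid_xyz, eq_xyz, eq_mags, length_scale=50e3):
    """Spread each earthquake's energy proxy over the reference grid cells
    with an exponential influence function (a sketch; the paper's exact
    LF and constants are not given in the excerpt).

    grid_xyz : (G, 3) cell-center coordinates [m]
    eq_xyz   : (N, 3) event point-source coordinates [m]
    eq_mags  : (N,)   moment magnitudes
    """
    # Scalar energy proxy from moment magnitude (Gutenberg-Richter style
    # scaling, used here only as an illustrative stand-in).
    energy = 10.0 ** (1.5 * eq_mags)
    # Pairwise distances between grid cells and events.
    d = np.linalg.norm(grid_xyz[:, None, :] - eq_xyz[None, :, :], axis=-1)
    # Exponential influence: nearby cells receive more of each event's energy.
    lf = np.exp(-d / length_scale)
    return lf @ energy  # (G,) pseudo released energy per cell
```

Pseudo power could then be approximated as the time difference of this field between catalog snapshots.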
“…As shown in Ref. 17 , amongst many pseudo physics quantities and their combinations, ML selected four quantities: the released energy, the power, the first vorticity term, and the first Laplacian term, ( ), at least for the western U.S. region. Again, this selection is purely data-driven, since ML simply seeks the combination that outperforms all other cases, without any prejudice.…”
Section: Results
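The purely data-driven selection described above amounts to searching over feature combinations for the best-performing one. Below is a minimal sketch, assuming a caller-supplied `score_fn` that stands in for the rule-learning performance metric, which the excerpt does not specify.

```python
from itertools import combinations

def best_feature_combination(features, score_fn, max_size=4):
    """Exhaustively score every subset of candidate pseudo-physics
    features and return the best one (a sketch of the data-driven
    selection; `score_fn` is a hypothetical stand-in for the ML
    performance metric)."""
    best_subset, best_score = None, float("-inf")
    for k in range(1, max_size + 1):
        for subset in combinations(features, k):
            s = score_fn(subset)
            if s > best_score:
                best_subset, best_score = subset, s
    return best_subset, best_score
```

With a small candidate pool this brute-force search is tractable; for larger pools a greedy or randomized search would replace the exhaustive loop.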
“…This study seeks to add a new dimension to this daunting question. The author’s prior study 17 shows that, after multi-layered data transformations, individual large EQs appear to have unique signatures that can be represented by new high-dimensional features. In particular, the observed EQ catalog data are transformed via spatio-temporal convolution, and then further transformed into a number of pseudo physics quantities (i.e., energy, power, vorticity, and Laplacian).…”
Predicting the locations, magnitudes, and timing of individual large earthquakes (EQs) remains out of reach. The author's prior study shows that individual large EQs have unique signatures obtained from multi-layered data transformations. Via spatio-temporal convolutions, decades-long EQ catalog data are transformed into pseudo-physics quantities (e.g., energy, power, vorticity, and Laplacian), which are turned into surface-like information via Gauss curvatures. Using these new features, a rule-learning machine learning (ML) approach unravels promising prediction rules. This paper suggests a further data transformation via the Fourier transformation (FT). Results show that the new FT-based feature can help sharpen the prediction rules. Feasibility tests on large EQs ($M \ge 6.5$) over the past 40 years in the western U.S. show promise, shedding light on the data-driven prediction of individual large EQs. The handshake among ML methods, Fourier, and Gauss may help answer the long-standing enigma of seismogenesis.
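The Gauss-curvature and FT steps named in the abstract can be sketched for a gridded scalar field by treating the field as a Monge-patch surface z = f(x, y). The discretization, the choice of the dominant non-DC amplitude as the FT-based scalar feature, and the function names are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def gauss_curvature(field, spacing=1.0):
    """Gaussian curvature of a scalar field z = f(x, y) sampled on a
    regular grid, treating the field as a Monge-patch surface (a sketch
    of the 'surface-like information' step)."""
    fy, fx = np.gradient(field, spacing)      # first derivatives
    fyy, fyx = np.gradient(fy, spacing)       # second derivatives
    fxy, fxx = np.gradient(fx, spacing)
    # K = (f_xx f_yy - f_xy^2) / (1 + f_x^2 + f_y^2)^2
    return (fxx * fyy - fxy * fyx) / (1.0 + fx**2 + fy**2) ** 2

def ft_feature(field):
    """Dominant non-DC Fourier amplitude of a 2-D field (one plausible
    FT-based scalar feature; the paper's exact FT feature is not
    specified in the excerpt)."""
    spec = np.abs(np.fft.fft2(field))
    spec.flat[0] = 0.0  # drop the DC component
    return spec.max()
```

For a paraboloid f = (x² + y²)/2 the curvature at the origin is 1, which the finite-difference sketch reproduces at the grid center.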
Nature finds a way to leverage nanotextures to achieve desired functions. Recent advances in nanotechnologies endow nanotextures with fascinating multi-functionalities by modulating each nanopixel's height. But nanoscale height control is a daunting task involving chemical and/or physical processes. As a facile, cost-effective, and potentially scalable remedy, nanoscale capillary force lithography (CFL) has received notable attention. The key enabler is optical pre-modification of the photopolymer's characteristics via ultraviolet (UV) exposure. Still, the underlying physics of nanoscale CFL is not well understood, and unexplained phenomena such as the "forbidden gap" in the nano capillary rise (an unreachable height) abound. Due to the lack of large data, the small length scales, and the absence of first principles, direct adoption of machine learning or analytical approaches has been difficult. This paper proposes a hybrid intelligence approach in which artificial and human intelligence work coherently together to unravel hidden rules from small data. Our results show promising performance in identifying transparent, physics-retained rules of air diffusivity, dynamic viscosity, and surface tension, which collectively appear to explain the forbidden gap in nanoscale CFL. This paper promotes synergistic collaborations of humans and AI for advancing nanotechnology and beyond.
The scientific community has been looking for novel approaches to develop nanostructures inspired by nature. However, due to the complicated processes involved, controlling the height of these nanostructures is challenging. Nanoscale capillary force lithography (CFL) is one approach, using a photopolymer whose properties are altered by exposing it to ultraviolet radiation. Nonetheless, the working mechanism of CFL is not fully understood, owing to limited data and the absence of first principles. One of these obscure behaviors is the sudden jump phenomenon: an abrupt change in the height of the photopolymer depending on the UV exposure time and the height of the nano-grating (based on experimental data). This paper uses known physical principles alongside artificial intelligence to uncover the unknown physical principles responsible for the sudden jump phenomenon. The results show promise in identifying air diffusivity, dynamic viscosity, surface tension, and electric potential as the previously unknown physical principles that collectively explain the sudden jump phenomenon.