We present the first-generation global tomographic model constructed using adjoint tomography, an iterative full-waveform inversion technique. Synthetic seismograms were calculated using GPU-accelerated spectral-element simulations of global seismic wave propagation, accommodating effects due to 3-D anelastic crust and mantle structure, topography and bathymetry, the ocean load, ellipticity, rotation, and self-gravitation. Fréchet derivatives were calculated in 3-D anelastic models based on an adjoint-state method. The simulations were performed on the Cray XK7 'Titan', a computer with 18 688 GPU accelerators housed at Oak Ridge National Laboratory. The transversely isotropic global model is the result of 15 tomographic iterations, which systematically reduced differences between observed and simulated three-component seismograms. Our starting model combined 3-D mantle model S362ANI with 3-D crustal model Crust2.0. We simultaneously inverted for structure in the crust and mantle, thereby eliminating the need for widely used 'crustal corrections'. We used data from 253 earthquakes in the magnitude range 5.8 ≤ Mw ≤ 7.0. We started the inversion by combining ∼30 s body-wave data with ∼60 s surface-wave data. The shortest period of the surface waves was gradually decreased, and in the last three iterations we combined ∼17 s body waves with ∼45 s surface waves. After the 12th iteration we used 180-min-long seismograms and assimilated minor- and major-arc body and surface waves. The 15th-iteration model features enhancements of well-known slabs, an enhanced image of the Samoa/Tahiti plume, and various other plumes and hotspots, such as Caroline, Galapagos, Yellowstone and Erebus. Furthermore, we see clear improvements in slab resolution along the Hellenic and Japan Arcs, as well as subduction east of the Scotia Plate, which is absent in the starting model. Point-spread function tests demonstrate that we are approaching the resolution of continental-scale studies in some areas, for example underneath Yellowstone. This is a consequence of our multiscale smoothing strategy, in which we define the smoothing operator as a function of the approximate Hessian kernel, thereby smoothing gradients less wherever we have good ray coverage, such as underneath North America.
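The exact form of the smoothing operator is not given in this summary. Purely as an illustration of a Hessian-informed, spatially varying smoothing of this kind, one could let the local smoothing length shrink where the approximate Hessian, and hence data coverage, is large; the symbols sigma_min, sigma_max and H-tilde below are illustrative placeholders, not quantities defined in the study:

\sigma(\mathbf{x}) \;=\; \sigma_{\min} + \left(\sigma_{\max} - \sigma_{\min}\right)\left(1 - \frac{\tilde{H}(\mathbf{x})}{\max_{\mathbf{x}'}\tilde{H}(\mathbf{x}')}\right),

where \tilde{H}(\mathbf{x}) denotes the (diagonal of the) approximate Hessian at location \mathbf{x} and \sigma(\mathbf{x}) is the local smoothing length applied to the gradient. In well-covered regions \tilde{H} is large, \sigma approaches \sigma_{\min}, and the gradient is smoothed less.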
One of the most critical challenges for high-performance computing (HPC) scientific visualization is execution on massively threaded processors. Of the many fundamental changes we are seeing in HPC systems, one of the most profound is a reliance on new processor types optimized for execution bandwidth over latency hiding. Our current production scientific visualization software is not designed for these new architectures. To address this issue, the VTK-m framework serves as a container for algorithms, provides flexible data representation, and simplifies the design of visualization algorithms on new and future computer architectures.
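As a rough illustration of the programming model VTK-m provides (a minimal sketch, not the framework's canonical example: header paths and filter namespaces differ between VTK-m releases, and the field name "pointvar" and the grid dimensions are assumptions), a device-portable filter such as contouring can be driven entirely from host-side control code:

#include <vector>
#include <vtkm/cont/DataSet.h>
#include <vtkm/cont/DataSetBuilderUniform.h>
#include <vtkm/filter/Contour.h>   // in newer releases: <vtkm/filter/contour/Contour.h>

int main()
{
  // Build a small uniform grid; VTK-m's flexible data model exposes uniform,
  // rectilinear and explicit cell sets through the same DataSet container.
  vtkm::cont::DataSet input =
      vtkm::cont::DataSetBuilderUniform::Create(vtkm::Id3(64, 64, 64));

  // Attach a point-centred scalar field (all zeros here, so the contour below
  // is simply empty; real data would come from a simulation or a file reader).
  std::vector<vtkm::Float32> values(64 * 64 * 64, 0.0f);
  input.AddPointField("pointvar", values);

  // Configure and run the contour filter. The same host code dispatches to
  // GPU, multi-core, or serial back ends depending on how VTK-m was built.
  vtkm::filter::Contour contour;
  contour.SetActiveField("pointvar");
  contour.SetIsoValue(0.5);
  vtkm::cont::DataSet isosurface = contour.Execute(input);

  (void)isosurface;
  return 0;
}

The point of the design is that the algorithm author writes no device-specific code; the framework maps the filter onto whatever massively threaded processor is available.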
Important and motivating questions drive research into processing massive data sets: Will it be possible to use the simpler pure-parallelism technique to process tomorrow's data? Can pure parallelism scale sufficiently to process massive data sets? To answer these questions, the researchers performed a series of experiments, originally published in IEEE Computer Graphics and Applications [2] and forming the basis of this report, that studied the scalability of pure parallelism in visualization software on massive data sets. These experiments utilized multiple visualization algorithms and were run on multiple architectures. Two types of experiments were performed. The first examined performance at massive scale: 16,000 or more cores and one trillion or more cells. The second studied whether the approach can keep the time to complete an operation fixed when the data size and the amount of resources are both doubled, also known as weak scalability. At the time of their original publication, these experiments represented the largest data set sizes ever published in the visualization literature. Further, their findings continue to inform the understanding of today's dominant processing paradigm (pure parallelism) on tomorrow's data, in the form of scaling characteristics and bottlenecks at high levels of concurrency and with very large data sets.
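For concreteness, weak-scaling behaviour is usually summarized by comparing the baseline runtime against the runtime after both the problem size and the resources have been scaled up; perfect weak scaling keeps the runtime constant, giving an efficiency of 1. The sketch below uses illustrative numbers and names that are not taken from the study:

#include <cstdio>

// Weak-scaling efficiency: the problem size grows in proportion to the core
// count, so an ideal run keeps the runtime constant (efficiency of 1.0).
double weakScalingEfficiency(double baselineSeconds, double scaledSeconds)
{
  return baselineSeconds / scaledSeconds;
}

int main()
{
  const double t1 = 120.0;  // runtime on N cores with M cells (illustrative)
  const double t2 = 135.0;  // runtime on 2N cores with 2M cells (illustrative)
  std::printf("weak-scaling efficiency: %.2f\n", weakScalingEfficiency(t1, t2));
  return 0;
}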