A sensitivity study is undertaken to assess the utility of different onshore digital elevation models (DEMs) for simulating the extent of tsunami inundation, using case studies from two locations in Indonesia. We compare airborne IFSAR, ASTER, and SRTM against high-resolution LiDAR and stereo-camera data in locations with different coastal morphologies. Tsunami inundation extents modeled with airborne IFSAR DEMs are comparable with those modeled with the higher-resolution datasets and are also consistent with historical run-up data, where available. Large vertical errors and poor resolution of the coastline in the ASTER and SRTM elevation datasets cause the modeled inundation extent to be much smaller than that obtained with the other datasets and with observations; ASTER and SRTM should therefore not be used to underpin tsunami inundation models. A model mesh resolution of 25 m was sufficient for estimating the inundated area when using elevation data with high vertical accuracy in the case studies presented here. Differences in modeled inundation between digital terrain models (DTMs) and digital surface models (DSMs) are greater than differences between the LiDAR and IFSAR data types. Models using a DTM may overestimate inundation, while those using a DSM may underestimate it, when a constant Manning's roughness value is used. We recommend using DTMs for modeling tsunami inundation extent, with further work needed to resolve the scale at which surface roughness should be parameterized.
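The last point can be illustrated with Manning's equation: a DTM strips buildings and vegetation, so pairing it with a single low roughness coefficient n yields higher flow velocities (and wider inundation) than a rougher, DSM-style setting. A minimal sketch, with hypothetical n, R, and S values chosen for illustration only:

```python
# Illustrative sketch (not from the study): Manning's equation
# v = (1/n) * R^(2/3) * S^(1/2), showing how the constant roughness
# coefficient n changes modeled flow velocity. The n values below
# are hypothetical examples, not calibrated values.

def manning_velocity(n, hydraulic_radius_m, slope):
    """Mean flow velocity in m/s from Manning's equation (SI units)."""
    return (1.0 / n) * hydraulic_radius_m ** (2.0 / 3.0) * slope ** 0.5

# A bare-earth (DTM) run with a low n ignores drag from buildings and
# vegetation, so velocities -- and hence inundation extent -- come out larger.
v_smooth = manning_velocity(0.025, 2.0, 0.001)  # open terrain
v_rough = manning_velocity(0.060, 2.0, 0.001)   # built-up area
```

With identical depth and slope, the smooth-terrain velocity is more than double the rough-terrain one, which is consistent with DTM-based models bounding inundation from above and DSM-based models from below.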
We have previously developed a tsunami source inversion method based on “Time Reverse Imaging” and demonstrated that it is computationally very efficient and can reproduce the tsunami source model with good accuracy using tsunami data from the 2011 Tohoku earthquake. In this paper, we apply this approach to the 2009 Samoa earthquake tsunami, which was triggered by an earthquake doublet involving both normal and thrust faulting. Our results show that the method is quite capable of recovering a source model associated with normal and thrust faulting. We found that the inversion result is highly sensitive to certain stations, which must be removed from the inversion, and we applied an adjoint sensitivity method to find the optimal set of stations for estimating a realistic source model. The inversion result improves significantly once the optimal set of stations is used. In addition, from the reconstructed source model we estimated the slip distribution of the fault, from which we successfully determined the dipping orientation of the fault plane for the normal-fault earthquake. Our results suggest that the fault plane dips toward the northeast.
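The sensitivity of an inversion to individual stations can be seen even in a toy one-parameter least-squares problem. The sketch below (a hypothetical illustration, not the authors' adjoint sensitivity code) shows how a single inconsistent station biases the recovered source parameter, which is why selecting an optimal station subset matters:

```python
# Toy illustration: for a one-parameter linear model d_i = g_i * m,
# the least-squares estimate is m = sum(g_i * d_i) / sum(g_i^2).
# One badly modeled station is enough to bias the estimate.

def invert(stations):
    """stations: list of (green_fn_value g_i, observed datum d_i) pairs."""
    num = sum(g * d for g, d in stations)
    den = sum(g * g for g, _ in stations)
    return num / den

good = [(1.0, 2.0), (2.0, 4.0), (0.5, 1.0)]  # all consistent with m = 2
bad = good + [(1.0, 10.0)]                   # one inconsistent station

m_all = invert(bad)      # biased by the bad station (3.28 here)
m_subset = invert(good)  # recovers m = 2 exactly
```

In a real inversion the "bad" station is not obvious in advance; an adjoint sensitivity analysis quantifies each station's influence on the solution so the subset can be chosen systematically rather than by trial and error.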
This paper considers the importance of model parameterization, including dispersion, source kinematics, and source discretization, in tsunami source inversion. We implement single and multiple time window methods for dispersive and nondispersive wave propagation to estimate source models for the tsunami generated by the 2011 Tohoku‐Oki earthquake. Our source model is described by sea surface displacement instead of fault slip, since sea surface displacement accounts for various tsunami generation mechanisms in addition to fault slip. The results show that tsunami source models can strongly depend on such model choices, particularly when high‐quality, open‐ocean tsunami waveform data are available. We carry out several synthetic inversion tests to validate the method and to assess the impact of parameterization choices, including dispersion and variable rupture velocity, on the inversion results. Although each of these effects has been considered separately in previous studies, we show that it is important to consider them together in order to obtain more meaningful inversion results. Our results suggest that the discretization of the source, the use of dispersive waves, and accounting for source kinematics are all important factors in tsunami source inversion of large events such as the Tohoku‐Oki earthquake, particularly when an extensive set of high‐quality tsunami waveform recordings is available. For the Tohoku event, a dispersive model with variable rupture velocity results in a profound improvement in waveform fits that justifies the higher source complexity and provides a more realistic source model.
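The multiple time window idea can be sketched in a few lines: each source cell contributes several time-shifted copies of its Green's function, so the design matrix gains one column per (cell, window) pair and the inversion can recover a variable rupture onset. A simplified, hypothetical setup (uniform lag between windows, zero-padded shifts):

```python
# Sketch of a multiple-time-window design matrix (simplified illustration,
# not the authors' implementation). Rows are time samples; columns are
# (source cell, time window) unknowns.

def shift(series, lag):
    """Delay a discretized Green's function by `lag` samples, zero-padded."""
    return [0.0] * lag + series[: len(series) - lag]

def build_design_matrix(greens, n_windows, window_lag):
    """greens: list of per-cell Green's functions (equal-length lists)."""
    columns = []
    for g in greens:
        for w in range(n_windows):
            columns.append(shift(g, w * window_lag))
    # transpose so rows correspond to time samples
    return [list(row) for row in zip(*columns)]

g1 = [1.0, 0.5, 0.25, 0.0, 0.0]  # toy Green's function for one cell
G = build_design_matrix([g1], n_windows=2, window_lag=1)
```

With a single time window the same cell would contribute only one column, forcing all of its moment release into one fixed onset time; adding windows is what lets the data constrain source kinematics.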
This paper describes a new method for forecasting far‐field tsunamis by combining aspects of least squares tsunami source inversion (LSQ) with time reverse imaging (TRI). This method has the same source representation as LSQ but uses TRI to estimate initial sea surface displacement. We apply this method to the 2011 Japan tsunami, and the results show that it produces tsunami waveforms in excellent agreement with observed waveforms at both near‐ and far‐field stations not used in the source estimation. The spatial distribution of cumulative sea surface displacement agrees well with models obtained in more sophisticated inversions, although source kinematics are not well resolved. The method has potential for application in tsunami warning systems, as it is computationally efficient and can estimate the initial source model by applying precomputed Green's functions, providing more accurate and realistic tsunami predictions.
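At its simplest, the hybrid can be pictured with linear algebra: back-propagating the records resembles applying the adjoint of the forward operator, and precomputed Green's functions then forecast waveforms at other stations. A minimal sketch under that simplifying assumption, with a toy G and d (hypothetical values, not real waveform data):

```python
# Minimal sketch: approximate time reverse imaging as the adjoint
# operation m = G^T d (back-propagating the records), then forecast
# waveforms elsewhere with the precomputed Green's functions, d_pred = G m.
# G and d are toy values for illustration only.

def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

G = [[1.0, 0.0],
     [0.5, 0.5],
     [0.0, 1.0]]        # rows: stations, columns: unit source cells
d = [2.0, 2.0, 2.0]     # recorded amplitudes

m_tri = matvec(transpose(G), d)  # adjoint / back-propagation estimate
d_pred = matvec(G, m_tri)        # fast forecast using precomputed G
```

Because both steps are matrix-vector products against precomputed operators, the forecast cost is trivial compared with rerunning a propagation model, which is the property that makes the approach attractive for warning systems.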
This paper studies the initial sea surface displacement and its uncertainty after an earthquake based on tsunami waveforms. The spatial distribution is inferred with a Bayesian approach that provides probabilities that are interpreted as uncertainties of the displaced sea surface. The parameterization is nonlinear and treats apparent rupture velocity as unknown but assumes rise time to be fixed at 30 s. Importantly, the spatial complexity of the source is constrained by observations using a transdimensional algorithm based on a wavelet decomposition of the displacement field. In this approach, the number of wavelet coefficients is an unknown random variable that is also estimated as part of the inversion. The resulting parameterization is parsimonious in that it can adapt to the spatially varying source complexity while being consistent with the information in the tsunami waveforms. In this way, the resolution of displacement varies across the source region with more parameters introduced for parts of the source that are resolved well by the data and/or have significant complexity. The noise level (standard deviation) at each gauge is initially treated as unknown to estimate data covariance matrices. These matrices are applied in subsequent inversion and include unknown scaling which eliminates the requirement to assume station weights and accounts for temporally correlated waveform noise. The method is applied to waveforms recorded during the 2011 Japan Tsunami and results show high resolution (low uncertainty) in most parts of the source region and a previously unreported level of source detail. In particular, the main peak of the source is elongated trench parallel and shows a well‐resolved bimodal finger‐like feature in the northern source region that closely follows the trench.
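The parsimony of a wavelet parameterization is easy to demonstrate: where the displacement field is smooth, most detail coefficients are negligible and can be omitted, so few parameters are spent there. A pure-Python Haar-transform sketch (an illustration of the representation only, not the authors' transdimensional sampler):

```python
# Illustrative one-level Haar wavelet transform of a 1-D displacement
# profile. Smooth regions produce near-zero detail coefficients, so the
# profile can be represented by fewer parameters -- the idea behind a
# parsimonious, complexity-adaptive source parameterization.

def haar_forward(x):
    """One Haar level: returns (approximations, details); len(x) even."""
    approx = [(x[2 * i] + x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) / 2.0 for i in range(len(x) // 2)]
    return approx, detail

def haar_inverse(approx, detail):
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

profile = [0.0, 0.0, 4.0, 4.0]  # locally smooth displacement step
a, d = haar_forward(profile)    # details are all zero here
# 2 approximation coefficients reconstruct the 4-sample profile exactly
```

In the transdimensional setting, the number of retained coefficients is itself an unknown sampled from the data, so resolution concentrates where the waveforms actually demand complexity.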
This article was published in the International Journal for Numerical Methods in Fluids [© 2011 John Wiley & Sons, Ltd.] and the definitive version is available at http://dx.doi.org/10.1002/fld.2545. The journal's website is at http://onlinelibrary.wiley.com/doi/10.1002/fld.2545/abstract. An observation sensitivity (OS) method to identify targeted observations is implemented in the context of four-dimensional variational (4D-Var) data assimilation. This methodology is compared with the well-established adjoint sensitivity (AS) method using a nonlinear Burgers equation as a test model. Automatic differentiation software is used to implement the first-order adjoint model (ADM) to calculate the gradient of the cost function required in the 4D-Var minimization algorithm and in the AS computations, and the second-order ADM to obtain information on the Hessian matrix of the 4D-Var cost that is necessary in the OS computations. Numerical results indicate that the observation targeting is particularly successful in reducing the forecast error for moderate Reynolds numbers. The potential benefits of the OS targeting approach over the AS are investigated. The effect of random perturbations on the performance of these adaptive observation techniques is also analyzed.
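The standard sanity check on an adjoint-derived gradient, in 4D-Var codes as elsewhere, is agreement with a finite difference. A scalar toy stand-in for the Burgers test model (hypothetical propagator value, chosen for illustration):

```python
# Toy sketch: for a linear forward model M(u0) = A*u0 and cost
# J(u0) = 0.5 * (M(u0) - y)^2, the adjoint gradient is
# dJ/du0 = A * (A*u0 - y). We validate it against a central
# finite difference, as adjoint models are routinely checked.

A = 0.8  # toy "model propagator" (hypothetical value)
Y = 1.0  # observation

def cost(u0):
    return 0.5 * (A * u0 - Y) ** 2

def adjoint_gradient(u0):
    return A * (A * u0 - Y)

def fd_gradient(u0, eps=1e-6):
    return (cost(u0 + eps) - cost(u0 - eps)) / (2.0 * eps)
```

In the OS setting the same machinery goes one derivative further: a second-order adjoint supplies Hessian-vector products of the 4D-Var cost, which the first-order check above does not exercise.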