Abstract. Data assimilation, commonly used in weather forecasting, means combining a mathematical forecast of a target dynamical system with simultaneous measurements from that system in an optimal fashion. We demonstrate the benefits obtainable from data assimilation with a dam break flume simulation in which a shallow-water equation model is complemented with wave meter measurements. Data assimilation is conducted with a Variational Ensemble Kalman Filter (VEnKF) algorithm. The resulting dynamical analysis of the flume displays turbulent behavior, features prominent hydraulic jumps and avoids many numerical artifacts present in a pure simulation.
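The optimal combination of forecast and measurement described above is the analysis step of an ensemble Kalman filter. As a minimal sketch (not the paper's VEnKF implementation, and with all variable names hypothetical), a stochastic EnKF update combining a forecast ensemble with wave-meter-style point observations could look like this:

```python
import numpy as np

rng = np.random.default_rng(0)

def enkf_update(ensemble, obs, H, obs_std):
    """Stochastic EnKF analysis step: combine a forecast ensemble
    (state_dim x n_ens) with observations obs (obs_dim,) taken
    through the linear observation operator H (obs_dim x state_dim)."""
    n_ens = ensemble.shape[1]
    x_mean = ensemble.mean(axis=1, keepdims=True)
    A = ensemble - x_mean                       # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)    # observed anomalies
    R = obs_std**2 * np.eye(len(obs))           # observation error covariance
    P_hh = HA @ HA.T / (n_ens - 1) + R
    P_xh = A @ HA.T / (n_ens - 1)
    K = P_xh @ np.linalg.inv(P_hh)              # ensemble Kalman gain
    # Perturb observations so the analysis ensemble has correct spread.
    perturbed = obs[:, None] + obs_std * rng.standard_normal((len(obs), n_ens))
    return ensemble + K @ (perturbed - HX)
```

With a tight observation error, the analysis ensemble mean at an observed component is pulled from the forecast value toward the measurement, which is the "optimal combination" the abstract refers to.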
Summary. Decoupled implementation of data assimilation methods has rarely been studied. The variational ensemble Kalman filter has been implemented such that it need not communicate directly with the model, but only through input and output files. In this work, an open multi-functional three-dimensional (3D) model, the coupled hydrodynamical-ecological model for regional and shelf seas (COHERENS), has been used. Assimilation of total suspended matter (TSM) is carried out in the 154 km² Lake Säkylän Pyhäjärvi. Observations of TSM were derived from high-resolution satellite images of turbidity and chlorophyll-a. To demonstrate the method, we used a low-resolution model grid of 1 km, with the model run from May 16 to September 14. We ran COHERENS with both two-dimensional (2D) and 3D mode time steps; COHERENS can switch between 2D and 3D modes within a single run for computational efficiency. We noticed little difference between these runs, because the satellite images depict the derived TSM for the surface layer only; additional 3D data might change this conclusion and improve the results. We also found that, in this study, a large ensemble size does not guarantee higher performance. The successful implementation of the decoupled variational ensemble Kalman filter opens the way for other methods and evolution models to enjoy the same benefits without the substantial, and often difficult, effort of merging the model and assimilation codes.
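The decoupling idea above can be illustrated with a toy handshake in which filter and model exchange state only through files, never through direct function calls. This is a minimal sketch under assumed conventions: the file names, the JSON format, the decay dynamics, and the nudging analysis are all hypothetical stand-ins, not COHERENS or the paper's actual interface.

```python
import json
import os
import tempfile

def model_step(in_path, out_path):
    """Stand-in for launching the forecast model as a separate executable:
    read the state file, advance the state, write the forecast to disk.
    The filter never calls into the model's code directly."""
    with open(in_path) as f:
        state = json.load(f)
    state["tsm"] = [0.9 * x for x in state["tsm"]]   # toy decay dynamics
    with open(out_path, "w") as f:
        json.dump(state, f)

def assimilate(state, obs, weight=0.5):
    """Toy analysis step: nudge the forecast toward the observations.
    A real decoupled filter would perform its full update here, still
    touching the model only through the files written by model_step."""
    state["tsm"] = [(1 - weight) * x + weight * y
                    for x, y in zip(state["tsm"], obs)]
    return state
```

Because every exchange crosses a file boundary, either side can be replaced independently, which is the benefit the summary attributes to the decoupled design.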
Abstract. The Variational Ensemble Kalman Filter (VEnKF), a recent data assimilation method that combines a variational solution of the Bayesian estimation problem with an ensemble of forecasts, is demonstrated in two-dimensional geophysical flows using a Quasi-Geostrophic (QG) model and a shallow water model. In a synthetic experiment, a two-layer QG model with model bias is solved on a cylindrical 40 x 20 domain. The performance of VEnKF on the QG model with increasing ensemble size is compared with the classical Extended Kalman Filter (EKF). It is shown that although convergence can be achieved with just 20 ensemble members, increasing the number of members yields a better estimate that approaches the one produced by EKF. In the second test case, a 2-D shallow water model is applied to a real dam-break experiment. The VEnKF algorithm was used to assimilate observations from a modified laboratory dam-break experiment with a two-dimensional setup of sensors at the downstream end: the wave meters are placed parallel to the direction of the flow alongside the flume walls to capture both cross flow and stream flow. In both test cases, VEnKF was able to predict genuinely two-dimensional flow patterns when the sensors had a two-dimensional geometry, and it was stable against model bias in the first test case. In the second test case, the experiments are complemented with an empirical study of the impact of observation interpolation on the stability of the VEnKF filter. In this study, a novel Courant–Friedrichs–Lewy type filter stability condition is observed that relates ensemble variance to the time interpolation distance between observations. The results of the two experiments show that VEnKF is a good candidate for data assimilation problems and can be implemented in higher-dimensional nonlinear models.
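The "variational" part of VEnKF refers to obtaining the analysis by minimizing a cost function whose background covariance is estimated from the ensemble. A minimal sketch of that step, assuming a linear observation operator and using a generic quasi-Newton minimizer in place of the paper's actual optimizer (the ridge term and all names are illustrative assumptions):

```python
import numpy as np
from scipy.optimize import minimize

def venkf_analysis(ensemble, obs, H, obs_std, ridge=1e-6):
    """Variational analysis step: minimize the 3D-Var style cost
    J(x) = 0.5 (x-xb)' B^-1 (x-xb) + 0.5 (y-Hx)' R^-1 (y-Hx),
    with the background covariance B estimated from the ensemble."""
    xb = ensemble.mean(axis=1)                       # background state
    A = ensemble - xb[:, None]                       # anomalies
    B = A @ A.T / (ensemble.shape[1] - 1) + ridge * np.eye(len(xb))
    Binv = np.linalg.inv(B)
    Rinv = np.eye(len(obs)) / obs_std**2

    def cost(x):
        db, do = x - xb, obs - H @ x
        return 0.5 * db @ Binv @ db + 0.5 * do @ Rinv @ do

    def grad(x):
        return Binv @ (x - xb) - H.T @ Rinv @ (obs - H @ x)

    return minimize(cost, xb, jac=grad, method="L-BFGS-B").x
```

For this quadratic cost the minimizer recovers the standard Kalman analysis, so the variational and closed-form updates agree; the variational form is what scales to large states and nonlinear operators.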
Through-the-wall radar imaging (TWRI) has attracted a great deal of attention in several sensitive applications, including rescue missions and military operations. Notwithstanding its broad range of applications, TWRI suffers from path-loss: distant targets experience more attenuation of signal power than those closer to the transceiver. This challenge may lead to missed targets that carry information necessary for analysis and informed decision making. Responding to the challenge, we have developed a signal model with an effective path-loss compensator incorporating a free-space exponent. Furthermore, multipath exploitation and compressive sensing techniques were employed to develop an effective algorithm for isolating residual clutter that may corrupt real targets. The proposed signal model integrates contributions from the front wall, multipath returns, and path-loss. Compared with the state-of-the-art model under the same experimental conditions, simulation results show that the proposed model improves the signal-to-clutter ratio, relative clutter peak, and probability of detection by 13.1%, 17.4% and 33.6%, respectively, suggesting that our model can represent the scene more accurately.
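The core of path-loss compensation with a free-space exponent can be sketched as a range-dependent gain: free-space power spreading attenuates returns roughly as 1/r^n, so multiplying each range bin by (r/r0)^n restores distant targets relative to near ones. This is a generic illustration under that assumption, not the authors' signal model; the function name, the reference range r0, and the default exponent are hypothetical.

```python
import numpy as np

def compensate_path_loss(image, ranges, exponent=2.0, r0=1.0):
    """Apply a range-dependent gain (r/r0)**exponent to each range bin
    of a range-by-crossrange image, undoing assumed free-space power
    spreading so distant targets are not suppressed relative to near ones."""
    gain = (np.asarray(ranges, dtype=float) / r0) ** exponent
    return image * gain[:, None]
```

For example, two equally reflective targets at ranges 1 m and 4 m whose raw returns differ by a factor of 16 (exponent 2) come out with equal amplitude after compensation, which is exactly the equalization the abstract's compensator aims for.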
Compressed sensing allows recovery of image signals using a portion of the data – a technique that has drastically revolutionized the field of through-the-wall radar imaging (TWRI). This recovery can be accomplished through nonlinear methods, including convex programming and greedy iterative algorithms. However, such (nonlinear) methods increase the computational cost at the sensing and reconstruction stages, thus limiting the application of TWRI in delicate practical tasks (e.g. military operations and rescue missions) that demand fast response times. Motivated by this limitation, the current work introduces the use of a numerical optimization algorithm, the Limited Memory Broyden–Fletcher–Goldfarb–Shanno (LBFGS) method, to the TWRI framework to lower image reconstruction time. LBFGS, a well-known quasi-Newton algorithm, has traditionally been applied to solve large-scale optimization problems. Despite its potential, this algorithm has not been extensively applied in TWRI. Therefore, guided by LBFGS and using the Euclidean norm, we employed the regularized least-squares method to solve the cost function of the TWRI problem. Simulation results show that our method reduces the computational time by 87% relative to the classical method, even with an increased number of targets or a large data volume. Moreover, the results show that the proposed method remains robust when applied to noisy environments.
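The regularized least-squares formulation solved with LBFGS can be sketched generically: minimize 0.5||Ax − y||² + 0.5λ||x||² over the image vector x, supplying the analytic gradient to a limited-memory quasi-Newton routine. This is a minimal illustration using SciPy's L-BFGS-B implementation, not the authors' code; A, y, and λ here are a hypothetical measurement matrix, measurement vector, and regularization weight.

```python
import numpy as np
from scipy.optimize import minimize

def reconstruct(A, y, lam=0.1):
    """Recover x from measurements y = A x by minimizing the
    Tikhonov-regularized least-squares cost with a limited-memory
    quasi-Newton (L-BFGS) solver, using the exact gradient."""
    def cost(x):
        r = A @ x - y
        return 0.5 * r @ r + 0.5 * lam * x @ x

    def grad(x):
        return A.T @ (A @ x - y) + lam * x

    x0 = np.zeros(A.shape[1])
    return minimize(cost, x0, jac=grad, method="L-BFGS-B").x
```

Because the cost is quadratic, the LBFGS solution matches the closed-form ridge solution (AᵀA + λI)⁻¹Aᵀy; the advantage of LBFGS is that it never forms or inverts that matrix, which is what keeps reconstruction time low for large scenes.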