Data assimilation algorithms combine a numerical model with observations in a quantitative way. For an optimal combination, either variational minimization algorithms or ensemble-based estimation methods are applied. The computations of a data assimilation application are usually far more costly than a pure model integration. To cope with the large computational cost, good scalability of the assimilation program is required. Ensemble-based methods have been shown to exhibit particularly good scalability due to the natural parallelism inherent in the integration of an ensemble of model states. However, the scalability of the estimation method, commonly based on the Kalman filter, is also important. This study discusses implementation strategies for ensemble-based filter algorithms. Particularly efficient is a strong coupling of the model and the assimilation algorithm into a single executable program. The coupling can be performed with minimal changes to the numerical model itself and leads to a model with a data assimilation extension. The scalability of the data assimilation system is examined using the example of an implementation of an ocean circulation model with the Parallel Data Assimilation Framework (PDAF), into which synthetic sea surface height data are assimilated. (Corresponding author: Lars Nerger, lars.nerger@awi.de. Manuscript accepted for publication in Computers & Geosciences, March 30, 2012.)
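The single-executable coupling described above can be illustrated with a minimal sketch: the model's time-stepping loop is kept intact, and an ensemble analysis call is inserted at chosen intervals. Everything here is illustrative, not PDAF's actual interface; the toy dynamics in `model_step` and the generic stochastic ensemble Kalman analysis are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def model_step(x, dt=1.0):
    """Stand-in for one time step of the numerical model (hypothetical dynamics)."""
    return 0.95 * x + 0.1 * np.sin(x) * dt

def enkf_analysis(X, y, H, R):
    """Stochastic ensemble Kalman analysis: update ensemble X (n x m) with obs y."""
    n, m = X.shape
    A = X - X.mean(axis=1, keepdims=True)          # state anomalies
    HX = H @ X
    HA = HX - HX.mean(axis=1, keepdims=True)       # observation-space anomalies
    S = HA @ HA.T / (m - 1) + R                    # innovation covariance
    K = (A @ HA.T / (m - 1)) @ np.linalg.inv(S)    # ensemble Kalman gain
    # perturbed observations give a consistent analysis spread
    Y = y[:, None] + rng.multivariate_normal(np.zeros(len(y)), R, m).T
    return X + K @ (Y - HX)

n_state, n_ens, n_obs = 8, 20, 3
X = rng.standard_normal((n_state, n_ens))          # initial ensemble
H = np.eye(n_obs, n_state)                         # observe the first 3 variables
R = 0.1 * np.eye(n_obs)

# "online" coupling: the time loop stays the model's; the analysis is called in place
for step in range(10):
    X = model_step(X)                              # each member integrates forward
    if (step + 1) % 5 == 0:                        # analysis phase every 5 steps
        y = rng.standard_normal(n_obs)             # synthetic observations
        X = enkf_analysis(X, y, H, R)
```

Because the ensemble members share the loop, the forecast phase parallelizes trivially over members, which is the source of the scalability noted above.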
Particle filters contain the promise of fully nonlinear data assimilation. They have been applied in numerous science areas, including the geosciences, but their application to high-dimensional geoscience systems has been limited due to their inefficiency in high-dimensional systems in standard settings. However, huge progress has been made, and this limitation is disappearing fast due to recent developments in proposal densities, the use of ideas from (optimal) transportation, the use of localization, and intelligent adaptive resampling strategies. Furthermore, powerful hybrids between particle filters and ensemble Kalman filters and variational methods have been developed. We present a state-of-the-art discussion of present efforts to develop particle filters for high-dimensional nonlinear geoscience state-estimation problems, with an emphasis on atmospheric and oceanic applications, including many new ideas, derivations and unifications, highlighting hidden connections, including pseudo-code, and generating a valuable tool and guide for the community. Initial experiments show that particle filters can be competitive with present-day methods for numerical weather prediction, suggesting that they will become mainstream soon.
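The resampling strategies mentioned above can be illustrated with the most basic variant, a bootstrap particle filter with systematic resampling. This is a toy scalar example under assumed Gaussian observation errors and a hypothetical linear forward model; it is a sketch of the generic method, not any specific scheme from the literature.

```python
import numpy as np

rng = np.random.default_rng(42)

def pf_step(particles, weights, y_obs, obs_std, forward):
    """One bootstrap particle filter cycle: propagate, reweight, resample."""
    particles = forward(particles) + rng.normal(0.0, 0.2, particles.shape)  # model noise
    log_w = -0.5 * ((particles - y_obs) / obs_std) ** 2   # Gaussian obs likelihood
    w = weights * np.exp(log_w - log_w.max())             # stabilized in log space
    w /= w.sum()
    # systematic resampling counters weight degeneracy
    n = len(particles)
    positions = (rng.random() + np.arange(n)) / n
    idx = np.minimum(np.searchsorted(np.cumsum(w), positions), n - 1)
    return particles[idx], np.full(n, 1.0 / n)

particles = rng.normal(0.0, 1.0, 500)                     # prior sample
weights = np.full(500, 1.0 / 500)
for y in (0.5, 0.8, 1.1):                                 # synthetic observation sequence
    particles, weights = pf_step(particles, weights, y,
                                 obs_std=0.3, forward=lambda x: 0.9 * x + 0.1)
```

In high dimensions the weights of such a plain bootstrap filter collapse onto a few particles, which is exactly the inefficiency that the proposal-density, transportation, and localization developments discussed above are designed to overcome.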
In recent years, several ensemble-based Kalman filter algorithms have been developed that have been classified as ensemble square root Kalman filters. Parallel to this development, the singular "evolutive" interpolated Kalman (SEIK) filter has been introduced and applied in several studies. Some publications note that the SEIK filter is an ensemble Kalman filter or even an ensemble square root Kalman filter. This study examines the relation of the SEIK filter to ensemble square root filters in detail. It shows that the SEIK filter is indeed an ensemble square root Kalman filter. Furthermore, a variant of the SEIK filter, the error subspace transform Kalman filter (ESTKF), is presented that results in identical ensemble transformations to those of the ensemble transform Kalman filter (ETKF), while having a slightly lower computational cost. Numerical experiments are conducted to compare the performance of three filters (SEIK, ETKF, and ESTKF) using deterministic and random ensemble transformations. The results show better performance for the ETKF and ESTKF methods over the SEIK filter as long as this filter is not applied with a symmetric square root. The findings unify the separate developments that have been performed for the SEIK filter and the other ensemble square root Kalman filters.
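The deterministic ensemble transformation with a symmetric square root, which the abstract identifies as the decisive ingredient, can be sketched in ETKF form. The notation is generic (state ensemble `X`, observation operator `H`, inverse observation error covariance `R_inv`); this is a textbook-style sketch, not the exact formulation of any of the three filters compared above.

```python
import numpy as np

def etkf_analysis(X, y, H, R_inv):
    """Deterministic ETKF analysis with a symmetric square root (sketch)."""
    n, m = X.shape
    xm = X.mean(axis=1, keepdims=True)
    A = X - xm                                       # state anomalies
    HX = H @ X
    S = HX - HX.mean(axis=1, keepdims=True)          # observation-space anomalies
    d = y - (H @ xm).ravel()                         # innovation
    C = (m - 1) * np.eye(m) + S.T @ R_inv @ S        # inverse ensemble-space analysis cov (scaled)
    evals, evecs = np.linalg.eigh(C)
    w = evecs @ np.diag(1.0 / evals) @ evecs.T @ S.T @ R_inv @ d   # mean-update weights
    T = evecs @ np.diag(evals ** -0.5) @ evecs.T * np.sqrt(m - 1)  # symmetric square root
    return xm + A @ w[:, None] + A @ T               # analysis mean + transformed anomalies

rng = np.random.default_rng(1)
n, m, p = 10, 8, 4
X = rng.standard_normal((n, m))                      # forecast ensemble
H = np.eye(p, n)                                     # observe the first p variables
R_inv = np.eye(p) / 0.25                             # observation error variance 0.25
y = rng.standard_normal(p)
Xa = etkf_analysis(X, y, H, R_inv)
```

The symmetric square root computed via the eigendecomposition is what preserves the ensemble mean and keeps the transformation close to the identity; replacing it with a non-symmetric (e.g., Cholesky-based) root is the choice that degrades the SEIK filter in the comparison above.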
The impact of assimilating sea ice thickness data derived from ESA's Soil Moisture and Ocean Salinity (SMOS) satellite together with Special Sensor Microwave Imager/Sounder (SSMIS) sea ice concentration data of the National Snow and Ice Data Center (NSIDC) in a coupled sea ice-ocean model is examined. A period of 3 months from 1 November 2011 to 31 January 2012 is selected to assess the forecast skill of the assimilation system. The 24-hour and longer forecasts are based on the Massachusetts Institute of Technology general circulation model (MITgcm), and the assimilation is performed by a localized Singular Evolutive Interpolated Kalman (LSEIK) filter. For comparison, the assimilation is repeated with only the SSMIS sea ice concentrations. By running two different assimilation experiments and comparing with the unassimilated model, independent satellite-derived data, and in situ observations, it is shown that the SMOS ice thickness assimilation leads to improved thickness forecasts. With SMOS thickness data, the sea ice concentration forecasts also agree better with observations, although this improvement is smaller.
We present results for two colliding black holes (BHs), with angular momentum, spin, and unequal mass. For the first time, gravitational waveforms are computed for a grazing collision from a full 3D numerical evolution. The collision can be followed through the merger to form a single BH, and through part of the ringdown period of the final BH. The apparent horizon is tracked and studied, and physical parameters, such as the mass of the final BH, are computed. The total energy radiated in gravitational waves is shown to be consistent with the total initial mass of the spacetime and the apparent horizon mass of the final BH.

DOI: 10.1103/PhysRevLett.87.271103. PACS numbers: 04.25.Dm, 04.30.Db, 95.30.Sf, 97.60.Lf

The collision of two black holes (BHs) is considered by many researchers to be a primary candidate for generating detectable gravitational waves. As the first generation of gravitational wave detectors [1], with enough sensitivity to potentially detect waves, is coming online next year, the urgency of providing the theoretical information needed not only to interpret but also to detect the waves is very high. However, even in axisymmetry, the problem has proven extremely difficult, requiring nearly 20 years to solve in even limited cases (e.g., [2-5]). In full 3D, progress has been rather slow due to many factors, including (but not limited to) unexpected numerical instabilities, limited computer power, and the difficulties of dealing with spacetime singularities inside BHs. The first true 3D simulation of spinning and moving BHs was performed in [6]. There, the two BHs start out close to each other, much closer than the separation of the last stable orbit of a particle in the Schwarzschild spacetime, and the evolution proceeds through parts of the plunge and ring-down phase of a "grazing collision" within a very short time interval.
The spacetime singularities are dealt with by a particular choice of coordinates: singularity-avoiding slicing and vanishing shift. BH excision [7,8] has allowed improvements in the treatment of the spacetime singularities to the extent that highly accurate simulations of single BHs can be carried out [9-12], and recent applications to the grazing collision of BHs show promise [13]. One of the key limiting factors in the two existing approaches to the grazing collision is the achievable evolution time for which useful numerical data can be obtained, which due to numerical problems has been limited to 7M in [6] and to about 9M-15M in [13]. Here time is measured in units of the total Arnowitt-Deser-Misner (ADM) mass M of the system, as opposed to the bare mass m of one of the BHs. In this paper we consider singularity-avoiding slicing as in [6]. We combine the application of a series of recently developed physics analysis tools and techniques with significant progress made in overcoming the problems mentioned above. Early, preliminary results from this series of simulations have been presented in [14,15], but we now provide the first de...
Ensemble Kalman filter methods are typically used in combination with one of two localization techniques. The first is covariance localization, or direct forecast error localization, in which the ensemble-derived forecast error covariance matrix is Schur-multiplied with a chosen correlation matrix. The second is domain localization, in which the assimilation is split into local domains and the assimilation update is performed independently in each. Domain localization is frequently used in combination with filter algorithms that use the analysis error covariance matrix for the calculation of the gain, like the ensemble transform Kalman filter (ETKF) and the singular evolutive interpolated Kalman (SEIK) filter. However, since the local assimilations are performed independently, smoothness of the analysis fields across the subdomain boundaries becomes an issue of concern. To address the problem of smoothness, an algorithm is introduced that uses domain localization in combination with a Schur-product localization of the forecast error covariance matrix for each local subdomain. In a simple example using the Lorenz-40 system, it is demonstrated that this modification can produce results comparable to those obtained with direct forecast error localization. In addition, these results are compared to the method that uses domain localization in combination with weighting of observations. In the simple example, the method using weighting of observations is less accurate than the new method, particularly if the observation errors are small. Domain localization with weighting of observations is further examined in the case of assimilation of satellite data into the global finite-element ocean circulation model (FEOM) using the local SEIK filter. In this example, the use of observational weighting improves the accuracy of the analysis.
In addition, depending on the correlation function used for weighting, the spectral properties of the solution can be improved.
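The Schur-product (elementwise) covariance localization discussed above can be sketched as follows, using the Gaspari-Cohn correlation function on a periodic 40-variable grid in the spirit of the Lorenz-40 example. The localization radius of 4 grid points and the ensemble size are illustrative assumptions.

```python
import numpy as np

def gaspari_cohn(r):
    """Gaspari-Cohn fifth-order compactly supported correlation (r = distance / c)."""
    r = np.abs(np.asarray(r, dtype=float))
    f = np.zeros_like(r)
    m1 = r <= 1.0
    m2 = (r > 1.0) & (r < 2.0)
    r1, r2 = r[m1], r[m2]
    f[m1] = 1 - (5/3)*r1**2 + (5/8)*r1**3 + 0.5*r1**4 - 0.25*r1**5
    f[m2] = 4 - 5*r2 + (5/3)*r2**2 + (5/8)*r2**3 - 0.5*r2**4 + (1/12)*r2**5 - 2/(3*r2)
    return f                                         # 1 at r = 0, exactly 0 beyond r = 2

rng = np.random.default_rng(3)
n, m = 40, 10
X = rng.standard_normal((n, m))                      # small ensemble on a periodic grid
A = X - X.mean(axis=1, keepdims=True)
Pf = A @ A.T / (m - 1)                               # raw ensemble covariance (rank-deficient)

dist = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
dist = np.minimum(dist, n - dist)                    # periodic grid distance
C = gaspari_cohn(dist / 4.0)                         # localization radius c = 4 grid points
Pf_loc = C * Pf                                      # Schur product damps spurious long-range covariances
```

The Schur product leaves the diagonal (the ensemble variances) untouched while tapering distant covariances to zero, which is what restores full rank and removes the spurious long-range correlations of a small ensemble.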
Chlorophyll data from the Sea-viewing Wide Field-of-view Sensor (SeaWiFS) are assimilated into the three-dimensional global NASA Ocean Biogeochemical Model (NOBM) for the period 1998-2004 in order to obtain an improved representation of chlorophyll in the model. The assimilation is performed by the SEIK filter, which is based on the Kalman filter algorithm. The filter is implemented to univariately correct the concentration of surface total chlorophyll. A localized filter analysis is used, and the filter is simplified by using a static state error covariance matrix. The assimilation provides daily global surface chlorophyll fields and improves the chlorophyll estimates relative to a model simulation without assimilation. The comparison with independent in situ data over the seven years also shows a significant improvement of the chlorophyll estimate. The assimilation reduces the RMS log error of total chlorophyll from 0.43 to 0.32, while the RMS log error is 0.28 for the in situ data considered. That is, the global RMS log error of chlorophyll estimated by the model is reduced by the assimilation from 53% to 13% above the error of SeaWiFS. Regionally, the assimilation estimate exhibits smaller errors than SeaWiFS data in several oceanic basins.