A new diagnostic for measuring the ability of atmospheric models to reproduce realistic low-frequency variability is introduced in the context of Held and Suarez's 1994 proposal for comparing the dynamics of different general circulation models. A simple procedure to compute τ, the e-folding time scale of the annular mode autocorrelation function, is presented. This quantity concisely quantifies the strength of low-frequency variability in a model and is easy to compute in practice. The sensitivity of τ to model numerics is then studied for two dry primitive equation models driven with the Held-Suarez forcings: one pseudospectral and the other finite volume. For both models, τ is found to be unrealistically large at low horizontal resolutions, such as those often used in studies in which long integrations are needed to analyze model variability at low frequencies. More surprisingly, for the pseudospectral model, τ is found to be particularly sensitive to vertical resolution, especially with a triangular truncation at wavenumber 42 (a very common resolution choice). At sufficiently high resolution, the annular mode autocorrelation time scale in both models appears to converge to values of 20-25 days, suggesting the existence of an intrinsic time scale at which the extratropical jet vacillates in the Held and Suarez system. The importance of τ for computing the correct response of a model to climate change is explicitly demonstrated by perturbing the pseudospectral model with simple torques. The amplitude of the model's response to external forcing increases as τ increases, as suggested by the fluctuation-dissipation theorem.
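As a rough illustration of such a procedure (a generic sketch, not the authors' code: the function name, the log-linear fit, and the choice of maximum lag are our own assumptions), τ can be estimated from a daily annular-mode index by fitting an exponential decay to its autocorrelation function:

```python
import numpy as np

def efolding_timescale(index, max_lag=100, dt=1.0):
    """Estimate the e-folding time scale tau of a (daily) annular-mode index.

    The autocorrelation function is computed out to max_lag samples, and a
    decay time is obtained from a least-squares fit of log ACF ~ -lag/tau
    over the initial run of positive autocorrelations.
    """
    x = np.asarray(index, dtype=float)
    x = x - x.mean()
    var = np.dot(x, x) / x.size
    lags = np.arange(1, max_lag + 1)
    acf = np.array([np.dot(x[:-k], x[k:]) / (x.size * var) for k in lags])

    # keep only the initial run of positive autocorrelations
    positive = acf > 0
    n_keep = acf.size if positive.all() else np.argmin(positive)
    lags, acf = lags[:n_keep], acf[:n_keep]

    # fit log ACF = -lag / tau, so the slope is -1/tau
    slope, _ = np.polyfit(lags * dt, np.log(acf), 1)
    return -1.0 / slope
```

Applied to a daily index from a long integration, this returns τ in days; in the well-resolved configurations described above one would expect values near 20-25 days.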
This manuscript describes a technique for computing partial rank-revealing factorizations, such as a partial QR factorization or a partial singular value decomposition. The method takes as input a tolerance ε and an m × n matrix A, and returns an approximate low-rank factorization of A that is accurate to within precision ε in the Frobenius norm (or some other easily computed norm). The rank k of the computed factorization, which is an output of the algorithm, is in all examples we examined very close to the theoretically optimal ε-rank. The proposed method is inspired by the Gram-Schmidt algorithm and has the same O(mnk) asymptotic flop count. However, the method relies on randomized sampling to avoid column pivoting, which allows it to be blocked and hence accelerates practical computations by reducing communication. Numerical experiments demonstrate that, for every matrix that was tried, the accuracy of the scheme is at least as good as that of column-pivoted QR, and is sometimes much better. Computational speed is also improved substantially, in particular on GPU architectures.
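A minimal NumPy sketch of the general idea, assuming a blocked, randomized QB-type factorization driven by a Frobenius-norm tolerance (the block size, stopping rule, and function name are illustrative assumptions, not the manuscript's exact algorithm):

```python
import numpy as np

def randomized_blocked_qb(A, eps, block=32, rng=None):
    """Build A ~ Q @ B with ||A - Q B||_F <= eps, growing Q block by block.

    Each step draws a Gaussian test matrix, samples the range of the current
    residual, re-orthonormalizes against the columns already collected, and
    stops once the Frobenius norm of the residual drops below eps.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    Q_blocks, B_blocks = [], []
    R = A.copy()                      # residual, deflated as blocks are added
    while (np.linalg.norm(R, 'fro') > eps
           and sum(q.shape[1] for q in Q_blocks) < min(m, n)):
        Omega = rng.standard_normal((n, block))
        Y = R @ Omega                 # sample the range of the residual
        Qi, _ = np.linalg.qr(Y)
        for Qj in Q_blocks:           # re-orthonormalize against earlier blocks
            Qi -= Qj @ (Qj.T @ Qi)
        Qi, _ = np.linalg.qr(Qi)
        Bi = Qi.T @ R
        R -= Qi @ Bi                  # remove the captured part of the range
        Q_blocks.append(Qi)
        B_blocks.append(Bi)
    Q = np.hstack(Q_blocks) if Q_blocks else np.zeros((m, 0))
    B = np.vstack(B_blocks) if B_blocks else np.zeros((0, n))
    return Q, B
```

The rank of the output is simply the number of columns of Q, and the blocked structure is what allows the bulk of the work to be cast as large matrix-matrix multiplications.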
We propose a class of spherical wavelet bases for the analysis of geophysical models and for the tomographic inversion of global seismic data. Its multiresolution character allows for modelling with an effective spatial resolution that varies with position within the Earth. Our procedure is numerically efficient and can be implemented with parallel computing. We discuss two possible types of discrete wavelet transforms in the angular dimension of the cubed sphere. We describe the benefits and drawbacks of these constructions and apply them to analyse the information in two published seismic wave speed models of the mantle, using the statistics of wavelet coefficients across scales. The localization and sparsity properties of wavelet bases allow a sparse solution to inverse problems to be found by iterative minimization of a combination of the ℓ2 norm of the data residuals and the ℓ1 norm of the model wavelet coefficients. By validation with realistic synthetic experiments we illustrate the likely gains from our new approach in future inversions of finite-frequency seismic data.
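In the notation of a generic linear inverse problem (our notation, not necessarily the authors'), with data d, a forward operator G that already includes the synthesis from wavelet coefficients to wave speed perturbations, model wavelet coefficients w, and a regularization weight λ, the minimization described above can be written as:

```latex
\min_{\mathbf{w}} \; \tfrac{1}{2}\,\lVert \mathbf{d} - \mathbf{G}\,\mathbf{w} \rVert_2^2
  \;+\; \lambda\,\lVert \mathbf{w} \rVert_1
```

The ℓ2 term measures the data misfit while the ℓ1 term promotes sparsity of the model in the wavelet domain.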
Matrix decompositions are fundamental tools in applied mathematics, statistical computing, and machine learning. In particular, low-rank matrix decompositions are vital and are widely used for data analysis, dimensionality reduction, and data compression. Massive datasets, however, pose a computational challenge for traditional algorithms, placing significant constraints on both memory and processing power. Recently, the powerful concept of randomness has been introduced as a strategy to ease the computational load. The essential idea of probabilistic algorithms is to employ some amount of randomness in order to derive a smaller matrix from a high-dimensional data matrix. The smaller matrix is then used to compute the desired low-rank approximation. Such algorithms are shown to be computationally efficient for approximating matrices with low-rank structure. We present the R package rsvd and provide a tutorial introduction to randomized matrix decompositions. Specifically, randomized routines for the singular value decomposition, (robust) principal component analysis, interpolative decomposition, and CUR decomposition are discussed. Several examples demonstrate the routines and show the computational advantage over other methods implemented in R.
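The rsvd package itself is written in R; the following NumPy sketch (function name, oversampling, and power-iteration defaults are our assumptions) only illustrates the basic randomized SVD idea described above, not the package's API:

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, n_iter=2, rng=None):
    """Approximate truncated SVD via random projection.

    A sketch Y = A @ Omega captures the dominant range of A; a few power
    iterations sharpen the capture when the singular values decay slowly.
    The SVD of the small projected matrix B = Q.T @ A is then lifted back
    to the original space.
    """
    rng = np.random.default_rng(rng)
    m, n = A.shape
    k = min(rank + oversample, n)
    Omega = rng.standard_normal((n, k))
    Y = A @ Omega
    for _ in range(n_iter):                 # optional power iterations
        Y = A @ (A.T @ Y)
    Q, _ = np.linalg.qr(Y)                  # orthonormal basis for the sketch
    B = Q.T @ A                             # small (k x n) projected matrix
    Ub, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ Ub
    return U[:, :rank], s[:rank], Vt[:rank, :]
```

All of the expensive operations are matrix-matrix products with the (much smaller) sketch, which is the source of the computational advantage on large, low-rank data.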
The manuscript describes efficient algorithms for the computation of the CUR and ID decompositions. The methods used are based on simple modifications to the classical truncated pivoted QR decomposition, which means that highly optimized library codes can be utilized for implementation. For certain applications, further acceleration can be attained by incorporating techniques based on randomized projections. Numerical experiments demonstrate advantageous performance compared to existing techniques for computing CUR factorizations.
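As a sketch of the kind of modification involved (our illustration using SciPy's column-pivoted QR, not the manuscript's exact algorithm), a rank-k column interpolative decomposition can be read off a truncated pivoted QR factorization as follows:

```python
import numpy as np
from scipy.linalg import qr

def column_id(A, k):
    """Rank-k column interpolative decomposition A ~ A[:, cols] @ T.

    A column-pivoted QR identifies k informative columns; the interpolation
    matrix T expresses every column of A (approximately) as a combination
    of those k columns.
    """
    Q, R, piv = qr(A, mode='economic', pivoting=True)
    R11 = R[:k, :k]
    R12 = R[:k, k:]
    # T maps the k selected columns onto all n columns (in pivot order)
    T = np.hstack([np.eye(k), np.linalg.solve(R11, R12)])
    # undo the pivoting so T lines up with the original column order
    inv_perm = np.argsort(piv)
    return piv[:k], T[:, inv_perm]
```

Because the heavy lifting is an ordinary pivoted QR, an optimized library routine (here scipy.linalg.qr) does essentially all of the work; a CUR factorization can then be assembled by applying the same idea to the rows of the selected columns.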
We present a realistic application of an inversion scheme for global seismic tomography that uses as prior information the sparsity of a solution, defined as having few nonzero coefficients under the action of a linear transformation. In this paper, the sparsifying transform is a wavelet transform. We use an accelerated iterative soft-thresholding algorithm as a regularization strategy, which produces sparse models in the wavelet domain. The approach and scheme we present may be of use for preserving sharp edges in a tomographic reconstruction and for minimizing the number of features in the solution warranted by the data. The method is tested on a data set of time delays for finite-frequency tomography using the USArray network, the first application of this approach to real data in global seismic tomography. The approach presented should also be suitable for other imaging problems. From a comparison with a more traditional inversion using damping and smoothing constraints, we show that (1) we generally retrieve similar features, (2) fewer nonzero coefficients under a properly chosen representation (such as wavelets) are needed to explain the data at the same level of root-mean-square misfit, (3) the model is sparse or compressible in the wavelet domain, and (4) we do not need to construct a heterogeneous mesh to capture the available resolution.
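A minimal sketch of an accelerated iterative soft-thresholding (FISTA-type) iteration of the kind described above, assuming for simplicity that the forward operator and the wavelet synthesis are combined into a single dense matrix G (names and defaults are illustrative, not the study's implementation):

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft-thresholding: the proximal operator of t * ||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def fista(G, d, lam, n_iter=200, L=None):
    """Accelerated soft-thresholding for min_w 0.5||d - G w||^2 + lam ||w||_1.

    Here w holds the model coefficients in a sparsifying (e.g. wavelet)
    basis, and G maps those coefficients to predicted data.
    """
    if L is None:                              # Lipschitz constant of the gradient
        L = np.linalg.norm(G, 2) ** 2
    w = np.zeros(G.shape[1])
    z, t = w.copy(), 1.0
    for _ in range(n_iter):
        grad = G.T @ (G @ z - d)               # gradient of the quadratic misfit
        w_new = soft_threshold(z - grad / L, lam / L)
        t_new = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t * t))
        z = w_new + ((t - 1.0) / t_new) * (w_new - w)   # Nesterov momentum
        w, t = w_new, t_new
    return w
```

The soft-thresholding step is what drives small wavelet coefficients exactly to zero, producing the sparse, compressible models referred to in points (2) and (3).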
A forecasting methodology for the prediction of both normal prices and price spikes in the day-ahead energy market is proposed. The method is based on an iterative strategy implemented as a combination of two modules applied separately for normal price and price spike predictions. The normal price module is a mixture of wavelet transform, linear AutoRegressive Integrated Moving Average (ARIMA), and nonlinear neural network models. The probability of a price spike occurrence is produced by a compound classifier in which three single classification techniques are used jointly to make a decision. Combined with the spike value prediction technique, the output from the price spike module aims to provide a comprehensive price spike forecast. The overall electricity price forecast is formed by combining the normal price and price spike forecasts. The forecast accuracy of the proposed method is evaluated with real data from the Finnish Nord Pool Spot day-ahead energy market. The proposed method provides significant improvement in both normal price and price spike prediction accuracy compared with some of the most popular forecasting techniques applied in case studies of energy markets.
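Schematically, the final combination step might look like the following sketch (the threshold and array names are illustrative placeholders; the paper's modules are far richer than this):

```python
import numpy as np

def combine_forecasts(normal_price, spike_prob, spike_value, threshold=0.5):
    """Merge the normal-price and spike-module outputs into one hourly forecast.

    Wherever the compound classifier flags a likely spike (probability above
    `threshold`), the spike-value prediction replaces the normal-price one.
    All inputs are arrays of the same length (one entry per delivery hour).
    """
    normal_price = np.asarray(normal_price, dtype=float)
    spike_value = np.asarray(spike_value, dtype=float)
    spike_prob = np.asarray(spike_prob, dtype=float)
    return np.where(spike_prob > threshold, spike_value, normal_price)
```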