In the literature on regularization, many different parameter choice methods have been proposed in both deterministic and stochastic settings. However, it is not always easy to tell from the available information how well a particular method will perform in a given situation, or how it compares to other methods. This paper reviews most of the existing parameter choice methods and evaluates and compares them in a large simulation study for spectral cut-off and Tikhonov regularization. The test cases cover a wide range of linear inverse problems with both white and colored stochastic noise. The results show some marked differences between the methods, in particular in their stability with respect to the noise and its type. We conclude with a table of properties of the methods and a summary of the simulation results, from which we identify the best methods.
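For orientation, here is a minimal Python sketch of the two regularization schemes the study evaluates, phrased via the SVD of a discretized operator K: Tikhonov shrinks every singular component smoothly, while spectral cut-off keeps components above a threshold and drops the rest. The function name and the thresholding convention are illustrative choices, not taken from the paper.

```python
import numpy as np

def regularized_solution(K, y, lam, method="tikhonov"):
    """Regularized solution of K f = y via the SVD K = U S V^T.
    'tikhonov' filters each singular component by s/(s^2 + lam);
    'cutoff' keeps components with s^2 > lam and drops the rest.
    Illustrative sketch only; conventions vary in the literature."""
    U, s, Vt = np.linalg.svd(K, full_matrices=False)
    b = U.T @ y                      # data in the left singular basis
    if method == "tikhonov":
        filt = s / (s**2 + lam)      # f_lam = V diag(filt) U^T y
    else:                            # spectral cut-off
        filt = np.zeros_like(s)
        keep = s**2 > lam
        filt[keep] = 1.0 / s[keep]
    return Vt.T @ (filt * b)
```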
Summary. Let f_λ be the regularized solution of a general, linear operator equation, Kf₀ = g, from discrete, noisy data y_i = g(x_i) + ε_i, i = 1, …, n, where the ε_i are uncorrelated random errors. We consider the prominent method of generalized cross-validation (GCV) for choosing the crucial regularization parameter λ. The practical GCV estimate λ̂_V and its "expected" counterpart λ_V are defined as the minimizers of the GCV function V(λ) and EV(λ), respectively, where E denotes expectation. We investigate the asymptotic performance of λ_V with respect to each of the following loss functions: the risk, an L² norm on the output error Kf_λ − g, and a whole class of stronger norms on the input error f_λ − f₀. In the special cases of data smoothing and Fourier differentiation, it is known that as n → ∞, λ_V is asymptotically optimal (ao) with respect to the risk criterion. We show this to be true in general, and also extend it to the L² norm criterion. The asymptotic optimality is independent of the error variance, the ill-posedness of the problem and the smoothness index of the solution f₀. For the input error criterion, it is shown that λ_V is weakly ao for a certain class of f₀ if the smoothness of f₀ relative to the regularization space is not too high, but otherwise λ_V is sub-optimal. This result is illustrated in the case of numerical differentiation.
Mathematics Subject Classifications (1991): 65J10, 62G05, 47A50
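The GCV function referred to here has a standard closed form: with influence matrix A(λ) (so that Kf_λ = A(λ)y), V(λ) = n⁻¹‖(I − A(λ))y‖² / [n⁻¹ tr(I − A(λ))]². Below is a hedged Python sketch that evaluates V(λ) for Tikhonov regularization via the SVD and picks λ̂_V on a grid; the synthetic test problem and all names are illustrative, not from the paper.

```python
import numpy as np

def gcv_curve(K, y, lams):
    """GCV function V(lam) for Tikhonov regularization
    f_lam = argmin ||K f - y||^2 + lam ||f||^2, via the SVD of K."""
    n = len(y)
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    b = U.T @ y                          # data in the left singular basis
    r_perp2 = y @ y - b @ b              # residual energy outside range(K)
    vals = []
    for lam in lams:
        d = s**2 / (s**2 + lam)          # eigenvalues of A(lam)
        resid2 = np.sum(((1.0 - d) * b)**2) + r_perp2
        vals.append((resid2 / n) / ((n - d.sum()) / n)**2)
    return np.array(vals)

# synthetic test problem (illustrative only)
rng = np.random.default_rng(0)
n = 100
K = np.array([[np.exp(-abs(i - j) / 10.0) for j in range(n)]
              for i in range(n)]) / n
f0 = np.sin(np.linspace(0.0, np.pi, n))
y = K @ f0 + 0.01 * rng.standard_normal(n)

lams = np.logspace(-12, 0, 200)
lam_gcv = lams[np.argmin(gcv_curve(K, y, lams))]
print("GCV choice of lambda:", lam_gcv)
```

Minimizing V on a log-spaced grid like this is the usual practical realization of the estimate λ̂_V.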
Lukas, M.A. (2006), Robust generalized cross-validation for choosing the regularization parameter. Let f_λ be the regularized solution of a general linear operator equation Kf₀ = g from discrete, noisy data y_i = L_i g + ε_i, i = 1, …, n, where the L_i are linear functionals and the ε_i are uncorrelated random errors. A prominent method for the selection of the crucial regularization parameter λ is generalized cross-validation (GCV). It is known that GCV has good asymptotic properties as n → ∞, but it may not be reliable for small or medium sized n, sometimes giving an estimate that is far too small. We propose a new robust GCV method (RGCV), which chooses λ to be the minimizer of γV(λ) + (1 − γ)F(λ), where V(λ) is the GCV function, F(λ) is an approximate average measure of the influence of each data point on f_λ, and γ ∈ (0, 1) is a robustness parameter. We show that, for any n, RGCV is less likely than GCV to choose a very small value of λ, resulting in a more robust method. We also show that RGCV has good asymptotic properties as n → ∞ for general linear operator equations with uncorrelated errors. The function EF(λ) approximates the risk ER(λ) for values of λ that are asymptotically a bit smaller than the minimizer of ER(λ) (where V(λ) may not approximate well). The "expected" RGCV estimate is asymptotically optimal as n → ∞ with respect to the "robust risk" γER(λ) + (1 − γ)v(λ), where v(λ) is the variance component of the risk, and it has the optimal decay rate with respect to ER(λ) and stronger error criteria. The GCV and RGCV methods are compared in numerical simulations for the problem of estimating the second derivative from noisy data. The results for RGCV with n = 51 are consistent with the asymptotic results and, for a large range of γ values, RGCV is more reliable and accurate than GCV.
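To make the criterion concrete, the sketch below extends the GCV computation above to RGCV. The abstract does not give a formula for F(λ); here we assume F(λ) = μ₁(λ)V(λ) with μ₁(λ) = tr(A(λ)²)/n, the form used in the RGCV literature, so that the criterion becomes (γ + (1 − γ)μ₁(λ))V(λ). Treat the exact form of F as an assumption of this sketch.

```python
import numpy as np

def rgcv_curve(K, y, lams, gamma=0.3):
    """RGCV criterion gamma*V(lam) + (1-gamma)*F(lam) for Tikhonov
    regularization, assuming F(lam) = mu1(lam) * V(lam) with
    mu1(lam) = tr(A(lam)^2)/n. A sketch, not the paper's exact code."""
    n = len(y)
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    b = U.T @ y
    r_perp2 = y @ y - b @ b
    vals = []
    for lam in lams:
        d = s**2 / (s**2 + lam)                   # eigenvalues of A(lam)
        resid2 = np.sum(((1.0 - d) * b)**2) + r_perp2
        V = (resid2 / n) / ((n - d.sum()) / n)**2  # GCV function
        mu1 = np.sum(d**2) / n                     # average influence measure
        vals.append((gamma + (1.0 - gamma) * mu1) * V)
    return np.array(vals)
```

For γ = 1 this reduces to ordinary GCV; since μ₁(λ) is largest at small λ, the extra factor inflates the criterion exactly where GCV tends to pick estimates that are far too small.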
Several prominent methods have been developed for the crucial task of selecting the regularization parameter in the regularization of linear ill-posed problems with discrete, noisy data. The discrepancy principle (DP), the minimum bound (MB) method and generalized cross-validation (GCV) are known to be at least weakly asymptotically optimal with respect to appropriate loss functions as the number n of data points approaches infinity. We compare these methods, together with the unbiased risk (UR) method and an unbiased error (UE) method, in three other ways. First, n is taken to be fixed and, using a discrete Picard condition, upper and lower bounds on the 'expected' DP and MB estimates are derived in terms of the optimal parameters with respect to the risk and the expected error. Next, we define a simple measure of the variability of a practical estimate and, for each of the five methods, determine its asymptotic behaviour. The results show that the asymptotic stability of GCV is the same as that of the UR method and superior to that of DP, which in turn is better than that of the MB and UE methods. Finally, numerical simulations of the five methods demonstrate that the theoretical conclusions hold in practice.
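Of the methods compared, the discrepancy principle is the simplest to state in code: assuming the error variance σ² is known, it selects λ so that the squared residual matches its expected noise level, ‖Kf_λ − y‖² = nσ². A hedged sketch in the same Tikhonov/SVD setting as above; the bracket and iteration count are illustrative choices.

```python
import numpy as np

def discrepancy_lambda(K, y, sigma2, lo=1e-14, hi=1e2, iters=80):
    """Discrepancy principle for Tikhonov regularization: find lam with
    ||K f_lam - y||^2 = n * sigma2. Uses the fact that the squared
    residual is increasing in lam, so bisection in log-lambda works.
    Assumes the target lies inside the bracket [lo, hi]."""
    n = len(y)
    U, s, _ = np.linalg.svd(K, full_matrices=False)
    b = U.T @ y
    r_perp2 = y @ y - b @ b

    def resid2(lam):
        d = s**2 / (s**2 + lam)
        return np.sum(((1.0 - d) * b)**2) + r_perp2

    target = n * sigma2
    for _ in range(iters):
        mid = np.sqrt(lo * hi)        # geometric midpoint (log bisection)
        if resid2(mid) < target:
            lo = mid                  # residual too small: lam too small
        else:
            hi = mid
    return np.sqrt(lo * hi)
```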