1973
DOI: 10.1190/1.1440378
Robust Modeling With Erratic Data

Abstract: An attractive alternative to least‐squares data modeling techniques is the use of absolute value error criteria. Unlike the least‐squares techniques, the inclusion of some infinite blunders along with the data will hardly affect the solution to an otherwise well‐posed problem. An example of this great stability is seen when an average is determined by using the median rather than the arithmetic mean. Algorithms for absolute error minimization are often approximately as costly as least‐squares algorithms; howev…
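The abstract's median-versus-mean point can be checked numerically in a few lines; this is a minimal sketch (the data values are illustrative, not from the paper):

```python
import statistics

# Clean data, and the same data plus one "infinite blunder" (a wild outlier).
# The median (the L1 estimate of location) barely moves, while the
# arithmetic mean (the L2 estimate) is ruined by the single bad value.
clean = [9.8, 10.1, 10.0, 9.9, 10.2]
blundered = clean + [1e6]  # one erratic measurement

print(statistics.mean(clean), statistics.mean(blundered))      # mean explodes
print(statistics.median(clean), statistics.median(blundered))  # 10.0 -> 10.05
```

The median shifts only from 10.0 to 10.05, while the mean jumps by five orders of magnitude, which is exactly the stability the abstract describes.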

Cited by 728 publications
(429 citation statements)
References 6 publications
“…Because of this feature, (1) has been used for more than three decades in several signal processing problems where sparseness is sought; some early references are [12], [37], [50], [53]. In the 1990's, seminal work on the use of ℓ 1 sparseness-inducing penalties/log-priors appeared in the literature: the now famous basis pursuit denoising (BPDN, [11,Section 5]) criterion and the least absolute shrinkage and selection operator (LASSO, [54]).…”
Section: A. Background
confidence: 99%
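The excerpt's point that ℓ1 penalties induce sparsity (the mechanism behind BPDN and the LASSO) can be illustrated with the soft-thresholding operator, the proximal map of λ‖x‖₁ used inside many ℓ1 solvers. This is an illustrative sketch; `soft_threshold` is a name chosen here, not a function from any cited work:

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of lam * |x|: shrink toward zero, snapping
    any coefficient smaller than lam exactly to zero."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

coeffs = np.array([3.0, 0.4, -0.2, -5.0, 0.05])
print(soft_threshold(coeffs, 0.5))  # three of five coefficients become exactly 0
```

An ℓ2 (ridge) penalty would instead shrink every coefficient by a multiplicative factor, leaving none exactly zero; the hard zeroing of small coefficients is what makes the ℓ1 penalty sparseness-inducing.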
“…Use of the L 1 measure of misfits is less sensitive to the influence of outliers and yields more stable model estimates (e.g. Claerbout & Muir 1973; Scales et al. 1988; Farquharson & Oldenburg 1998; Tarantola 2005). The L 1 norm measure of misfit has been successfully employed in geomagnetic field modelling of recent and historical data (Lesur et al. 2008; Finlay et al. 2012) and regional archeomagnetic field models (Pavón-Carrasco et al. 2009), but it has not previously been tested in the construction of global field models on millennial time scales.…”
Section: Introduction
confidence: 99%
“…In many geophysical scenarios however, when longer-tailed distributions are found empirically, a Laplacian distribution of residuals is a more suitable description and the L 1 maximum likelihood estimate of parameters is desired (Claerbout & Muir 1973; Constable 1988). Use of the L 1 measure of misfits is less sensitive to the influence of outliers and yields more stable model estimates (e.g.…”
Section: Introduction
confidence: 99%
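The correspondence the excerpt invokes, between Laplacian residuals and the L1 estimate, follows from the standard maximum-likelihood argument; a sketch:

```latex
% Independent residuals r_i(m) with a Laplacian (double-exponential) density:
p(r_i) = \tfrac{1}{2b}\, e^{-|r_i|/b}
\quad\Longrightarrow\quad
\max_m \prod_i p\bigl(r_i(m)\bigr)
\;\iff\;
\min_m \sum_i \bigl|r_i(m)\bigr|.
```

The same argument with a Gaussian density \(e^{-r_i^2/2\sigma^2}\) recovers least squares, which is why the choice between L1 and L2 misfit measures amounts to an assumption about the tails of the residual distribution.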
“…This is to reduce the effect of bad data points in the field data sets. According to Farquharson and Oldenburg (1998) and other authors (Claerbout and Muir, 1973; Menke, 1989; Press et al., 1992; Parker, 1994), this method is less sensitive to outliers in the data, particularly when used with the regularised least-squares optimisation method. For each data set, the inversion is carried out using the L 2 (smooth) inversion method as well as the L 1 (blocky) norm inversion method for the model roughness filter.…”
Section: Methods
confidence: 99%
“…This makes it particularly sensitive to bad data points (Farquharson and Oldenburg, 1998). An alternative method is to minimise the sum of the absolute values of the data misfit, or an L 1 norm measure of the data misfit (Claerbout and Muir, 1973). One simple method of implementing an L 1 norm based optimisation method using the standard least-squares formulation is the iteratively reweighted least-squares method (Wolke and Schwetlick, 1988).…”
Section: Methods
confidence: 99%
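The iteratively reweighted least-squares approach mentioned in the last excerpt can be sketched in a few lines. This is a minimal illustration, not the implementation from the cited papers: the weights 1/|rᵢ| turn the L1 problem into a sequence of weighted least-squares solves, and the `eps` floor guarding against division by zero is an assumption of this sketch:

```python
import numpy as np

def irls_l1(G, d, n_iter=50, eps=1e-8):
    """L1-norm (least absolute deviations) fit of d ~ G @ m via IRLS."""
    m = np.linalg.lstsq(G, d, rcond=None)[0]  # start from the L2 solution
    for _ in range(n_iter):
        r = d - G @ m
        w = 1.0 / np.maximum(np.abs(r), eps)  # IRLS weights for the L1 norm
        sw = np.sqrt(w)
        # Weighted least squares: scale each row by sqrt(w_i) and re-solve.
        m = np.linalg.lstsq(sw[:, None] * G, sw * d, rcond=None)[0]
    return m

# Usage: a line y = 2x + 1 with one gross outlier. The L1 fit stays near the
# true parameters, while the plain L2 fit is pulled toward the bad point.
x = np.arange(10.0)
d = 2 * x + 1
d[5] += 100.0  # one bad data point
G = np.column_stack([x, np.ones_like(x)])
m_l1 = irls_l1(G, d)                          # close to [2, 1]
m_l2 = np.linalg.lstsq(G, d, rcond=None)[0]   # visibly biased
```

Because the outlier's residual stays large, its weight shrinks toward zero and it effectively drops out of the fit, which is the "blocky" L1 behaviour the excerpts describe.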