Biomag 96 2000
DOI: 10.1007/978-1-4612-1260-7_96

Current Density Reconstructions Using the L1 Norm

Cited by 23 publications (32 citation statements)
References 3 publications
“…The MNE algorithms produce low-resolution solutions with cortical sources spreading over multiple cortical sulci and gyri, which do not reflect the generally sparse and compact nature of most cortical activations evidenced by functional magnetic resonance imaging (fMRI) data. In an attempt to produce more physiologically plausible images than can be obtained using the MNE, generalized minimum norm estimate (GMNE) algorithms using the L1-norm instead of the L2-norm have been explored (Matsuura and Okabe, 1995; Wagner et al, 1998; Uutela et al, 1999). The attraction of these approaches is that they can be solved by a linear programming (LP) method; the properties of LP guarantee an optimal solution in which the number of nonzero sources does not exceed the number of measurements, so the solutions are sparse and compact.…”
Section: Introduction
confidence: 99%
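The LP argument quoted above is easy to make concrete. The following is a minimal sketch, not the paper's implementation: the toy lead field L, the data vector b, and the use of scipy.optimize.linprog are assumptions made for illustration; the standard split of the source vector into nonnegative parts is what turns the L1 objective into a linear program.

```python
# Minimal sketch (illustrative, not the paper's code): L1-norm current
# density reconstruction posed as a linear program with an exact data fit.
import numpy as np
from scipy.optimize import linprog

def l1_minimum_norm(L, b):
    """Minimize ||j||_1 subject to L @ j = b via linear programming."""
    n = L.shape[1]
    c = np.ones(2 * n)                  # objective: sum of positive and negative parts
    A_eq = np.hstack([L, -L])           # encodes L @ (j_pos - j_neg) = b
    res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs-ds")
    return res.x[:n] - res.x[n:]        # j = j_pos - j_neg

# Toy example: 8 sensors, 40 candidate sources, 2 truly active sources.
rng = np.random.default_rng(0)
L = rng.standard_normal((8, 40))
j_true = np.zeros(40)
j_true[[5, 20]] = [1.0, -0.5]
b = L @ j_true
j_hat = l1_minimum_norm(L, b)
print("nonzero sources:", int(np.sum(np.abs(j_hat) > 1e-8)))  # at most 8 (= number of sensors)
```

Because the simplex solver returns a basic solution, the number of nonzero variables cannot exceed the number of equality constraints (one per sensor), which is exactly the sparsity guarantee described in the excerpt.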
“…In the first attempt at L1-norm GMNE (Matsuura and Okabe, 1995), this discrepancy was simply ignored, because LP could not handle it. Wagner et al (1998) proposed a new decomposition of a vector source in a coordinate system with 12 or even 20 axes to minimize the orientation discrepancy. The number of axes could theoretically be infinite.…”
Section: Introduction
confidence: 99%
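A quick back-of-the-envelope illustration (my own, not taken from Wagner et al, and restricted to a planar source for simplicity) shows why many axes help: if a unit source is decomposed with nonnegative coefficients onto k axes spread evenly around the circle, its L1 cost exceeds its true magnitude by at most a factor of 1/cos(π/k).

```python
# Worst-case over-penalty of a unit source under a nonnegative decomposition
# onto k evenly spread axes (planar case, illustrative assumption): the
# maximum occurs midway between two adjacent axes and equals 1/cos(pi/k).
import numpy as np

for k in (4, 12, 20):
    worst = 1.0 / np.cos(np.pi / k)
    print(f"k = {k:2d} axes: worst-case L1 penalty of a unit source = {worst:.3f}")
```

With 4 axes the penalty can be off by about 41%, with 12 axes by about 3.5%, and with 20 axes by about 1.2%, so the orientation dependence of the L1 norm becomes negligible well before the number of axes approaches the theoretical limit of infinity.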
“…The nonlinear L1-norm was used for regularization, minimizing the absolute values, because this norm is known to result in the most focal reconstructions (20,41,42). The weighting between the minimum-norm condition and the goodness of fit of the data, Lambda, was iteratively adjusted until a desired fit error of 1/SNR was achieved (χ²-criterion) (43).…”
Section: Procedures IV: Current Density Reconstruction (CDR)
confidence: 99%
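The iterative adjustment of Lambda described above amounts to a discrepancy-principle search. The sketch below is one plausible way to carry it out and is not the authors' procedure: the ISTA solver, the bisection in log-Lambda, and all variable names are assumptions made for illustration.

```python
# Hedged sketch: tune lambda in an L1-regularized least-squares fit until the
# relative residual matches the target 1/SNR (a chi-square-like criterion).
import numpy as np

def ista(L, b, lam, n_iter=500):
    """Minimize 0.5*||L @ j - b||^2 + lam*||j||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(L, 2) ** 2       # 1 / Lipschitz constant of the data term
    j = np.zeros(L.shape[1])
    for _ in range(n_iter):
        g = j - step * (L.T @ (L @ j - b))       # gradient step on the data-fit term
        j = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft threshold
    return j

def fit_to_snr(L, b, snr, lam_lo=1e-6, lam_hi=1e3, tol=1e-3):
    """Bisect lambda on a log scale until ||b - L @ j|| / ||b|| is about 1/SNR."""
    target = 1.0 / snr
    j, lam = np.zeros(L.shape[1]), lam_hi
    for _ in range(50):
        lam = np.sqrt(lam_lo * lam_hi)
        j = ista(L, b, lam)
        err = np.linalg.norm(b - L @ j) / np.linalg.norm(b)
        if abs(err - target) < tol:
            break
        if err > target:
            lam_hi = lam                         # fit too loose: weaken the regularization
        else:
            lam_lo = lam                         # fitting noise: strengthen the regularization
    return j, lam
```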
“…Under such conditions, the incoherence needs to be computed between measurement supports and signal supports in transform domains (or linear expansion basis functions). While these nice properties of CS have been implemented in problems such as magnetic resonance imaging (MRI) [16] and radar signals [17], most reported L1-norm regularizations [10,18,19] for EEG/MEG inverse problems search for sparseness in the original source domain, which often leads to over-focused solutions [20]. Recently, Ding [20] and Chang et al [21] have started to solve EEG/MEG inverse problems using L1-norm regularizations that explore sparseness in transform domains, both of which suggest promising inverse imaging characteristics compared with regular L1-norm regularizations [20].…”
Section: Introduction
confidence: 99%
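One way to read the transform-domain proposal is as a synthesis formulation: impose the L1 penalty on coefficients in a basis whose elements are spatially extended, so that a sparse coefficient vector no longer forces an over-focused source image. The snippet below is only a schematic of that reading; the basis V, the boxcar patches, and the reuse of a generic L1 solver are illustrative assumptions, not the methods of Ding [20] or Chang et al [21].

```python
# Schematic of transform-domain L1 regularization (illustrative): sparsity is
# requested for coefficients x with j = V @ x, so the effective gain matrix
# becomes L @ V and any source-domain L1 solver can be reused unchanged.
import numpy as np

def boxcar_basis(n, width=5):
    """Crude 'extended patch' basis: each column switches on a compact patch."""
    V = np.zeros((n, n))
    for i in range(n):
        lo, hi = max(0, i - width // 2), min(n, i + width // 2 + 1)
        V[lo:hi, i] = 1.0
    return V

def transform_domain_l1(L, b, V, solver):
    """Impose L1 sparsity on transform coefficients rather than on the sources."""
    x = solver(L @ V, b)        # e.g. the LP or ISTA routine sketched earlier
    return V @ x                # map sparse coefficients back to source space
```

Here the patch width stands in for the transform's notion of spatial extent; wavelet- or gradient-based dictionaries are the more principled choices discussed in the compressed-sensing literature.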