2012 Second International Workshop on Pattern Recognition in NeuroImaging
DOI: 10.1109/prni.2012.31
Structured Sparsity Models for Brain Decoding from fMRI Data

Abstract: Structured sparsity methods have recently been proposed that allow one to incorporate additional spatial and temporal information when estimating models for decoding mental states from fMRI data. These methods carry the promise of being more interpretable than simpler Lasso or Elastic Net methods. However, although sparsity has often been advocated as leading to more interpretable models, we show that sparsity by itself, and also structured sparsity, can lead to unstable models. We present an extension of th…

Cited by 51 publications (66 citation statements)
References 12 publications
“…Contrary to [7] and [5], which proposed to use two nested loops of proximal solvers (ISTA/FISTA), we have here a single loop. As the proximal operator for G leads to a linear system with the same operator to invert, the SVD factorization can be precomputed to speed up the computation.…”
Section: Definition 2 (Fenchel Conjugate)
confidence: 99%
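The quoted speed-up can be illustrated with a minimal sketch, assuming the proximal step reduces to solving (I + ρAᵀA)x = b with a fixed operator A (a common form in such solvers; the cited paper's exact operator may differ, and all names below are illustrative):

```python
import numpy as np

# Hedged sketch, not the paper's exact solver: when every proximal step
# solves a linear system with the SAME operator, its factorization can be
# precomputed once outside the iteration loop.

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))   # illustrative operator
rho = 0.5                           # illustrative step parameter

# Precompute the SVD of A once; with A = U diag(s) V^T and square V,
# (I + rho A^T A)^{-1} = V diag(1 / (1 + rho s^2)) V^T.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
d = 1.0 / (1.0 + rho * s**2)

def solve(b):
    """Apply (I + rho A^T A)^{-1} to b using the precomputed SVD."""
    return Vt.T @ (d * (Vt @ b))

# Each iteration of the outer loop can now call solve() cheaply,
# avoiding a fresh matrix factorization per step.
b = rng.standard_normal(30)
x_fast = solve(b)
x_direct = np.linalg.solve(np.eye(30) + rho * A.T @ A, b)
print(np.allclose(x_fast, x_direct))  # the two solutions agree
```

Precomputing the SVD turns each inner solve into two small matrix-vector products, which is what makes the single-loop scheme competitive with nested ISTA/FISTA loops.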
“…For this purpose, we study empirically a large variety of decoding approaches on simulations where the ground truth is known. In addition, we contribute an efficient method that uses a prior specifically crafted for our purpose, building upon previous work [5].…”
Section: Introduction
confidence: 99%
“…Total variation is a popular regularizer used to enforce local smoothness in a signal (Michel et al 2011; Rudin et al 1992; Tibshirani et al 2005). It has successfully been applied in image de-noising and has recently become of particular interest in the neural imaging community, where it can be used to reconstruct sparse but locally smooth brain activation (Baldassarre et al 2012b; Michel et al 2011). Two kinds of total variation are commonly considered in the literature, isotropic $TV_I(w) = \|\nabla w\|_{2,1}$ and anisotropic $TV_A(w) = \|\nabla w\|_1$ (Beck and Teboulle 2009). In our theoretical analysis we focus on the anisotropic penalty.…”
Section: Introduction
confidence: 99%
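The two total-variation penalties contrasted in the quote can be sketched on a 2D array, assuming forward differences for the gradient (the function names below are illustrative, not from the cited papers):

```python
import numpy as np

# Hedged sketch: anisotropic TV is the l1 norm of all gradient entries,
# while isotropic TV sums the Euclidean norm of the gradient vector at
# each pixel (an l2,1 norm). Forward differences, zero at the far edge.

def gradients(w):
    """Forward-difference gradients of a 2D array."""
    gx = np.diff(w, axis=0, append=w[-1:, :])
    gy = np.diff(w, axis=1, append=w[:, -1:])
    return gx, gy

def tv_anisotropic(w):
    gx, gy = gradients(w)
    return np.abs(gx).sum() + np.abs(gy).sum()

def tv_isotropic(w):
    gx, gy = gradients(w)
    return np.sqrt(gx**2 + gy**2).sum()

w = np.zeros((4, 4))
w[:2, :] = 1.0  # a single axis-aligned edge
# For an axis-aligned edge the two penalties coincide (both equal 4 here);
# they differ on diagonal edges, where the isotropic version is rotation-
# invariant and the anisotropic one is not.
print(tv_anisotropic(w), tv_isotropic(w))  # → 4.0 4.0
```

Both penalties are small only when the image is piecewise constant, which is why TV recovers the sparse-but-locally-smooth activation maps described in the quote.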
“…They enforce more structured constraints on the solution, such as requiring that the discriminative voxels be grouped together in possibly few clusters [33], [34], [35], [36], [37], [38], [39], [40]. In many cases, the parcellation information is not available beforehand; one can therefore either use anatomical regions as an approximation [41], or use data-driven methods to obtain the grouping information [42], [43].…”
Section: Existing Extensions of the Plain Sparse Model
confidence: 99%
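A minimal sketch of the grouping idea described above: given a parcellation (anatomical or data-driven), a group-lasso-style penalty applies an l2 norm within each group and an l1 norm across groups, so whole groups of voxels are switched on or off together. The grouping and values below are illustrative, not taken from the cited papers:

```python
import numpy as np

# Hedged sketch of the group (l2,1) penalty used by structured sparse
# models: sum over groups of the Euclidean norm of each group's weights.
def group_l21(w, groups):
    """Sum of per-group Euclidean norms; zero groups contribute nothing."""
    return sum(np.linalg.norm(w[idx]) for idx in groups)

# Six "voxels" partitioned into three hypothetical parcels.
w = np.array([3.0, 4.0, 0.0, 0.0, 1.0, 0.0])
groups = [np.array([0, 1]), np.array([2, 3]), np.array([4, 5])]

# norm([3,4]) + norm([0,0]) + norm([1,0]) = 5 + 0 + 1
print(group_l21(w, groups))  # → 6.0
```

Penalizing this sum drives entire groups to exactly zero, which is what yields clustered (rather than scattered) sets of discriminative voxels.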