2011
DOI: 10.1007/978-3-642-22092-0_50

Generalized Sparse Regularization with Application to fMRI Brain Decoding

Abstract: Many current medical image analysis problems involve learning thousands or even millions of model parameters from extremely few samples. Employing sparse models provides an effective means for handling the curse of dimensionality, but other propitious properties beyond sparsity are typically not modeled. In this paper, we propose a simple approach, generalized sparse regularization (GSR), for incorporating domain-specific knowledge into a wide range of sparse linear models, such as the LASSO and grou…
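For context, here is a minimal sketch of plain ℓ1-regularized (LASSO) regression in a many-features/few-samples setting of the kind the abstract describes. This illustrates ordinary sparse regression only, not the GSR extension proposed in the paper; the dimensions and penalty weight are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso

# Illustrative "fMRI-like" setting: far more features (voxels) than samples.
rng = np.random.default_rng(0)
n_samples, n_features = 50, 5000
X = rng.standard_normal((n_samples, n_features))
true_w = np.zeros(n_features)
true_w[:10] = 1.0                       # only a handful of informative features
y = X @ true_w + 0.1 * rng.standard_normal(n_samples)

# Plain LASSO: squared error + alpha * ||w||_1 (alpha chosen for illustration).
model = Lasso(alpha=0.05, max_iter=10000)
model.fit(X, y)
print("non-zero weights:", np.count_nonzero(model.coef_))
```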

Cited by 90 publications (125 citation statements)
References 27 publications
“…The following hidden layers are PCA. When training the first monolayer autoencoder, KL regularization and weight decay regularization are used (Ng, 2011). When training the next monolayer autoencoders, only KL is used.…”
Section: Methods Of Traffic Areas Determination
confidence: 99%
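For the two penalties mentioned above, a minimal numpy sketch of the sparse-autoencoder regularizers described in the (Ng, 2011) lecture notes is given below: a KL-divergence term between a target sparsity level and the average hidden activation, plus an L2 weight-decay term. The function name, the penalty weights beta and lam, and the input shapes are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sparse_autoencoder_penalties(H, W_e, W_d, rho=0.05, beta=3.0, lam=1e-4):
    """KL sparsity penalty plus weight decay, in the spirit of (Ng, 2011).

    H   : hidden activations, shape (n_samples, n_hidden), values in (0, 1)
    rho : target average activation; beta and lam weight the two penalties.
    """
    rho_hat = H.mean(axis=0)                      # average activation per hidden unit
    kl = np.sum(rho * np.log(rho / rho_hat) +
                (1 - rho) * np.log((1 - rho) / (1 - rho_hat)))
    weight_decay = 0.5 * (np.sum(W_e ** 2) + np.sum(W_d ** 2))
    return beta * kl + lam * weight_decay

# Illustrative use with random activations and weights (shapes are assumptions).
rng = np.random.default_rng(0)
H = sigmoid(rng.standard_normal((32, 25)))
W_e = 0.01 * rng.standard_normal((25, 100))
W_d = 0.01 * rng.standard_normal((100, 25))
print(sparse_autoencoder_penalties(H, W_e, W_d))
```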
“…The auto-encoder is based on the concept of sparse coding [11]. An AE can be considered as a discriminative DNN in which the target output is similar to the input, and the number of hidden-layer nodes is lower than the number of input nodes.…”
Section: Deep Learning Approaches
confidence: 99%
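As a small illustration of that undercomplete layout, the sketch below uses a hidden layer narrower than the input and treats the input itself as the reconstruction target; all dimensions and initial values are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden = 100, 25                              # fewer hidden nodes than inputs
W_e = 0.01 * rng.standard_normal((n_hidden, n_in))    # encoder weights
W_d = 0.01 * rng.standard_normal((n_in, n_hidden))    # decoder weights
x = rng.random(n_in)
h = 1.0 / (1.0 + np.exp(-(W_e @ x)))                  # hidden code (bottleneck)
x_hat = W_d @ h                                       # reconstruction; target is x itself
```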
“…A feature vector can be computed as h = f_θ(x) from an input x, with the form f_θ(x) = σ(b + W_e x), where σ is the logistic sigmoid function, serving as the activation function (Ng, 2011):…”
Section: Basic Structure
confidence: 99%
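A minimal numpy sketch of that encoder step follows; the function name and all shapes and values are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def encode(x, W_e, b):
    """Feature vector h = f_theta(x) = sigmoid(b + W_e @ x)."""
    return sigmoid(b + W_e @ x)

# Illustrative use: a 100-dimensional input mapped to 25 hidden features.
rng = np.random.default_rng(0)
W_e = 0.01 * rng.standard_normal((25, 100))
b = np.zeros(25)
x = rng.random(100)
h = encode(x, W_e, b)
```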
“…To prevent overfitting, a regularization term is also added to the Euclidean norm of the difference. The overall cost function is given in (Ng, 2011). The weight parameter λ controls the relative importance of the regularization term compared to the reconstruction-error term. One can learn the parameters θ = {W_e, b, W_d, d} of the encoder and the decoder simultaneously by minimizing the cost function J(θ).…”
Section: Basic Structure
confidence: 99%
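Since the quoted cost function itself is not reproduced above, here is a hedged numpy sketch of a cost of the form described: a squared reconstruction error plus a λ-weighted weight-decay regularizer. The exact normalization, 1/2 factors, and the use of a sigmoid output layer are assumptions, not the equation from the cited work.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def cost_J(theta, X, lam=1e-4):
    """J(theta): reconstruction error plus lam-weighted weight decay.

    theta = (W_e, b, W_d, d); X holds one sample per row. Scaling factors
    and the output nonlinearity are illustrative assumptions.
    """
    W_e, b, W_d, d = theta
    H = sigmoid(X @ W_e.T + b)          # encoder: h = sigmoid(b + W_e x)
    X_hat = sigmoid(H @ W_d.T + d)      # decoder reconstruction
    recon = 0.5 * np.sum((X_hat - X) ** 2) / X.shape[0]
    reg = 0.5 * (np.sum(W_e ** 2) + np.sum(W_d ** 2))
    return recon + lam * reg

# Illustrative use on random data; in practice J(theta) would be minimized
# over all of theta, e.g. by gradient descent with backpropagation.
rng = np.random.default_rng(0)
n_in, n_hidden, n_samples = 100, 25, 32
theta = (0.01 * rng.standard_normal((n_hidden, n_in)), np.zeros(n_hidden),
         0.01 * rng.standard_normal((n_in, n_hidden)), np.zeros(n_in))
X = rng.random((n_samples, n_in))
print(cost_J(theta, X))
```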