2012 IEEE Workshop on Mathematical Methods in Biomedical Image Analysis
DOI: 10.1109/mmbia.2012.6164735

Max margin general linear modeling for neuroimage analyses

Abstract: General linear modeling (GLM) is one of the most commonly used approaches for performing voxel-based analyses (VBA) for hypothesis testing in neuroimaging. In this paper we tie support vector machine based regression (SVR) and classical significance testing to provide the benefits of max-margin estimation in the GLM setting. Using Welch-Satterthwaite approximations, we compute the degrees of freedom (df) of error (also known as residual df) for ε-SVR. We demonstrate that ε-SVR can result not only in robustness of esti…
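
To make the setup concrete, here is a minimal, hypothetical sketch of ε-SVR used as the voxelwise estimator in place of ordinary least squares. The design matrix `X`, response `y`, and the `epsilon`/`C` values are illustrative assumptions, scikit-learn's `LinearSVR` stands in for the ε-SVR solver, and the Welch-Satterthwaite residual-df computation described in the abstract is not reproduced here.

```python
# Hypothetical sketch: eps-SVR as the voxelwise estimator in a GLM-style analysis.
# Not the authors' implementation; LinearSVR stands in for the eps-SVR solver.
import numpy as np
from sklearn.svm import LinearSVR

rng = np.random.default_rng(0)
n_scans, n_regressors = 120, 3                      # assumed dimensions for one voxel
X = rng.standard_normal((n_scans, n_regressors))    # design matrix (regressors)
beta_true = np.array([1.5, 0.0, -0.8])
y = X @ beta_true + rng.standard_normal(n_scans)    # one voxel's time series

# OLS estimate, for comparison with the max-margin fit
beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)

# eps-SVR estimate: epsilon-insensitive L1 loss with regularization strength C
svr = LinearSVR(epsilon=0.5, C=1.0, loss="epsilon_insensitive", max_iter=10000)
svr.fit(X, y)
beta_svr = svr.coef_

residuals_svr = y - svr.predict(X)   # residuals feed the downstream significance tests
print("OLS betas:", np.round(beta_ols, 3))
print("SVR betas:", np.round(beta_svr, 3))
```

The residuals from such a fit are what the df-of-error calculation and the significance tests discussed in the citation statements below operate on.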

Cited by 1 publication (2 citation statements)
References: 18 publications

“…Of course one could introduce additional penalty to the OLS resulting in for example, ridge regression (Marquardt and Snee, 1975). We can also use other robust loss functions such as the ε-insensitive ℓ1-loss function in combination with ‖β‖1 as penalty (Adluru et al., 2012). The ε-insensitive ℓ1-loss used in the popular support-vector regression (SVR) is defined as…”
Section: Methods
Mentioning; confidence: 99%
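
The quoted passage is truncated before the equation it introduces; for reference, the textbook form of the ε-insensitive ℓ1 loss it refers to (written here for a scalar residual, not copied from the citing paper) is:

```latex
% Standard epsilon-insensitive l1 loss for a residual r = y - x^T beta
L_\varepsilon(r) \;=\; \max\!\bigl(0,\ |r| - \varepsilon\bigr)
\;=\;
\begin{cases}
0, & |r| \le \varepsilon,\\[2pt]
|r| - \varepsilon, & |r| > \varepsilon .
\end{cases}
```

Residuals inside the ε-tube incur no loss, which is the source of the robustness referred to in the abstract.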

“…Testing significance of contrasts of βs using residuals involves using ratios of residuals of nested models resulting in F-tests (please see Adluru et al. (2012) for examples and details). Adluru et al. (2012) also show that more effective (data-driven) definitions of the df can be used to obtain better sensitivity in rejecting the null hypotheses.…”
Section: Methods
Mentioning; confidence: 99%
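
For readers unfamiliar with the construction this statement refers to, the generic nested-model F-statistic (the standard form, not the paper's data-driven-df variant) compares the residual sums of squares of a full model and a reduced model in which the contrast of interest is constrained to zero:

```latex
% Generic nested-model F-test; RSS = residual sum of squares, df = residual degrees of freedom
F \;=\; \frac{\bigl(\mathrm{RSS}_{\mathrm{reduced}} - \mathrm{RSS}_{\mathrm{full}}\bigr)\,/\,\bigl(\mathit{df}_{\mathrm{reduced}} - \mathit{df}_{\mathrm{full}}\bigr)}
             {\mathrm{RSS}_{\mathrm{full}}\,/\,\mathit{df}_{\mathrm{full}}}
```

Per the statement above, the cited work's contribution is to supply appropriate (data-driven) residual df for the ε-SVR fits entering this ratio.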