Semi-Supervised Learning (2006)
DOI: 10.7551/mitpress/6173.003.0031

Metric-Based Approaches for Semi-Supervised Regression and Classification

Cited by 1 publication (4 citation statements, all published 2011). References: 0 publications.
“…It is interesting that, empirically for regression estimation, a quite aggressive multiplicative penalty outperforms an additive penalty based on the same idea, since most regularization strategies currently in use (including Bayesian MAP estimation) employ additive penalties. We further note that the additive criterion in [79], as applied to regression estimation with squared-error loss, has already been suggested in [16]. However, Cataltepe et al. [16] do not give a very convincing theoretical motivation.…”
Section: Adaptive Regularization Criteria
Confidence: 79%
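The contrast drawn here between the two penalty shapes is easy to make concrete. The following is a minimal, hypothetical sketch (not code from the chapter; `empirical_loss`, the `complexity` score, and the squared-error setting are illustrative assumptions): the additive criterion adds a weighted penalty to the training loss, while the multiplicative criterion rescales the loss by the penalty, so a complex hypothesis cannot compensate for a large penalty with a marginally smaller training error.

```python
import numpy as np

def empirical_loss(h, X, y):
    """Mean squared error of hypothesis h on the labeled sample (X, y)."""
    return np.mean((h(X) - y) ** 2)

def additive_criterion(h, X, y, complexity, lam=1.0):
    """Classical regularization shape: loss + lambda * penalty.
    MAP estimation, ridge-style regularizers, and AIC/BIC-type
    criteria all share this additive form."""
    return empirical_loss(h, X, y) + lam * complexity

def multiplicative_criterion(h, X, y, complexity):
    """The more aggressive shape the quote refers to: the penalty
    rescales the training loss instead of being added to it."""
    return empirical_loss(h, X, y) * complexity

# Model selection then simply minimizes the chosen criterion over a
# candidate list, e.g.:
#   best = min(candidates,
#              key=lambda c: multiplicative_criterion(c.h, X, y, c.complexity))
# where each candidate pairs a fitted hypothesis with its complexity score.
```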
“…However, the technique might still exhibit overfitting, simply because the triangle inequality (15) is usually far from tight. An extension of this strategy, called ADJ, attempts a first-order bias correction between the pseudometric d and its estimated version, say d̂. In a later paper [79], Schuurmans and Southey get rid of the a-priori hierarchy and focus on criteria which are additive or multiplicative combinations of the empirical loss d̂(h, P(t|x)) and a penalty. They then propose penalties based on the idea that overfitting of h can sometimes be detected by comparing, for some fixed origin function φ, the distances d(h, φ) (which can be estimated reliably using…”
[Footnote 29: It is not our plan to go into discussions about foundations of Occam's razor here.]
Section: Adaptive Regularization Criteria
Confidence: 99%
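The penalty sketched in this quote compares two estimates of the distance from h to a fixed origin function φ: d(h, φ), estimated reliably from unlabeled inputs, and its labeled-sample counterpart d̂(h, φ), which suffers the same small-sample optimism as the training loss. Below is a hypothetical sketch of the resulting multiplicative criterion under assumed squared-error distances; the function names and the constant-origin choice are illustrative, not the paper's exact formulation.

```python
import numpy as np

def dist(f, g, X):
    """Squared-error pseudometric between two functions, estimated by
    averaging over inputs X -- no labels are needed, so this can be
    computed on unlabeled data."""
    return np.mean((f(X) - g(X)) ** 2)

def metric_penalized_loss(h, X_lab, y_lab, X_unlab, phi, eps=1e-12):
    """Multiplicative metric-based criterion: the empirical loss is
    rescaled by the ratio of d(h, phi) estimated on unlabeled inputs
    (reliable) to d_hat(h, phi) estimated on the labeled sample
    (optimistic in exactly the way the training loss is). A ratio
    well above 1 signals that h behaves very differently away from
    the labeled points, i.e. that it is likely overfitting."""
    emp_loss = np.mean((h(X_lab) - y_lab) ** 2)
    d_unlab = dist(h, phi, X_unlab)   # reliable: plentiful unlabeled x
    d_lab = dist(h, phi, X_lab)       # same small sample as the loss
    return emp_loss * d_unlab / (d_lab + eps)

# A simple fixed origin: the constant predictor at the labeled mean,
#   phi = lambda Z: np.full(len(Z), y_lab.mean())
# Candidate hypotheses (e.g. polynomial fits of increasing degree) are
# then ranked by metric_penalized_loss, with the unlabeled inputs
# supplying the distance estimates that expose overfitting.
```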