2013 IEEE International Conference on Computer Vision
DOI: 10.1109/iccv.2013.58

Group Norm for Learning Structured SVMs with Unstructured Latent Variables

Abstract: Latent variable models have been applied to a number of computer vision problems. However, the complexity of the latent space is typically left as a free design choice. A larger latent space results in a more expressive model, but such models are prone to overfitting and are slower to perform inference with. The goal of this paper is to regularize the complexity of the latent space and learn which hidden states are really relevant for prediction. Specifically, we propose using group-sparsity-inducing regularization …
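
The abstract describes penalizing the weight blocks associated with individual hidden states using a group-sparsity-inducing norm, so that states that do not help prediction are driven to zero and can be pruned. The sketch below is a rough illustration only: it computes an l2,1 (group-lasso) penalty over hypothetical per-state weight blocks and lists the states that survive pruning. The block partition, the `group_norm` and `active_states` helpers, the threshold, and the regularization strength are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def group_norm(W, groups):
    """Sum of l2 norms of weight blocks (an l2,1 / group-lasso penalty)."""
    return sum(np.linalg.norm(W[g]) for g in groups)

def active_states(W, groups, tol=1e-6):
    """Hidden states whose weight block has not been driven to (near) zero."""
    return [k for k, g in enumerate(groups) if np.linalg.norm(W[g]) > tol]

# Toy setup: 3 hidden states, 4 weights each (illustrative numbers only).
rng = np.random.default_rng(0)
W = rng.normal(size=12)
groups = [np.arange(0, 4), np.arange(4, 8), np.arange(8, 12)]
W[groups[1]] = 0.0                 # pretend training zeroed out state 1
lam = 0.1                          # regularization strength (assumed)
penalty = lam * group_norm(W, groups)
print("penalty:", penalty, "active states:", active_states(W, groups))
```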

Cited by 5 publications (1 citation statement)
References 30 publications
“…In [11] the optimal number of mixture components is learnt using a group-sparsity inducing norm. An important difference of our approach compared to these works is that we model the grouping of instances as a continuous soft-assignment instead of a discrete labeling.…”
mentioning
confidence: 99%
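
The citing work contrasts its continuous soft-assignment of instances to components with the discrete labeling used in [11]. A minimal sketch of that distinction, using made-up component scores and a softmax temperature chosen purely for illustration:

```python
import numpy as np

def hard_assignment(scores):
    """Discrete labeling: each instance goes to its single best-scoring component."""
    return np.argmax(scores, axis=1)

def soft_assignment(scores, temperature=1.0):
    """Continuous soft-assignment: softmax responsibilities over components."""
    z = scores / temperature
    z -= z.max(axis=1, keepdims=True)      # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=1, keepdims=True)

# Toy scores of 2 instances under 3 components (illustrative numbers only).
scores = np.array([[2.0, 1.0, 0.1],
                   [0.3, 0.2, 2.5]])
print(hard_assignment(scores))   # one component index per instance
print(soft_assignment(scores))   # each row sums to 1
```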