2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2015.7298756
ConceptLearner: Discovering visual concepts from weakly labeled image collections

Abstract: Discovering visual knowledge from weakly labeled data is crucial to scaling up computer vision recognition systems, since it is expensive to obtain fully labeled data for a large number of concept categories. In this paper, we propose ConceptLearner, a scalable approach to discovering visual concepts from weakly labeled image collections. Thousands of visual concept detectors are learned automatically, without a human in the loop for additional annotation. We show that these learned detectors can be applied…


Cited by 37 publications (31 citation statements)
References 28 publications (45 reference statements)
“…In Guillaumin et al [2009] and Verbeek et al [2010], logistic regression models are built per tag to promote rare tags. In a similar spirit, Zhou et al [2015] learn an ensemble of SVMs by treating tagged images as positive training examples and untagged images as candidate negative training examples. Using the ensemble to classify image regions generated by automated image segmentation, the authors assign tags at the image level and the region level simultaneously.…”
Section: Model
confidence: 99%
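The per-tag weak-supervision setup described in the statement above can be sketched as follows. This is an illustrative stand-in, not the cited authors' implementation: a simple perceptron replaces the SVMs, and the feature vectors, tag names, and hyperparameters are all made up for the example.

```python
# Sketch of the weak-supervision scheme: for each tag, images carrying the
# tag are positives and untagged images are treated as candidate negatives.
# A perceptron stands in for the SVMs of the cited ensemble; features and
# tags here are illustrative toy data.
from typing import Dict, List, Tuple

def train_tag_classifier(data: List[Tuple[List[float], set]], tag: str,
                         epochs: int = 20, lr: float = 0.1) -> List[float]:
    """Train one linear detector for `tag` (perceptron stand-in for an SVM)."""
    dim = len(data[0][0])
    w = [0.0] * (dim + 1)  # last entry is the bias term
    for _ in range(epochs):
        for x, tags in data:
            y = 1.0 if tag in tags else -1.0  # untagged -> candidate negative
            score = sum(wi * xi for wi, xi in zip(w, x)) + w[-1]
            if y * score <= 0:                # misclassified: update weights
                for i in range(dim):
                    w[i] += lr * y * x[i]
                w[-1] += lr * y
    return w

def predict(w: List[float], x: List[float]) -> bool:
    return sum(wi * xi for wi, xi in zip(w, x)) + w[-1] > 0

# Toy image features [redness, greenness]; only some images carry the tag.
images = [([0.9, 0.1], {"sunset"}), ([0.8, 0.2], {"sunset"}),
          ([0.1, 0.9], set()), ([0.2, 0.8], set())]
ensemble: Dict[str, List[float]] = {"sunset": train_tag_classifier(images, "sunset")}
print(predict(ensemble["sunset"], [0.85, 0.15]))  # → True
```

In the cited work one such detector is learned per tag and the resulting ensemble scores segmented image regions, so a tag can be assigned to a whole image and to the region that triggered it at the same time.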
“…They show that attributes depend on features in all layers of the CNN, which will be particularly relevant to our investigation of perceptual material attributes in CNNs. ConceptLearner, proposed by Zhou et al [25], uses weak supervision, in the form of images with associated text content, to discover semantic attributes. These attributes correspond to terms within the text that appear in the images.…”
Section: Materials Perception and Convolutional Neural Network
confidence: 99%
“…Similarly, for conventional object and scene recognition, attributes like "sunset" or "natural" have also been extracted for use as independent features. Shankar et al [23] generate pseudo-labels to improve the attribute prediction accuracy of a Convolutional Neural Network, and Zhou et al [25] discover concepts from weakly-supervised image data. In both cases, the attributes are considered on their own, not within the context of higher-level categories.…”
Section: Perceptual Materials Attributes in Convolutional Neural Networks
confidence: 99%
“…LEVAN [8] explores the sub-categories of a given concept by mining bigrams from a large text corpus and using the bigrams to retrieve training images from image search engines. Recently, Zhou et al [44] use noisily tagged Flickr images to train concept detectors, but do not consider the semantic similarity among different tags. Our VCD framework is able to generate the concept vocabulary for them to learn detectors.…”
Section: Related Work
confidence: 99%
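The bigram-mining idea attributed to LEVAN in the statement above can be illustrated with a minimal sketch. This is not the cited system: the corpus, seed concept, and frequency threshold are invented for the example, and a real pipeline would mine a web-scale corpus and then query image search engines with the surviving bigrams.

```python
# Illustrative sketch: mine frequent bigrams containing a seed concept from
# a toy corpus and keep them as candidate sub-category queries. Corpus and
# threshold are made up for the example.
from collections import Counter
from typing import List

def mine_subcategories(corpus: List[str], concept: str, min_count: int = 2) -> List[str]:
    """Return frequent bigrams containing `concept`, as candidate sub-categories."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.lower().split()
        for a, b in zip(words, words[1:]):      # slide over adjacent word pairs
            if concept in (a, b):
                counts[(a, b)] += 1
    return [" ".join(bg) for bg, c in counts.most_common() if c >= min_count]

corpus = ["a red sports car on the road", "the sports car raced past",
          "a vintage car show", "another vintage car photo", "one car alone"]
print(mine_subcategories(corpus, "car"))  # → ['sports car', 'vintage car']
```

Singleton bigrams like "car alone" fall below the threshold, which is the filtering step that separates reusable sub-categories from incidental word pairs.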