2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00074
Learning a Deep ConvNet for Multi-Label Classification With Partial Labels

Abstract: Deep ConvNets have shown great performance for single-label image classification (e.g. ImageNet), but it is necessary to move beyond the single-label classification task because pictures of everyday life are inherently multi-label. Multi-label classification is a more difficult task than single-label classification because both the input images and output label spaces are more complex. Furthermore, collecting clean multi-label annotations is more difficult to scale up than single-label annotations. To reduce th…

Cited by 196 publications (148 citation statements)
References 55 publications
“…Weighted loss functions are a common approach for different problems, e.g. to address class imbalance [32], to focus on samples that are harder to predict [20], or to solve the related problem of partial labels [11]. However, to our knowledge, this is the first attempt to use a per-sample, per-label weighted loss for missing labels where the missing labels are unknown.…”
Section: Previous Work
confidence: 99%
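The per-sample, per-label weighting described above can be sketched as a masked binary cross-entropy: unknown labels are excluded from the loss and the remaining terms are weighted element-wise. This is an illustrative sketch only — the function name, the {-1, 0, +1} target encoding, and the normalization are assumptions, not the exact loss of any cited paper:

```python
import numpy as np

def masked_weighted_bce(probs, targets, weights):
    """Per-sample, per-label weighted binary cross-entropy.

    targets: +1 (positive), 0 (negative), -1 (unknown/missing).
    Unknown entries are masked out of the loss; observed entries
    are weighted element-wise. Sketch only, not the loss of any
    specific paper.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)       # numerical safety
    known = targets != -1                         # mask unknown labels
    y = np.clip(targets, 0, 1)                    # {-1,0,1} -> {0,0,1}; -1 is masked anyway
    bce = -(y * np.log(probs) + (1 - y) * np.log(1 - probs))
    bce = weights * bce * known                   # per-sample, per-label weighting
    return float(bce.sum() / max(known.sum(), 1))  # average over observed labels
```

Because unknown entries are masked, the prediction for a missing label has no effect on the loss value.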
“…The set of missing labels is often not known. Hence, this problem of MLML is different from the problem of partial labels [11], where the positions of the missing labels are known but their values are unknown, and from noisy labels [31], where a set of both positive and negative labels is corrupted.…”
Section: Introduction
confidence: 99%
“…Deep learning (DL) has shown powerful capabilities in automatically extracting nonlinear and hierarchical features. A great surge of computer vision tasks has benefited from DL and made significant breakthroughs, such as object detection [18], natural language processing [19], and image classification [20]. As a typical classification task, HSI classification has been deeply influenced by DL and has obtained excellent improvements.…”
Section: Introduction
confidence: 99%
“…In the field of deep learning, the CNN (Convolutional Neural Network), R-CNN (Regions with Convolutional Neural Network features), and FCN (Fully Convolutional Network) models are rapidly developing and evolving for image classification, target detection, and semantic segmentation tasks, respectively. For example, Thibaut et al. proposed an end-to-end method to learn a multi-label classifier with partial labels [14]. They introduced a loss function that generalizes the standard binary cross-entropy loss by exploiting label-proportion information and significantly improves performance on multi-label classification.…”
Section: Introduction
confidence: 99%
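The generalized binary cross-entropy described above can be sketched as a BCE computed over the known labels only and rescaled by the per-sample proportion of known labels. This is a simplified sketch: the {-1, 0, +1} target encoding follows a common partial-label convention, and the fixed normalization g(p) = 1/p is an assumption standing in for the more general, tunable weighting discussed in the cited work:

```python
import numpy as np

def partial_bce(probs, targets):
    """Sketch of a partial-BCE-style loss: standard BCE over the
    known labels only, rescaled by the proportion of known labels
    per sample.

    targets: +1 positive, -1 negative, 0 unknown.
    """
    probs = np.clip(probs, 1e-7, 1 - 1e-7)    # numerical safety
    known = targets != 0                       # 0 marks an unknown label
    y = (targets == 1).astype(float)
    bce = -(y * np.log(probs) + (1 - y) * np.log(1 - probs)) * known
    C = targets.shape[1]
    p_known = known.sum(axis=1) / C            # proportion of known labels per sample
    g = 1.0 / np.maximum(p_known, 1.0 / C)     # assumed normalization g(p) = 1/p
    return float((g * bce.sum(axis=1) / C).mean())
```

With all labels known (p = 1) this reduces to the standard mean binary cross-entropy, which is the sense in which it generalizes the usual loss.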