2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)
DOI: 10.1109/cvpr.2019.00961
Attentive Region Embedding Network for Zero-Shot Learning

Cited by 224 publications (157 citation statements)
References 37 publications
“…Attention in ZSL: Attention modules have been applied to zero-shot learning for localization. Liu et al. (2019c) introduce localization on the semantic information, Xie et al. (2019) propose localization on the visual features, and Zhu et al. (2019c) further extend this to multi-localization guided by semantics.…”
Section: Related Work
confidence: 99%
“…In the case where all attributes form an all-OR group, it becomes similar to ESZSL [137] and learns a bilinear compatibility function. AREN [190] uses attentive region embedding while learning the bilinear mapping to the semantic space in order to enhance semantic transfer. ZSLPP [38] combines two networks: VPDE-net, which detects bird parts in images, and PZSC-net, which trains a part-based zero-shot classifier from noisy Wikipedia text.…”
Section: Label Embedding
confidence: 99%
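The bilinear compatibility function the excerpt refers to scores an image feature against a class attribute vector via a learned matrix. A minimal NumPy sketch follows; the dimensions, names, and random weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bilinear_compatibility(x, W, a):
    """Score s(x, a) = x^T W a between image feature x and class attributes a."""
    return x @ W @ a

rng = np.random.default_rng(0)
d, k, C = 2048, 85, 10             # feature dim, attribute dim, number of classes
W = rng.normal(size=(d, k))        # learned visual-to-semantic mapping (random placeholder)
x = rng.normal(size=d)             # image feature (e.g., a CNN embedding)
A = rng.normal(size=(C, k))        # per-class attribute vectors

scores = np.array([bilinear_compatibility(x, W, a) for a in A])
pred = int(np.argmax(scores))      # predicted class = highest compatibility
```

At test time, unseen classes are handled simply by scoring against their attribute vectors, since only `A` changes while `W` is fixed.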
“…A multi-attention loss encourages compact and diverse attention distributions by applying geometric constraints over the attention maps. The AREN model [35] discovers multiple semantic parts of images, guided by an attention mechanism and a compatibility loss. The model is also coupled with a parallel network branch to guarantee more stable semantic transfer from the perspective of second-order collaboration.…”
Section: Attention Mechanisms in ZSL
confidence: 99%
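The attentive region embedding idea described above can be sketched as attention-weighted pooling over spatial region features followed by a projection into the semantic (attribute) space. This is a hedged simplification with invented names and dimensions, not AREN's actual architecture.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def attentive_region_embedding(regions, w_att, W_sem):
    """Attention-weight R region features, pool them, and map to semantic space.

    regions: (R, d) region features from a CNN feature map;
    w_att:   (d,)  attention scorer; W_sem: (d, k) visual-to-semantic projection.
    All parameter names here are illustrative assumptions.
    """
    alpha = softmax(regions @ w_att)   # (R,) attention weights over regions
    pooled = alpha @ regions           # (d,) attention-weighted pooled feature
    return pooled @ W_sem, alpha       # (k,) semantic embedding, plus the weights

rng = np.random.default_rng(1)
R, d, k = 49, 512, 85                  # 7x7 regions, feature dim, attribute dim
emb, alpha = attentive_region_embedding(
    rng.normal(size=(R, d)), rng.normal(size=d), rng.normal(size=(d, k)))
```

A multi-attention variant would learn several `w_att` vectors, one per discovered part, with a diversity constraint keeping their attention maps from collapsing onto the same region.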
“…Conventionally, training a traditional classification model requires at least some data samples for all target classes, and deep learning models significantly amplify this issue. Collecting training instances of every class is not always easy, especially in fine-grained image classification [19,20,36], and therefore much attention has been given to zero-shot learning (ZSL) algorithms as a solution [3,5,10,17,18,21,22,25,27,35,38]. ZSL expands the classifiers beyond the seen classes with abundant data to unseen classes without enough image samples.…”
Section: Introduction
confidence: 99%