2019
DOI: 10.1186/s13640-018-0371-x

Semantic embeddings of generic objects for zero-shot learning

Abstract: Zero-shot learning (ZSL) models use semantic representations of visual classes to transfer the knowledge learned from a set of training classes to a set of unknown test classes. In the context of generic object recognition, previous research has mainly focused on developing custom architectures, loss functions, and regularization schemes for ZSL using word embeddings as semantic representation of visual classes. In this paper, we exclusively focus on the effect of different semantic representations on the accu…
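As a rough illustration of the ZSL setting described in the abstract (not the paper's specific model), the sketch below learns a linear ridge-regression map from visual features to a word-embedding space using seen classes only, then classifies a test sample by its nearest unseen-class prototype. All dimensions, the synthetic data, and the choice of ridge regression are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
d_vis, d_sem = 512, 300          # hypothetical visual / embedding dimensions
n_seen, n_unseen = 5, 3

# Hypothetical word-embedding prototypes (e.g., from a pre-trained embedding model),
# one per class, L2-normalized.
prototypes = rng.normal(size=(n_seen + n_unseen, d_sem))
prototypes /= np.linalg.norm(prototypes, axis=1, keepdims=True)

# Synthetic training data for *seen* classes only: visual features + class labels.
n_train = 200
y_train = rng.integers(0, n_seen, size=n_train)
X_train = rng.normal(size=(n_train, d_vis))

# Learn a linear map W: visual space -> semantic space by ridge regression onto
# the seen-class prototypes (a standard ZSL baseline, not the paper's exact model).
S_train = prototypes[y_train]
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d_vis), X_train.T @ S_train)

# At test time, project an image feature and pick the nearest *unseen* prototype
# by cosine similarity.
x_test = rng.normal(size=(d_vis,))
proj = x_test @ W
proj /= np.linalg.norm(proj)
unseen_protos = prototypes[n_seen:]
pred = int(np.argmax(unseen_protos @ proj))
print("predicted unseen class index:", pred)
```

The key point of the setting is that the mapping is trained without any data from the unseen classes; only their semantic representations are available at test time.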

Cited by 7 publications (4 citation statements)
References 23 publications

Citation statements (ordered by relevance):
“…On the other hand, we focus on generic object recognition and make close to no assumption regarding the nature of these descriptions. The closest work to ours is probably Hascoet et al [9], in which different methods to obtain prototypes from WordNet definitions are evaluated, but reported performance is significantly below that of usual word embeddings.…”
Section: Introduction and Related Work
confidence: 91%
“…Veeranna et al. (2016) adopted pre-trained word embeddings for measuring semantic similarity between a label and documents. Further effort has been devoted to zero-shot learning using semantic embeddings by (Hascoet et al., 2019; Zhang et al., 2019; Xie and Virtanen, 2021; Rios and Kavuluru, 2018; Yin et al., 2019; Xia et al., 2018; Pushp and Srivastava, 2017; Puri and Catanzaro, 2019; Yogatama et al., 2017; Chen et al., 2021; Gong and Eldardiry, 2021).…”
Section: Topic Modeling and Inference
confidence: 99%
“…In other words, it addresses multi-class learning problems when some classes do not have sufficient training data. However, during the learning process, additional visual and semantic features such as word embeddings [132], visual attributes [133], or descriptions [134] can be assigned to both seen and unseen classes. In the context of multimodality, a multimodal mapping scheme typically combines visual and semantic attributes using only data related to the seen classes.…”
Section: Zero-shot Learning
confidence: 99%
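The statement above also mentions visual attributes as an alternative semantic representation assigned to both seen and unseen classes. A minimal sketch in that attribute-based spirit is given below: per-attribute detectors are fit on seen-class data only, and an unseen class is chosen by matching predicted attribute scores to class attribute signatures. The signatures, dimensions, and use of ridge regression as the attribute detector are assumptions for illustration, not any cited paper's exact method.

```python
import numpy as np

rng = np.random.default_rng(1)
d_vis, n_attr = 64, 5

# Hypothetical binary attribute signatures (e.g., "has stripes", "has wings")
# for 4 seen and 2 unseen classes; in practice these come from annotations.
sig_seen = rng.integers(0, 2, size=(4, n_attr))
sig_unseen = rng.integers(0, 2, size=(2, n_attr))

# Training data from seen classes only; per-image attribute targets are
# inherited from the class-level signatures.
n_train = 300
y = rng.integers(0, 4, size=n_train)
X = rng.normal(size=(n_train, d_vis))
A = sig_seen[y]

# Fit one ridge-regression "attribute detector" per attribute.
lam = 1.0
W = np.linalg.solve(X.T @ X + lam * np.eye(d_vis), X.T @ A)   # shape (d_vis, n_attr)

# At test time, predict attribute scores and match them to the unseen signatures.
x_test = rng.normal(size=(d_vis,))
scores = x_test @ W
pred = int(np.argmin(((sig_unseen - scores) ** 2).sum(axis=1)))
print("predicted unseen class index:", pred)
```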