2020
DOI: 10.48550/arxiv.2008.06365
Preprint
An Overview of Deep Learning Architectures in Few-Shot Learning Domain

Abstract: Since 2012, deep learning has revolutionized artificial intelligence, achieving state-of-the-art results in domains ranging from image classification to speech generation. Despite its great promise, current architectures come with the prerequisite of large amounts of data. Few-shot learning (also known as one-shot learning) is a subfield of machine learning that aims to build models that can learn a desired objective from little data, similar to how humans learn. In this paper, …

Cited by 11 publications (13 citation statements)
References 11 publications
“…Domain adaptation for natural language processing and computer vision tasks is widely studied (Blitzer et al., 2007; Mansour et al., 2008; Daumé III, 2009; Sandu et al., 2010; Foster et al., 2010; Wang and Cardie, 2013; Sun et al., 2016; Liu et al., 2019b, 2020b; Gururangan et al., 2020; Winata et al., 2020; Jadon, 2020; Yin, 2020; Liu et al., 2020a,d). However, little has been done to investigate domain adaptation for the abstractive summarization task.…”
Section: Domain Adaptation (mentioning)
confidence: 99%
“…In this equation, Y is the label of a given pair (0 for negative pairs and 1 for positive pairs), D_w is a similarity index describing the similarity between the embeddings of the samples in the pair, and m is a parameter known as the margin. This function consists of two terms: the first term pushes observations from similar classes to be represented as closely as possible, while the second term is responsible for increasing the dissimilarity of observations from different classes as much as possible (Jadon, 2020).…”
Section: Contrastive Learning and Siamese Neural Network (mentioning)
confidence: 99%
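The excerpt refers to "this equation" without reproducing it. A plausible reconstruction, assuming the statement describes the standard pairwise contrastive loss (Hadsell et al., 2006) under the labeling convention above (Y = 1 for positive pairs, Y = 0 for negative pairs):

    L(Y, D_w) = Y \cdot D_w^2 + (1 - Y) \cdot \max(0,\, m - D_w)^2

A minimal PyTorch sketch of that loss, with all names chosen for illustration rather than taken from the paper:

    import torch
    import torch.nn.functional as F

    def contrastive_loss(emb1, emb2, y, margin=1.0):
        """Pairwise contrastive loss; y = 1 for positive pairs, 0 for negative.

        The first term pulls positive pairs together; the second pushes
        negative pairs apart until they are at least `margin` away.
        """
        d = F.pairwise_distance(emb1, emb2)            # D_w in the quote
        pos = y * d.pow(2)                             # attract similar pairs
        neg = (1 - y) * F.relu(margin - d).pow(2)      # repel dissimilar pairs
        return (pos + neg).mean()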
“…Therefore, in the embedding space constructed by supervised contrastive learning, samples with the same label are placed near each other and those with different labels are placed far apart. This approach is an example of a few-shot learning method (Jadon, 2020). Unsupervised contrastive learning incorporates data augmentation.…”
Section: Contrastive Representation Learning (mentioning)
confidence: 99%
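As an illustrative sketch of the label-driven pairing this statement describes (not code from the paper; `make_pairs` and the usage names are hypothetical), pairs sharing a label become positives (y = 1) and all others negatives (y = 0), so the loss above pulls same-label embeddings together and pushes different-label embeddings apart:

    import itertools
    import torch

    def make_pairs(embeddings, labels):
        """Build all (emb_i, emb_j, y) pairs from a labeled batch;
        y = 1 when the two samples share a label, matching the
        convention in the quoted statements."""
        e1, e2, ys = [], [], []
        for i, j in itertools.combinations(range(len(labels)), 2):
            e1.append(embeddings[i])
            e2.append(embeddings[j])
            ys.append(float(labels[i] == labels[j]))
        return torch.stack(e1), torch.stack(e2), torch.tensor(ys)

    # Usage with the contrastive_loss sketch above:
    # e1, e2, y = make_pairs(encoder(batch), batch_labels)
    # loss = contrastive_loss(e1, e2, y)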
“…According to the contrastive loss equation, the first term is supposed to represent similar observations as closely as possible, while the second term is supposed to increase the distance between dissimilar observations (Jadon, 2020).…”
Section: Contrastive Representation Learning (mentioning)
confidence: 99%