2020
DOI: 10.1007/978-981-15-5619-7_25
Improving Siamese Networks for One-Shot Learning Using Kernel-Based Activation Functions

Cited by 23 publications (13 citation statements)
References 4 publications
“…Generative modeling is attractive for many reasons: i) modeling of the latent space: generative models express causal relations; ii) generative models have been used in semi-supervised learning settings to improve classification [3,6,7,8,9].…”
Section: Related Work
confidence: 99%
“…Some max-pooling and dropout layers were applied to avoid overfitting and to reduce the network size. Although we used the U-net in this experiment, we note that the structure of the CNN can be made flexible by introducing more one-shot deep-learning network architecture design tricks; work in this direction can be found in the literature of Koch [14], Vinyals et al. [22], Shaban et al. [20], Chen et al. [2], and Jadon and Srinivasan [11]. An Adam stochastic gradient optimizer with learning-rate decay lr/epochs, initialized to lr = 0.1, is used for training the network with the binary cross-entropy loss function.…”
Section: Network Architecture and Training Details
confidence: 99%
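The training setup quoted above (Adam with a decay rate of lr/epochs, initialized at lr = 0.1, and a binary cross-entropy loss) can be sketched as follows. The `keras_style_lr` schedule is an assumption — one common reading of "decay rate lr/epochs", matching the legacy Keras optimizer `decay` argument — and `epochs = 50` is a placeholder, since the excerpt does not state the epoch count.

```python
import numpy as np

def binary_cross_entropy(y_true, y_pred, eps=1e-7):
    # Mean binary cross-entropy over the batch; clip to avoid log(0).
    y_pred = np.clip(y_pred, eps, 1.0 - eps)
    return float(-np.mean(y_true * np.log(y_pred)
                          + (1.0 - y_true) * np.log(1.0 - y_pred)))

def keras_style_lr(lr0, decay, step):
    # Legacy Keras decay schedule: lr_t = lr0 / (1 + decay * t).
    return lr0 / (1.0 + decay * step)

lr0 = 0.1               # initial learning rate from the excerpt
epochs = 50             # assumed placeholder
decay = lr0 / epochs    # "learning decay rate lr = lr/epochs"
```

With these values the learning rate starts at 0.1 and shrinks smoothly as training steps accumulate, which is the usual intent of such a decay term.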
“…The last layers of these networks are fed to a contrastive loss function layer, which calculates the similarity between the two inputs. The whole idea of using the Siamese architecture [7][5] is not to classify between classes but to learn to discriminate between inputs. So it needs a differentiating form of loss function known as the contrastive…”
Section: B. Siamese Network
confidence: 99%
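The contrastive loss this excerpt refers to is commonly the margin-based form of Hadsell, Chopra, and LeCun: similar pairs are pulled together by their squared distance, while dissimilar pairs are pushed apart up to a margin. A minimal NumPy sketch, assuming `y = 1` marks similar pairs and a margin of 1.0 (label conventions and the margin value vary between papers):

```python
import numpy as np

def contrastive_loss(d, y, margin=1.0):
    # d: Euclidean distances between paired embeddings, shape (batch,)
    # y: 1 for similar pairs, 0 for dissimilar pairs (assumed convention)
    similar = y * d ** 2                                   # pull similar pairs together
    dissimilar = (1 - y) * np.maximum(margin - d, 0.0) ** 2  # push dissimilar pairs apart
    return float(np.mean(similar + dissimilar))
```

Note that, unlike a classification loss, this objective depends only on distances between pairs, which is why Siamese networks can discriminate between inputs from classes never seen in training.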