2022
DOI: 10.1038/s41598-022-20654-1

A conditional GAN-based approach for enhancing transfer learning performance in few-shot HCR tasks

Abstract: Supervised learning restricted to only a few available training samples is called Few-Shot Learning (FSL). FSL is a subarea in which deep learning performance falls short, as building robust deep networks requires large training datasets. Using transfer learning in FSL tasks is an accepted way to avoid the challenge of building new deep models from scratch. The transfer learning methodology borrows the architecture and parameters of a model previously trained on a large-scale dataset and fine-tunes it for low…
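The fine-tuning step the abstract refers to can be sketched in a few lines. This is a minimal illustration only, assuming a torchvision ResNet-50 backbone as a stand-in for the paper's pre-trained model and a hypothetical 28-class few-shot character task, not the authors' exact configuration.

```python
# Minimal transfer-learning sketch: reuse a backbone pre-trained on a large
# dataset and fine-tune it for a small character-recognition task.  ResNet-50,
# the 28-class head, and the training step are illustrative stand-ins.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 28  # hypothetical number of character classes

# Borrow architecture and parameters from a model trained on ImageNet.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)

# Freeze the borrowed parameters; only the new classification head is trained.
for param in model.parameters():
    param.requires_grad = False

# Replace the ImageNet classifier with a head sized for the few-shot task.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One supervised update on a small batch of few-shot samples."""
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```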

Cited by 12 publications (6 citation statements) | References 50 publications
“…Choosing an appropriate loss function is an essential aspect when creating a GAN model consisting of two deep learning models operating together in a competitive manner (Elaraby et al., 2022). We compare the performance of the networks with different loss functions to identify the most effective loss function.…”
Section: Results
confidence: 99%
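As a concrete illustration of that comparison, the sketch below contrasts two adversarial losses that are commonly weighed against each other (standard non-saturating cross-entropy vs. least-squares). Which loss the cited work ultimately selects is not stated in the excerpt, so both functions are given purely as examples.

```python
# Two adversarial losses that might be compared when tuning a GAN; the choice
# made in the cited work is not given here, so both are purely illustrative.
import torch
import torch.nn.functional as F

def bce_gan_losses(d_real_logits, d_fake_logits):
    """Standard (non-saturating) GAN losses computed on raw discriminator logits."""
    d_loss = (F.binary_cross_entropy_with_logits(d_real_logits, torch.ones_like(d_real_logits))
              + F.binary_cross_entropy_with_logits(d_fake_logits, torch.zeros_like(d_fake_logits)))
    g_loss = F.binary_cross_entropy_with_logits(d_fake_logits, torch.ones_like(d_fake_logits))
    return d_loss, g_loss

def lsgan_losses(d_real_logits, d_fake_logits):
    """Least-squares (LSGAN) losses computed on the same logits."""
    d_loss = (d_real_logits - 1).pow(2).mean() + d_fake_logits.pow(2).mean()
    g_loss = (d_fake_logits - 1).pow(2).mean()
    return d_loss, g_loss

# In a training loop the discriminator update uses fake logits computed on
# detached generator samples, while the generator update keeps the graph.
```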
“…To address this, Conditional GANs (CGANs) were proposed. The main idea behind CGANs is to provide both the generator and the discriminator with additional conditional information, usually in the form of a label or some other kind of auxiliary data (Elaraby et al., 2022). Figure 1 presents a general architecture for a GAN and a CGAN.…”
Section: Methods
confidence: 99%
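A minimal sketch of that conditioning idea follows: the class label is embedded and concatenated with the generator's noise vector and with the discriminator's input. The layer sizes, the 28-class label space, and the fully connected architecture are illustrative assumptions, not the configuration used in the cited works.

```python
# Conditional GAN sketch: the label is fed to both the generator and the
# discriminator as auxiliary information, which is the general CGAN idea
# described above.  Dimensions and layers here are illustrative only.
import torch
import torch.nn as nn

NUM_CLASSES, LATENT_DIM, IMG_DIM = 28, 100, 32 * 32  # assumed sizes

class ConditionalGenerator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(LATENT_DIM + NUM_CLASSES, 256), nn.ReLU(),
            nn.Linear(256, IMG_DIM), nn.Tanh(),
        )

    def forward(self, z, labels):
        # Condition the noise vector on the class label.
        return self.net(torch.cat([z, self.label_emb(labels)], dim=1))

class ConditionalDiscriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.label_emb = nn.Embedding(NUM_CLASSES, NUM_CLASSES)
        self.net = nn.Sequential(
            nn.Linear(IMG_DIM + NUM_CLASSES, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1),  # single real/fake logit
        )

    def forward(self, flat_images, labels):
        # The discriminator judges the image against the label it was given.
        return self.net(torch.cat([flat_images, self.label_emb(labels)], dim=1))
```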
“…For the recognition of languages with simple character structures, Elaraby et al. [17] employed the pre-trained DarkNet-53 model for Braille recognition, successfully completing the Braille recognition task.…”
Section: B. Transfer Learning in Character Recognition
confidence: 99%
“…To combat overfitting, we used the K-fold cross-validation approach. This method involves dividing the data into K similarly sized "folds," training the model on K-1 folds, and then testing it on the residual fold [10]. The process is repeated K times to get an overall assessment of the model's performance, with each fold serving as the test set once.…”
Section: 1
confidence: 99%
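The procedure described in that excerpt can be written in a few lines; the estimator and the random data below are placeholders chosen only to keep the sketch self-contained, not the model or dataset of the citing paper.

```python
# K-fold cross-validation as described above: each fold is held out once for
# testing while the model trains on the remaining K-1 folds.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
X, y = rng.random((200, 16)), rng.integers(0, 2, size=200)  # dummy data

scores = []
for train_idx, test_idx in KFold(n_splits=5, shuffle=True, random_state=0).split(X):
    model = LogisticRegression(max_iter=1000)
    model.fit(X[train_idx], y[train_idx])                    # train on K-1 folds
    scores.append(model.score(X[test_idx], y[test_idx]))     # test on the held-out fold

print(f"mean accuracy over 5 folds: {np.mean(scores):.3f}")
```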