2019 International Conference on Computing, Networking and Communications (ICNC)
DOI: 10.1109/iccnc.2019.8685536

Fine-tuning a pre-trained Convolutional Neural Network Model to translate American Sign Language in Real-time

Cited by 21 publications (22 citation statements)
References 9 publications
“…In transfer learning, we first train a base model on a primary dataset/problem, and then we reuse the deep features, or transfer them, to a second target model which will be trained on a target dataset/problem as in [41]. Fine-tuning is the most common approach to transfer learning and improves the generalization ability of the model used [5]. To this end, the weights of the pre-trained CNN models are fine-tuned by continuing the backpropagation operation.…”
Section: Fine-tuning
confidence: 99%
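The fine-tuning procedure described in the statement above (reusing pre-trained CNN weights and continuing backpropagation on the target task) can be sketched in a few lines. The sketch below assumes PyTorch/torchvision and an AlexNet backbone with 29 ASL classes; the framework, learning rate, and layer index are illustrative choices, not details taken from the cited papers.

```python
import torch
import torch.nn as nn
from torchvision import models

# Load a CNN pre-trained on ImageNet (the "base model" in transfer learning).
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final classification layer so it matches the target problem
# (assumed here: 29 ASL classes = 26 letters + space, delete, nothing).
num_classes = 29
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

# Fine-tuning: keep the pre-trained weights as initialization and continue
# backpropagation on the target dataset, typically with a small learning rate.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def fine_tune_step(images, labels):
    """One backpropagation step on a mini-batch of the target dataset."""
    optimizer.zero_grad()
    outputs = model(images)           # forward pass through the fine-tuned CNN
    loss = criterion(outputs, labels)
    loss.backward()                   # continue backpropagation
    optimizer.step()
    return loss.item()
```

A common variant when the target dataset is small is to freeze the earlier convolutional layers and fine-tune only the classifier head.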
“…Each class encompasses 3000 images, with 26 classes corresponding to the 26 letters of the American Sign Language alphabet and other classes allocated for space, delete, and nothing. Dataset images are in RGB format with 200 × 200 pixel dimensions and different variations [38].…”
Section: Methods
confidence: 99%
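For reference, the dataset layout described in the statement above (one folder per class, 3000 images each, 200 × 200 RGB) can be loaded with a few lines of torchvision code; the directory name below is a hypothetical local path to the extracted dataset.

```python
from torchvision import datasets, transforms

# The ASL alphabet dataset is organised as one folder per class
# (26 letters plus "space", "del", and "nothing"), 3000 RGB images each.
transform = transforms.Compose([
    transforms.Resize((200, 200)),   # images are 200 x 200 pixels
    transforms.ToTensor(),
])

# "asl_alphabet_train/" is a hypothetical path to the extracted dataset.
dataset = datasets.ImageFolder("asl_alphabet_train/", transform=transform)
print(len(dataset), "images in", len(dataset.classes), "classes")  # 87000, 29
```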
“…They achieved an accuracy of 94.30%. In the same context, [38] used the same dataset of 87,000 images for classification. They used AlexNet and GoogLeNet models, and their overall training results were 99.39% for AlexNet and 95.52% for GoogLeNet.…”
Section: Related Work
confidence: 99%
“…Custom datasets are used in this approach to make it more accurate and robust compared with models trained on smaller datasets [9]. A fingertip-finder algorithm, which combines the convex hull and K-curvature methods, is used for hand gesture recognition [10].…”
Section: Related Work
confidence: 99%
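The convex hull plus K-curvature combination mentioned in the statement above can be illustrated with a short OpenCV sketch. The binary hand-mask input, the step size k, and the angle threshold are assumptions made for illustration and are not details from [10].

```python
import numpy as np
import cv2

def find_fingertips(mask, k=30, angle_thresh_deg=60.0):
    """Locate fingertip candidates in a binary hand mask by combining the
    convex hull of the hand contour with a K-curvature test."""
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    if not contours:
        return []
    contour = max(contours, key=cv2.contourArea).reshape(-1, 2)
    n = len(contour)

    # Convex hull indices give extremal contour points; fingertips lie on the hull.
    hull_idx = cv2.convexHull(contour.reshape(-1, 1, 2), returnPoints=False).flatten()

    fingertips = []
    for i in hull_idx:
        p = contour[i]
        before = contour[(i - k) % n]   # point k steps back along the contour
        after = contour[(i + k) % n]    # point k steps forward along the contour
        v1, v2 = before - p, after - p
        # K-curvature: the angle between the two vectors; sharp angles mark fingertips.
        cos_a = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
        angle = np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))
        if angle < angle_thresh_deg:
            fingertips.append(tuple(p))
    return fingertips
```

Hull points with a sharp K-curvature angle are kept as fingertip candidates; a practical implementation would additionally merge nearby candidates and discard points near the wrist.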