2022
DOI: 10.3390/s22239107

A Sign Language Recognition System Applied to Deaf-Mute Medical Consultation

Abstract: Deaf-mute people objectively face difficulty in seeking medical treatment. Owing to the shortage of sign language interpreters, most hospitals in China currently cannot interpret sign language, and normal medical care remains a luxury for deaf people. In this paper, we propose a sign language recognition system, Heart-Speaker, applied to the deaf-mute consultation scenario. The system provides a low-cost solution to the difficult problem of treating deaf-mute patients…

Citations: cited by 11 publications (11 citation statements). References: 52 publications (50 reference statements).
“…With 63.18% and 43.78% recognition rates on the WLASL100 (2038 videos) and WLASL300 (5117 videos) datasets, respectively, and 100% test recognition accuracy on LSA64 (3200 videos), it achieves state-of-the-art performance on all the datasets. Kun Xia et al. [36] presented a MobileNet-YOLOv3-based model with a hand-held device for 12 medical signs, using a dataset of 4000 samples, and attained an identification accuracy of 90.77%. Da Silva et al. [37] applied I3D and LSTM to train a dataset with 5000 videos for recognition of Brazilian sign language (Libras).…”
Section: Related Work
confidence: 99%
“…During this phase, transfer learning with a pre-trained CNN, MobileNetV2 [36] in this case, is used to extract features from the video frames. MobileNetV2 achieves similar accuracy with significantly lower resource demands and fewer layers (53).…”
Section: Feature Extraction
confidence: 99%
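
The quoted passage describes frame-level feature extraction via transfer learning with a frozen, pre-trained MobileNetV2. The following is a minimal sketch of that general technique, not the cited authors' code: it assumes TensorFlow/Keras, a 224×224 input size, and a hypothetical `frames` array of preprocessed RGB video frames.

```python
# Minimal sketch of frame-level feature extraction with a pre-trained
# MobileNetV2 (TensorFlow/Keras). Illustrative only, not the cited
# paper's implementation.
import numpy as np
import tensorflow as tf

# Load MobileNetV2 without its classification head; global average pooling
# turns each frame into a single 1280-dimensional feature vector.
backbone = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3),
    include_top=False,
    weights="imagenet",
    pooling="avg",
)
backbone.trainable = False  # transfer learning: keep pre-trained weights frozen

def extract_features(frames: np.ndarray) -> np.ndarray:
    """frames: (num_frames, 224, 224, 3) RGB video frames with values in [0, 255]."""
    x = tf.keras.applications.mobilenet_v2.preprocess_input(
        frames.astype("float32"))  # scale pixels to [-1, 1] as MobileNetV2 expects
    return backbone.predict(x, verbose=0)  # (num_frames, 1280) feature matrix

# Hypothetical usage: 16 dummy frames stand in for a real video clip.
features = extract_features(np.random.randint(0, 256, (16, 224, 224, 3)))
print(features.shape)  # (16, 1280)
```

The per-frame feature vectors would then typically be fed to a temporal model (e.g., an LSTM) for sign classification.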
“…In recent years, there has been great interest in the use of artificial intelligence techniques, specifically CNNs, for sign language recognition [17,18,19]. Below, works related to the study of sign language assisted by artificial intelligence are presented, as well as studies that implement bioinspired retina models.…”
Section: Related Work
confidence: 99%
“…As a result, the accuracy of recognizing the American alphabet is 90.04%, but with a processing time of approximately 4 s per result. In contrast, our system demonstrates both higher accuracy and improved processing time. The authors of [17] introduce an implementation of a real-time processing device based on the YOLOv3 architecture, which recognizes the alphabet and a set of gestures at 48 FPS with an accuracy of 90.77%; however, our proposed model is able to work at 55 FPS. The authors of [29] introduce a gesture classification based on the angles formed by the keypoints obtained from MediaPipe.…”
Section: Development
confidence: 99%
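
The angle-based classification mentioned in [29] can be sketched as follows. This is an illustrative example under stated assumptions, not the cited authors' feature set: it computes the flexion angle at one joint from 2D keypoints laid out per MediaPipe's documented 21-landmark hand model (0 = wrist; 5, 6, 8 = index-finger MCP, PIP, tip); the random landmark values and the single-angle feature are placeholders.

```python
# Sketch of an angle feature from hand keypoints, as used by angle-based
# gesture classifiers built on MediaPipe's 21-landmark hand model.
# The landmark coordinates below are made-up placeholders; in practice
# they would come from MediaPipe Hands.
import numpy as np

def joint_angle(a: np.ndarray, b: np.ndarray, c: np.ndarray) -> float:
    """Angle in degrees at point b, formed by the segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return float(np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))))

# Hypothetical (x, y) keypoints for a single hand, normalized to [0, 1].
landmarks = np.random.rand(21, 2)

# Example feature: flexion angle of the index finger at its PIP joint
# (landmarks 5 = MCP, 6 = PIP, 8 = fingertip in MediaPipe's numbering).
index_pip_angle = joint_angle(landmarks[5], landmarks[6], landmarks[8])
print(f"index PIP angle: {index_pip_angle:.1f} degrees")
```

A full classifier would compute such angles over many joint triples and feed the resulting vector to a lightweight model, which keeps the approach fast enough for real-time use.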