2022
DOI: 10.1016/j.ijcce.2022.01.003
Deep learning based assistive technology on audio visual speech recognition for hearing impaired

Cited by 50 publications (20 citation statements)
References 11 publications
“…The paper has presented the current state of the audio-visual speech recognition area as well as potential research directions. Kumar et al (2022) proposed a deep learning technique-based audiovisual speech recognition system for hearing-impaired people. Hearing-challenged students confront several problems, including a lack of skilled sign language facilitators and the high cost of assistive technology.…”
Section: Literature Reviewsmentioning
confidence: 99%
“…Analyzing a student's performance with an NB classifier is one of the classification methods used to recognize hidden relations between subjects in Sijil Pelajaran Malaysia. The algorithm can be applied to performance classification during the early stage of the 2nd semester, achieving an accuracy of 74% [12]. Work in [13] involved building a recurrent neural network (RNN) for forecasting students' final grades from log information in education systems.…”
Section: A Literature Reviewmentioning
confidence: 99%
“…CNN is now taking the place of traditional MFCC, DCT, and AAM for audio and visual feature extraction. Also, HMM and MSHMM are currently being substituted by Long Short-Term Memory networks (LSTM) or Bidirectional Long Short-Term Memory networks (Bi-LSTM) for time-sequence modeling [83,84,85]. As the audio information is not completely available to the hearing-impaired, the choice of a relevant acoustic feature extraction method is very significant for AVSR.…”
Section: Audio-visual Speech Recognitionmentioning
confidence: 99%
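The last excerpt describes the prevailing AVSR recipe: learned CNN features in place of hand-crafted MFCC/DCT/AAM pipelines, and a Bi-LSTM in place of HMM-based sequence models. A minimal PyTorch sketch of that architecture is below; the layer sizes, fusion strategy, and class count are illustrative assumptions, not the cited authors' actual model.

```python
# Hedged sketch of a CNN + Bi-LSTM audio-visual speech recognition model.
# All dimensions are assumed for illustration only.
import torch
import torch.nn as nn

class AVSRSketch(nn.Module):
    def __init__(self, n_classes=10, audio_dim=40, hidden=64):
        super().__init__()
        # Audio branch: 1-D CNN over per-frame acoustic features
        # (learned filters instead of a fixed MFCC/DCT pipeline).
        self.audio_cnn = nn.Sequential(
            nn.Conv1d(audio_dim, hidden, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        # Visual branch: 2-D CNN over grayscale lip-region frames
        # (learned features instead of an AAM-style appearance model).
        self.visual_cnn = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(8 * 4 * 4, hidden), nn.ReLU(),
        )
        # Bi-LSTM replaces HMM/MSHMM for time-sequence modeling.
        self.bilstm = nn.LSTM(2 * hidden, hidden, batch_first=True,
                              bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, audio, video):
        # audio: (batch, time, audio_dim); video: (batch, time, H, W)
        a = self.audio_cnn(audio.transpose(1, 2)).transpose(1, 2)
        b, t, h, w = video.shape
        v = self.visual_cnn(video.reshape(b * t, 1, h, w)).reshape(b, t, -1)
        fused = torch.cat([a, v], dim=-1)        # early feature fusion
        seq, _ = self.bilstm(fused)
        return self.classifier(seq.mean(dim=1))  # utterance-level logits

model = AVSRSketch()
logits = model(torch.randn(2, 20, 40), torch.randn(2, 20, 32, 32))
print(logits.shape)  # torch.Size([2, 10])
```

Early concatenation of the two feature streams is only one fusion choice; the surveyed literature also uses decision-level fusion, where separate audio and visual sequence models are combined at the output.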