2020 11th International Conference on Computing, Communication and Networking Technologies (ICCCNT)
DOI: 10.1109/icccnt49239.2020.9225392
AI for Accessibility: Virtual Assistant for Hearing Impaired

Cited by 10 publications (10 citation statements). References 12 publications.
“…
Method                  Operating System  Sign Language  Scenario
Liang et al [117]       Windows desktop   British        Dementia screening
Zhou et al [118]        iOS               Hong Kong      Translation
Ozarkar et al [119]     Android           Indian         Translation
Joy et al [120]         Android           Indian         Learning
Paudyal et al [121]     Android           American       Learning
Luccio et al [122]      Android           Multiple       Learning
Chaikaew et al [123]    Android, iOS      Thai           Learning
Ku et al [124]          -                 American       Translation
Potamianos et al [125]  -                 Greek          Learning
Lee et al [126]         -                 Korean         Translation
Schioppo et al [127]    -                 American       Learning
Bansal et al [128]      -                 American       Learning
Quandt et al [129]      -                 American       Learning
…”
Section: Methods, Operating System, Sign Language, Scenario
confidence: 99%
“…Moreover, the application does not run in real-time. On the other hand, Ozarkar et al. in [119] implemented a smartphone application consisting of three modules. The sound classification module detected and classified input sounds and alerted the user through vibrations.…”
Section: Applications
confidence: 99%
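The sound-classification module described above can be sketched as a simple loop: label each incoming audio frame, and trigger a vibration alert when the label matters to the user. This is an illustrative sketch only, not the authors' code; the label set, the stand-in classifier, and the `vibrate` callback are all hypothetical.

```python
# Illustrative sketch (not from the paper): classify audio frames and
# alert the user via vibration for selected sounds.
ALERT_SOUNDS = {"doorbell", "alarm", "car_horn"}  # hypothetical label set

def classify_frame(frame):
    """Stand-in classifier: returns the label with the highest score.
    A real app would run a trained audio model here instead."""
    return max(frame, key=frame.get)

def process(frames, vibrate):
    """Label each frame; call vibrate() for alert-worthy sounds."""
    alerts = []
    for frame in frames:
        label = classify_frame(frame)
        if label in ALERT_SOUNDS:
            vibrate(label)  # on Android this would use the Vibrator service
            alerts.append(label)
    return alerts

# Toy frames: per-class scores standing in for model outputs.
frames = [{"speech": 0.7, "doorbell": 0.2}, {"speech": 0.1, "alarm": 0.8}]
print(process(frames, vibrate=lambda lbl: None))  # → ['alarm']
```

The design point is the decoupling: the classifier can be swapped (or retrained) without touching the alerting logic.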
“…After developing and training the CNN model, the best-performing Keras model was converted to TensorFlow.js for deployment. Other examples of work that applies transfer learning include [89], which performed transfer learning on InceptionV3, ResNet50, and MobileNet, and [32], which did so on PoseNet.…”
Section: Customizing Existing Models Using Transfer Learning
confidence: 99%
“…An object-based sound feedback system was developed in [85] that uses the SSD-MobileNetV2 object detection algorithm to assist visually impaired individuals in comprehending their environment via sound production. In [89], for Indian sign language recognition, transfer learning was used where pre-trained MobileNet model output features were fed to a KNN classifier to identify actions and predict respective words.…”
Section: Classification and Detection Apps
confidence: 99%
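The transfer-learning approach attributed to [89] — feeding a pre-trained network's output features to a KNN classifier — can be sketched in a few lines. This is a minimal illustration, not the cited implementation: the 4-dimensional vectors below are hypothetical stand-ins for MobileNet embeddings (which are far larger), and the labels are invented.

```python
# Sketch: k-nearest-neighbours over fixed feature vectors, standing in
# for "pre-trained MobileNet features fed to a KNN classifier".
from collections import Counter
import math

def knn_predict(train_feats, train_labels, query, k=3):
    """Return the majority label among the k nearest training vectors."""
    dists = sorted(
        (math.dist(f, query), lbl) for f, lbl in zip(train_feats, train_labels)
    )
    top = [lbl for _, lbl in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical 4-d embeddings for two signs; real MobileNet embeddings
# are much higher-dimensional.
feats = [[0.9, 0.1, 0.0, 0.2], [0.8, 0.2, 0.1, 0.1],
         [0.1, 0.9, 0.7, 0.0], [0.0, 0.8, 0.9, 0.1]]
labels = ["hello", "hello", "thanks", "thanks"]

print(knn_predict(feats, labels, [0.85, 0.15, 0.05, 0.15]))  # → hello
```

The appeal of this pattern on mobile is that only the frozen feature extractor runs as a neural network; the KNN head needs no training loop and can be updated by simply adding labeled feature vectors.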
“…Furthermore, the programme is not real-time. In contrast, Ozarkar et al. (Ozarkar-Chetwani et al., 2020) developed a three-module smartphone application. The sound classification module recognised and classified input sounds, and vibrations notified the user.…”
Section: Related Studies
confidence: 99%