2020 International Conference on Information Science and Communication Technology (ICISCT)
DOI: 10.1109/icisct49550.2020.9080028
Two-way Smart Communication System for Deaf & Dumb and Normal People

Cited by 9 publications (4 citation statements)
References 6 publications
“…The system determines the sensor values and uses the threshold to determine which gesture it is. To validate the gesture, a probabilistic model is used; a very inefficient method was used in [35], [45] and [46], where the gestures were recorded manually. The user has to make the gesture and select the alphabet.…”
Section: Results (mentioning)
confidence: 99%
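
The threshold matching described in this excerpt can be illustrated with a short sketch. The sensor names, value ranges, and gestures below are assumptions for illustration only and are not taken from the cited paper.

# Hypothetical threshold-based gesture matching over flex-sensor readings.
# Gesture ranges are illustrative assumptions, not values from the paper.
GESTURE_THRESHOLDS = {
    "A": {"thumb": (700, 900), "index": (200, 400), "middle": (200, 400)},
    "B": {"thumb": (200, 400), "index": (700, 900), "middle": (700, 900)},
}

def classify(reading):
    """Return the first gesture whose per-sensor ranges all contain the reading."""
    for gesture, limits in GESTURE_THRESHOLDS.items():
        if all(lo <= reading[sensor] <= hi for sensor, (lo, hi) in limits.items()):
            return gesture
    return None  # reading falls outside every gesture's thresholds

print(classify({"thumb": 810, "index": 310, "middle": 295}))  # -> "A"
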
“…Figure 5 shows which dataset is used the most and which the least: signal-based EMG data has been used only twice, in Khan et al. (2020b) and Khan et al. (2020a), whereas datasets containing images of signs were used four times, in Halim & Abbas (2014), Kanwal et al. (2014), Nasir et al. (2014) and Imran et al. (2021a). The most used type is the character-based dataset, in Chandio et al. (2020), Naseem et al. (2019), Sagheer et al. (2010), Ahmad et al. (2017), Sami (2014), Husnain et al. (2019), Gul et al. (2020), Arafat & Iqbal (2020) and Ahmed et al. (2017). Also, in Fig.…”
Section: Results (mentioning)
confidence: 99%
“…The study is either published in a journal or a conference. From Table 3, we can see that SVM (Chandio et al., 2020; Imran et al., 2021a; Sagheer et al., 2010; Ahmad et al., 2017; Khan et al., 2020b; Ahmed et al., 2017; Imran et al., 2021b) and Neural Networks (Chandio et al., 2020; Naseem et al., 2019; Ahmad et al., 2007; Arafat & Iqbal, 2020; Sagheer et al., 2009; Naz et al., 2015; Ul-Hasan et al., 2013) are the classifiers most commonly used by researchers for the detection of Urdu Sign Language; apart from these two, each of the remaining classifiers was used only once, i.e., DTW (Halim & Abbas, 2014) and HMM (Gul et al., 2020).…”
Section: Results (mentioning)
confidence: 99%
“…The definition applauds a modern form of communication by integrating the recognition of hand motion with speech transition. They developed an Android app for speech/text conversion with the aid of the Google API, with many required functions such as emergency calls and position monitoring for treatment purposes (Gul et al., 2020). Nasir et al. (2014) propose a voice-interpreting artificial sign language system that starts by recording a 3D video stream with Kinect and then focuses on the joints of interest in the human skeleton.…”
Section: Related Work (mentioning)
confidence: 99%
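
A minimal sketch of the speech-to-text leg mentioned in this excerpt, assuming the Python speech_recognition package as a stand-in for the Google speech API used by the cited Android app; the original implementation is not shown in this report.

# Assumed stand-in: the speech_recognition package, which wraps Google's
# Web Speech API (microphone capture requires PyAudio to be installed).
import speech_recognition as sr

def speech_to_text():
    recognizer = sr.Recognizer()
    with sr.Microphone() as source:                # capture one utterance from the microphone
        recognizer.adjust_for_ambient_noise(source)
        audio = recognizer.listen(source)
    try:
        return recognizer.recognize_google(audio)  # send the audio to Google's recognizer
    except sr.UnknownValueError:
        return ""                                  # speech could not be transcribed

if __name__ == "__main__":
    print(speech_to_text())
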