2020
DOI: 10.3390/s20216256
Sensor Fusion of Motion-Based Sign Language Interpretation with Deep Learning

Abstract: Sign language was designed to allow hearing-impaired people to interact with others. Nonetheless, knowledge of sign language is uncommon in society, which leads to a communication barrier with the hearing-impaired community. Many studies of sign language recognition utilizing computer vision (CV) have been conducted worldwide to reduce such barriers. However, this approach is restricted by the visual angle and highly affected by environmental factors. In addition, CV usually involves the use of machine learning…

Cited by 19 publications (12 citation statements)
References 19 publications
“…They can be divided into two groups. First, the sensor-based group [15][16][17][18][19][20] includes devices worn on the users' hands, such as gloves, EMG sensors, cables, accelerometers, touch sensors, and flexion sensors. The degree of flexion of the fingers and the fingers' motion are used as the extracted features.…”
Section: Related Work
confidence: 99%
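As a rough illustration of the sensor-based approach this statement describes, the sketch below builds the two feature families it names (degree of finger flexion and finger motion) from glove readings. It is not from the cited paper; the sensor layout, normalization, and sampling interval are all assumptions.

```python
import numpy as np

def glove_features(flex_raw, accel, dt=0.01):
    """Build a per-frame feature vector from hypothetical glove readings.

    flex_raw : (T, 5) raw flexion-sensor readings, one column per finger
    accel    : (T, 3) accelerometer readings capturing hand motion
    dt       : assumed sampling interval in seconds
    """
    # Degree of finger flexion: normalize raw readings to [0, 1]
    # (min/max stand in for per-sensor calibration constants).
    flex_min, flex_max = flex_raw.min(axis=0), flex_raw.max(axis=0)
    flexion = (flex_raw - flex_min) / (flex_max - flex_min + 1e-8)

    # Finger motion: first-order difference of flexion over time.
    motion = np.diff(flexion, axis=0, prepend=flexion[:1]) / dt

    # Per-frame feature vector: flexion, motion, and raw acceleration.
    return np.concatenate([flexion, motion, accel], axis=1)  # (T, 13)
```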
“…In the final stage of feature extraction, the four features consist of the spatial-temporal body parts and hand relationship patterns (H_1(t)), the spatial-temporal finger joint angle patterns (H_2(t)), the spatial-temporal double-hand relationship patterns (H_3(t)), and the spatial-temporal 3D hand motion trajectory patterns (H_4(t)). These features are concatenated into one-dimensional data by means of a concatenation technique, of which the equations are shown in Equation (15). The final feature extraction in terms of spatial-temporal patterns (X(t)) is the input of a stacked BiLSTM in the classification process.…”
Section: Spatial-Temporal 3D Hand Motion Trajectory Patterns
confidence: 99%
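A minimal sketch of the pipeline this statement outlines: the four feature streams H_1(t)..H_4(t) are concatenated along the feature axis into X(t) and fed to a stacked BiLSTM for classification. The feature dimensions, layer sizes, and class count here are assumptions, not values from the cited paper.

```python
import torch
import torch.nn as nn

class SignClassifier(nn.Module):
    """Concatenate four per-frame feature streams into X(t), then
    classify with a two-layer bidirectional LSTM (a stacked BiLSTM)."""

    def __init__(self, dims=(32, 16, 16, 24), hidden=128, num_classes=30):
        super().__init__()
        self.bilstm = nn.LSTM(
            input_size=sum(dims), hidden_size=hidden,
            num_layers=2, batch_first=True, bidirectional=True,
        )
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, h1, h2, h3, h4):
        # Equation (15)-style concatenation along the feature axis:
        # X(t) = [H_1(t); H_2(t); H_3(t); H_4(t)]
        x = torch.cat([h1, h2, h3, h4], dim=-1)  # (batch, T, sum(dims))
        out, _ = self.bilstm(x)                  # (batch, T, 2*hidden)
        return self.head(out[:, -1])             # classify from last step
```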
“…Thirdly, aside from the aforementioned, it is recommended that two technical aspects be employed: using a pair of gloves for data collection, as stated in [85], [86], [87], and using a combination of two gloves, rather than a wide range of hand gestures, which can be added [88], [26].…”
Section: Recommendations Related to Researchers
confidence: 99%
“…Figure 7 shows the LSTM internal architecture. f_t is the forget gate acting on the previous cell state C_{t−1}; σ is the logistic sigmoid non-linear activation function [Lee et al., 2020], and σ is applied to C_{t−1} to determine which parts of the retrieved LSTM memory should be kept. tanh is then applied to map the value between −1 and 1 (derived from the second sigmoid function), with the following parameters: C_t, a new cell state of the LSTM memory, and x_t, a new input to the LSTM memory.…”
Section: LSTM
confidence: 99%
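To make the gate description above concrete, here is a single LSTM step written out with the standard update equations: the forget gate f_t decides how much of C_{t−1} to keep, the sigmoid gates squash to (0, 1), and tanh produces the candidate values in (−1, 1). The stacked weight layout (W, U, b indexed by gate) is an assumed convention for the sketch, not the cited paper's notation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM step. W, U, b hold the input weights, recurrent weights,
    and biases for the four gates (f, i, o, g) stacked along axis 0."""
    f_t = sigmoid(W[0] @ x_t + U[0] @ h_prev + b[0])  # forget gate on C_{t-1}
    i_t = sigmoid(W[1] @ x_t + U[1] @ h_prev + b[1])  # input gate
    o_t = sigmoid(W[2] @ x_t + U[2] @ h_prev + b[2])  # output gate
    g_t = np.tanh(W[3] @ x_t + U[3] @ h_prev + b[3])  # candidate in (-1, 1)

    c_t = f_t * c_prev + i_t * g_t  # new cell state C_t
    h_t = o_t * np.tanh(c_t)        # new hidden state
    return h_t, c_t
```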