The 25th Annual International Conference on Mobile Computing and Networking 2019
DOI: 10.1145/3300061.3300117
SignSpeaker

Cited by 65 publications (11 citation statements)
References 24 publications
“…Combined with a Bi-LSTM, MyoSign achieves a classification accuracy of 93.7% for 100 different ASL words. More recently, SignSpeaker [19] has been proposed to capture hand gestures using a smartwatch platform for sign language translation. It captures IMU sensor signals from the users' smartwatch and analyzes the data using an LSTM model.…”
Section: Related Work
confidence: 99%
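The statement above describes SignSpeaker's pipeline only at a high level: smartwatch IMU signals fed to an LSTM. As a minimal sketch of that kind of model, not the authors' released code, the following PyTorch snippet assumes illustrative values for channel count, window length, and vocabulary size.

```python
# Minimal sketch, not the authors' implementation: word-level sign
# classification from smartwatch IMU windows with an LSTM.
# Channel count, window length, and vocabulary size are assumptions.
import torch
import torch.nn as nn

class IMUSignClassifier(nn.Module):
    def __init__(self, n_channels=6, hidden=128, n_words=100):
        super().__init__()
        # 6 channels per time step: 3-axis accelerometer + 3-axis gyroscope
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, n_words)  # word logits

    def forward(self, x):            # x: (batch, time, channels)
        out, _ = self.lstm(x)        # out: (batch, time, 2*hidden)
        return self.fc(out[:, -1])   # classify from the final time step

# Example: a batch of 4 two-second windows sampled at 50 Hz
logits = IMUSignClassifier()(torch.randn(4, 100, 6))
print(logits.shape)  # torch.Size([4, 100])
```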
“…The wearable end-to-end gadgets and sensors for sign language translation (Fang et al., 2017), (Hou et al., 2019)…”
Section: Sign Language-to-Text / Language Translation
confidence: 99%
“…It is used by (Fang et al., 2017), (Jin et al., 2021), (Wang et al., 2018), (Cihan Camgoz et al., 2017), and (Hou et al., 2019). CTC was proposed by Graves et al. and is one of the most often used methods for developing a sequence-to-sequence model [12], with comparisons across speech and optical character recognition tasks (Jin et al., 2021).…”
Section: Connectionist Temporal Classification (CTC)
confidence: 99%
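CTC appears in the statement above as the standard way to train such sequence models without frame-level alignment. Below is a hedged sketch of a CTC training objective using PyTorch's nn.CTCLoss; the frame count, batch size, and label inventory are assumptions chosen for illustration, not values from the cited papers.

```python
# Illustrative sketch of a CTC training objective (Graves et al.), the
# alignment-free loss the quoted statement refers to. All shapes below are
# assumptions, not values taken from the cited papers.
import torch
import torch.nn as nn

T, N, C = 120, 4, 30   # input frames, batch size, label set size (index 0 = blank)
S = 8                  # length of each target gloss/word sequence

# Per-frame log-probabilities over the label set, e.g. from an LSTM encoder
log_probs = torch.randn(T, N, C, requires_grad=True).log_softmax(2)
targets = torch.randint(1, C, (N, S))                  # target label sequences
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), S, dtype=torch.long)

# CTC sums over all monotonic alignments between the T frames and the S
# targets, so no frame-level annotation is required.
loss = nn.CTCLoss(blank=0)(log_probs, targets, input_lengths, target_lengths)
loss.backward()
print(float(loss))
```

In an actual training loop, the random per-frame log-probabilities above would be replaced by the encoder's outputs; the objective itself stays the same alignment-free loss the quoted papers attribute to Graves et al.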