TSPNet-HF: A Hand/Face TSPNet Method for Sign Language Translation
2022
DOI: 10.1007/978-3-031-22419-5_26

Cited by 1 publication (5 citation statements)
References 11 publications
“…Ten papers use 3D CNNs for feature extraction [35,66,67,76,82,83,86,88,96,99]. These networks are able to extract spatio-temporal features, leveraging the temporal relations between neighboring frames in video data.…”
Section: Extraction Methods
confidence: 99%
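The statement above turns on why 3D CNNs suit video: the kernel spans neighbouring frames, so each output mixes temporal and spatial context. The following is a minimal sketch of that idea, assuming nothing about the cited models; the tensor shapes and averaging kernel are purely illustrative, and a real extractor would use many learned kernels.

```python
import numpy as np

def conv3d_valid(video, kernel):
    """Naive 'valid' 3D convolution over a (T, H, W) video volume.

    Unlike a 2D convolution applied frame by frame, the kernel spans
    neighbouring frames, so each output value blends information
    across time as well as space -- the spatio-temporal coupling
    that 3D CNNs exploit for video features.
    """
    t, h, w = video.shape
    kt, kh, kw = kernel.shape
    out = np.zeros((t - kt + 1, h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(video[i:i + kt, j:j + kh, k:k + kw] * kernel)
    return out

# A tiny 8-frame, 16x16 "video" and a 3x3x3 spatio-temporal kernel.
video = np.random.rand(8, 16, 16)
kernel = np.ones((3, 3, 3)) / 27.0  # simple spatio-temporal average
features = conv3d_valid(video, kernel)
print(features.shape)  # (6, 14, 14): the time axis shrinks too, since the kernel spans frames
```

Note that the output is shorter along the time axis as well as the spatial ones, which is the signature of a genuinely 3D kernel rather than per-frame 2D filtering.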
“…A simple approach to feature extraction is to consider full video frames as inputs. Performing further pre-processing of the visual information to target hands, face and pose information separately (referred to as a multi-cue approach) improves the performance of SLT models [36,59,65,75,80,86,96]. Zheng et al [75] show through qualitative analysis that adding facial feature extraction improves translation accuracy in utterances where facial expressions are used.…”
Section: Multi-cue Approaches
confidence: 99%
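The multi-cue idea described above can be sketched as separate feature streams over the full frame, a hand crop, and a face crop, fused per frame. This is a toy illustration only: the crop boxes stand in for hypothetical detector outputs, and mean pooling replaces the trained per-cue networks the cited systems actually use.

```python
import numpy as np

def mean_pool_features(clip):
    """Stand-in 'feature extractor': mean-pool each frame of a (T, H, W) clip.

    In a real multi-cue SLT model each stream would be a trained CNN;
    mean pooling keeps this sketch self-contained.
    """
    return clip.mean(axis=(1, 2))  # one scalar per frame

def multi_cue_features(frames, hand_box, face_box):
    """Fuse full-frame, hand-crop, and face-crop cues per frame.

    Boxes are (top, bottom, left, right) pixel coordinates -- hypothetical
    detector outputs, fixed here for simplicity.
    """
    ht, hb, hl, hr = hand_box
    ft, fb, fl, fr = face_box
    full = mean_pool_features(frames)
    hand = mean_pool_features(frames[:, ht:hb, hl:hr])
    face = mean_pool_features(frames[:, ft:fb, fl:fr])
    # Stack the three cue streams into one feature vector per frame.
    return np.stack([full, hand, face], axis=1)

frames = np.random.rand(10, 64, 64)  # 10-frame clip
feats = multi_cue_features(frames, (40, 56, 8, 24), (4, 20, 24, 40))
print(feats.shape)  # (10, 3): one feature per cue stream, per frame
```

Keeping the cues as separate streams until fusion is what lets a downstream model weight facial information independently, which is consistent with the qualitative finding attributed to Zheng et al. above.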