2015
DOI: 10.1007/978-3-319-23234-8_65

Modalities Combination for Italian Sign Language Extraction and Recognition

Cited by 6 publications (9 citation statements)
References 24 publications
“…Human poses are important cues for video analysis in a variety of tasks such as activity/action recognition [9,10], multi-object detection [11], and sign language processing and recognition [12]. Generally, HPE approaches can be divided into two main groups: traditional HPE approaches and deep learning-based ones.…”
Section: Related Work (mentioning)
confidence: 99%
“…Liang et al. [161] instead use the joint modality to implement a similar scheme and decide on the dominant hand in motion. The CD approach presented by Seddik et al. [159] uses the joint inputs to segment both the streams and the learning population into right-, left- and bi-handed actions (see Fig. 1.9), thus allowing classifier specialization in later steps.…”
Section: Unimodal Temporal Segmentation Approaches (mentioning)
confidence: 99%
“…Fig. 1.9 Example temporal segmentation results: (a) and (b) left and right hand motion curves, (c) TS results and (d) the ground truth [159]…”
mentioning
confidence: 99%
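The two excerpts above describe the cited temporal segmentation idea: hand-joint motion is used to split the streams into right-, left- and bi-handed actions before classifier specialization. The Python sketch below is a rough illustration of that general scheme only, not the authors' code; the displacement threshold, the (T, 2, 3) left/right hand-joint layout and all function names are assumptions.

```python
# Minimal sketch (not the authors' code): label frames as left-, right- or
# bi-handed motion from hand-joint positions, then merge consecutive labels
# into temporal segments. Threshold and joint layout are assumptions.
import numpy as np

MOTION_THRESHOLD = 0.02  # hypothetical per-frame displacement threshold

def hand_motion(joints):
    """joints: (T, 2, 3) array of left/right hand positions over T frames.
    Returns per-frame displacement magnitude for each hand, shape (T-1, 2)."""
    return np.linalg.norm(np.diff(joints, axis=0), axis=2)

def label_frames(joints):
    """Assign 'left', 'right', 'bi' or 'idle' to every frame transition."""
    motion = hand_motion(joints)
    left = motion[:, 0] > MOTION_THRESHOLD
    right = motion[:, 1] > MOTION_THRESHOLD
    labels = np.full(len(motion), "idle", dtype=object)
    labels[left & ~right] = "left"
    labels[right & ~left] = "right"
    labels[left & right] = "bi"
    return labels

def segments(labels):
    """Merge runs of identical labels into (start, end, label) segments."""
    out, start = [], 0
    for t in range(1, len(labels) + 1):
        if t == len(labels) or labels[t] != labels[start]:
            out.append((start, t, labels[start]))
            start = t
    return out

# Toy usage: random trajectories stand in for tracked hand joints.
demo = np.cumsum(np.random.randn(100, 2, 3) * 0.01, axis=0)
print(segments(label_frames(demo))[:5])
```

Segments labelled "left", "right" or "bi" could then be routed to specialized classifiers, which is the kind of specialization the excerpt refers to.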
“…We exploit here the efficiency of joint-relative descriptors and the high linear separability of sparse representations. We extend our previous research [20, 21] as follows: (i) compared to [20], we improve the joint normalisation and the RGB and depth feature binning, and use richer BoVW representations. Furthermore, (ii) we evaluate the local–global approach of [21] within multiple fusion configurations using a variety of feature concatenations.…”
Section: Introduction (mentioning)
confidence: 99%
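The excerpt above sketches a bag-of-visual-words (BoVW) pipeline with multimodal fusion by feature concatenation. The snippet below is an assumption-laden illustration of that general scheme, not the paper's implementation: the codebook size, descriptor shapes, LinearSVC classifier and all helper names (fit_codebook, bovw_histogram, fuse) are hypothetical.

```python
# Minimal sketch: per-modality BoVW histograms built from local descriptors,
# then fused by concatenation before a linear classifier. Settings are
# illustrative assumptions, not the cited paper's configuration.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

CODEBOOK_SIZE = 64  # hypothetical number of visual words per modality
MODALITIES = ("rgb", "depth", "joint")

def fit_codebook(descriptor_sets):
    """descriptor_sets: list of (n_i, d) arrays of local descriptors."""
    return KMeans(n_clusters=CODEBOOK_SIZE, n_init=10).fit(np.vstack(descriptor_sets))

def bovw_histogram(codebook, descriptors):
    """L1-normalised histogram of visual-word assignments for one sample."""
    words = codebook.predict(descriptors)
    hist = np.bincount(words, minlength=CODEBOOK_SIZE).astype(float)
    return hist / max(hist.sum(), 1.0)

def fuse(histograms):
    """Early fusion: concatenate the per-modality BoVW histograms."""
    return np.concatenate(histograms)

# Toy usage: random descriptors stand in for RGB, depth and joint features.
rng = np.random.default_rng(0)
samples = [{m: rng.normal(size=(50, 16)) for m in MODALITIES} for _ in range(20)]
books = {m: fit_codebook([s[m] for s in samples]) for m in MODALITIES}
X = np.array([fuse([bovw_histogram(books[m], s[m]) for m in MODALITIES])
              for s in samples])
y = rng.integers(0, 2, size=len(samples))  # dummy class labels
clf = LinearSVC().fit(X, y)
```

Swapping which histograms are concatenated (or classifying each modality separately and merging scores) gives the kind of "multiple fusion configurations" the excerpt mentions.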