2008 8th IEEE International Conference on Automatic Face & Gesture Recognition
DOI: 10.1109/afgr.2008.4813462
Automatic hand trajectory segmentation and phoneme transcription for sign language

Abstract: This paper presents an automatic approach to segment 3-D hand trajectories and transcribe phonemes based on them, as a step towards recognizing American sign language (ASL). We first apply a segmentation algorithm which detects minimal velocity and maximal change of directional angle to segment the hand motion trajectory of naturally signed sentences. This yields oversegmented trajectories, which are further processed by a trained naïve Bayesian detector to identify true segmented points and eliminate false al…
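The abstract's first stage — flagging frames of minimal velocity or maximal change of directional angle as candidate segment boundaries — can be sketched as below. This is an illustrative reconstruction, not the paper's implementation: the thresholds (`vel_frac`, `angle_thresh`) and the simple finite-difference features are assumptions.

```python
import math

def candidate_boundaries(points, dt=1.0, vel_frac=0.3, angle_thresh=math.radians(60)):
    """Flag candidate segment boundaries on a 3-D hand trajectory.

    A frame is a candidate when its speed is a local minimum well below
    the mean speed, or when the motion direction turns sharply between
    consecutive velocity vectors.  Threshold values are illustrative.
    """
    # finite-difference velocity vectors between consecutive samples
    vel = [tuple((b - a) / dt for a, b in zip(p, q))
           for p, q in zip(points, points[1:])]
    speed = [math.sqrt(sum(c * c for c in v)) for v in vel]
    mean_speed = sum(speed) / len(speed)

    candidates = []
    for i in range(1, len(speed) - 1):
        # local speed minimum well below the average -> likely pause
        if (speed[i] < speed[i - 1] and speed[i] < speed[i + 1]
                and speed[i] < vel_frac * mean_speed):
            candidates.append(i)
            continue
        # large turn angle between successive velocity vectors
        dot = sum(a * b for a, b in zip(vel[i - 1], vel[i]))
        denom = speed[i - 1] * speed[i]
        if denom > 1e-9:
            angle = math.acos(max(-1.0, min(1.0, dot / denom)))
            if angle > angle_thresh:
                candidates.append(i)
    return candidates
```

As the abstract notes, such detectors oversegment; the second stage (a trained naïve Bayesian detector) then prunes false boundary points.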

Cited by 24 publications (12 citation statements)
References 12 publications (11 reference statements)
“…Yin et al. (2009) used an accelerometer glove to gather information about a sign; they then applied discriminative feature extraction and 'similar state tying' algorithms to decide sub-unit-level segmentation of the data. Kong and Ranganath (2008) and Han et al. (2009), by contrast, looked at automatic segmentation of sign motion into sub-units, using discontinuities in the trajectory and acceleration to indicate where segments begin and end. These were then clustered into a codebook of possible exemplar trajectories using either Dynamic Time Warping (DTW) distance measures (Han et al.) or Principal Component Analysis (PCA) (Kong and Ranganath). Traditional sign recognition systems use tracking and data-driven approaches (Han et al., 2009; Yin et al., 2009).…”
Section: Introduction
confidence: 99%
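The DTW distance measure mentioned in the statement above, used to cluster trajectory segments into a codebook of exemplars, can be sketched as a minimal O(n·m) dynamic program over Euclidean local costs. This is a generic DTW sketch, not the cited authors' exact formulation.

```python
import math

def dtw_distance(a, b):
    """Dynamic Time Warping distance between two trajectories.

    Each trajectory is a list of 3-D points; the local cost is the
    Euclidean distance between matched points.  Warping lets sequences
    of different lengths or speeds compare at near-zero cost.
    """
    n, m = len(a), len(b)
    INF = float("inf")
    # D[i][j] = best cumulative cost aligning a[:i] with b[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]
```

A clustering step would then group segments whose pairwise DTW distance falls below a chosen radius, keeping one exemplar per cluster as the codebook entry.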
“…Then, using vector-quantized histograms of motion directions, the authors were able to successfully identify basic gestures, such as drawing a square or a circle on a plane, in real time. In simpler contexts with controlled motions, other authors have also used different representations, such as zero-velocity crossing [21], and velocity and direction [22]. However, these individual representations do not provide enough information.…”
Section: Introduction
confidence: 99%
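The direction-histogram representation referred to above can be sketched as follows: each inter-frame displacement votes for one angular sector, and the normalized histogram summarizes the gesture's shape. The bin count and the 2-D projection are assumptions, not the cited paper's settings.

```python
import math

def direction_histogram(points, bins=8):
    """Normalized histogram of planar motion directions.

    Each inter-frame displacement is assigned to one of `bins` angular
    sectors, with sector 0 centred on the +x direction.  A square drawn
    on a plane, for example, puts equal mass in four opposite sectors.
    """
    width = 2 * math.pi / bins
    hist = [0] * bins
    total = 0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        dx, dy = x1 - x0, y1 - y0
        if dx == 0 and dy == 0:
            continue  # stationary frame: no direction to vote for
        angle = math.atan2(dy, dx) % (2 * math.pi)
        # shift by half a sector so bin 0 is centred on angle 0
        hist[int((angle + width / 2) / width) % bins] += 1
        total += 1
    return [h / total for h in hist] if total else hist
```

Vector quantization would then map each histogram to the nearest entry in a learned codebook, giving a discrete symbol per gesture.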
“…Quite similarly, Kong and Ranganath [11] perform a segmentation of data provided by a Polhemus tracker. A Naive Bayes classifier, trained with manual annotation, is used to find false boundary points.…”
Section: Related Work
confidence: 99%
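The Naive Bayes boundary filter described above, trained on manual annotation to reject false boundary points, can be sketched with a minimal Gaussian naive Bayes. The features (e.g. local speed and turn angle at a candidate point) and the toy training data are assumptions; the original annotated data is not public.

```python
import math

def fit_gnb(X, y):
    """Fit a Gaussian naive Bayes: per-class priors, means, variances.

    X is a list of feature tuples, y the matching class labels.  A small
    variance floor keeps log-densities finite on constant features.
    """
    model = {}
    for c in set(y):
        rows = [x for x, lab in zip(X, y) if lab == c]
        means = [sum(col) / len(rows) for col in zip(*rows)]
        var = [sum((v - m) ** 2 for v in col) / len(rows) + 1e-6
               for col, m in zip(zip(*rows), means)]
        model[c] = (len(rows) / len(y), means, var)
    return model

def predict_gnb(model, x):
    """Return the class with the highest posterior log-probability."""
    best, best_lp = None, float("-inf")
    for c, (prior, means, var) in model.items():
        lp = math.log(prior)
        for v, m, s2 in zip(x, means, var):
            # independent Gaussian log-likelihood per feature
            lp += -0.5 * math.log(2 * math.pi * s2) - (v - m) ** 2 / (2 * s2)
        if lp > best_lp:
            best_lp, best = lp, c
    return best
```

In the boundary-filtering setting, class 1 would mark a true segmentation point (low speed, sharp turn) and class 0 a false alarm; candidates classified as 0 are discarded.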