2009
DOI: 10.1016/j.patrec.2008.12.010
Modelling and segmenting subunits for sign language recognition based on hand motion analysis

Cited by 79 publications (53 citation statements)
References 9 publications
“…We can see that the best results for subjects 21–25 were 76% with a 0.70 SIFT threshold, 80% with a 0.70 SIFT threshold, 66% with a 0.70 SIFT threshold, 68% with 0.65 and 0.70 SIFT thresholds, and 62% with a 0.75 SIFT threshold for the five subjects, respectively. Since the signers in this test set (subjects 21–25) were different from those in the training data set and the signature library, this experiment yielded low classification rates. Furthermore, when we used SIFT with the unconstrained system and complex natural backgrounds, keypoints might be matched incorrectly, as shown in Figure 3g.…”
Section: Results
confidence: 99%
“…This allows the system to recognise hand sign words that have similar gestures. However, when we tested our algorithm without any constraint on five signers (subjects 21–25), who were asked to stand in front of various complex backgrounds and could wear any shirt, the best correct classification rate averaged around 70–80%.…”
Section: Discussion
confidence: 99%
“…Han et al [8] perform segmentation of the data based on linguistic rules, such as changes in hand motion and discontinuities around the subunit boundaries.…”
Section: Related Work
confidence: 99%
“…Yin et al [110] used an accelerometer glove to gather information about a sign before applying discriminative feature extraction and similar state-tying algorithms to decide sub-unit-level segmentation of the data. Kong et al [58] and Han et al [41] have looked at automatic segmentation of the motions of sign into sub-units, using discontinuities in the trajectory and acceleration to indicate where segments begin and end; these are then clustered into a code book of possible exemplar trajectories, using either DTW distance measures, in the case of Han et al, or PCA features, by Kong et al.…”
Section: Phoneme Level Representations
confidence: 99%
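The approach described in the last excerpt — cutting a hand trajectory into candidate subunits at acceleration discontinuities, then comparing the resulting segments with a dynamic-time-warping (DTW) distance for clustering — can be sketched in a few lines. This is an illustrative reconstruction, not the cited authors' code: the function names, the simple magnitude threshold, and the 2-D point trajectories are all assumptions made for the example.

```python
import numpy as np

def segment_at_discontinuities(trajectory, accel_thresh=1.0):
    """Split a (T, 2) hand trajectory into candidate subunits at frames
    where the acceleration magnitude spikes (motion discontinuities).
    The threshold is an illustrative placeholder, not a published value."""
    velocity = np.diff(trajectory, axis=0)          # (T-1, 2)
    accel = np.diff(velocity, axis=0)               # (T-2, 2)
    accel_mag = np.linalg.norm(accel, axis=1)
    # Cut one frame after each spike; +1 maps accel index back to trajectory index.
    boundaries = (np.where(accel_mag > accel_thresh)[0] + 1).tolist()
    cuts = [0] + boundaries + [len(trajectory)]
    return [trajectory[cuts[i]:cuts[i + 1]]
            for i in range(len(cuts) - 1) if cuts[i + 1] > cuts[i]]

def dtw_distance(a, b):
    """Classic O(nm) dynamic-time-warping distance between two trajectories,
    usable as the pairwise metric when clustering subunits into a code book."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(np.asarray(a[i - 1]) - np.asarray(b[j - 1]))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# A trajectory that moves right, then turns sharply upward:
traj = np.array([[0, 0], [1, 0], [2, 0], [3, 0],
                 [3, 1], [3, 2], [3, 3]], dtype=float)
segments = segment_at_discontinuities(traj, accel_thresh=1.0)
```

Here the sharp turn produces one acceleration spike, so the trajectory splits into two subunits; in a full system, `dtw_distance` (or PCA features, per the Kong et al variant) would then drive the clustering of such segments into exemplar trajectories.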