2010
DOI: 10.1016/j.patrec.2010.02.004

A person independent system for recognition of hand postures used in sign language

Abstract: We present a novel user independent framework for representing and recognizing hand postures used in sign language. We propose a novel hand posture feature, an eigenspace Size Function, which is robust to classifying hand postures independent of the person performing them. An analysis of the discriminatory properties of our proposed eigenspace Size Function shows a significant improvement in performance when compared to the original unmodified Size Function. We describe our support vector machine…
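As a rough sketch of the kind of pipeline the abstract describes (an eigenspace projection of a hand-posture descriptor followed by SVM classification), the snippet below uses PCA as the eigenspace step and scikit-learn's SVC as the classifier. The Size Function feature extraction itself is not shown, and the data shapes, dimensionality, and kernel settings are illustrative assumptions rather than the authors' published configuration.

# Minimal sketch (not the authors' exact pipeline): project hand-posture
# feature vectors into an eigenspace via PCA and classify them with an SVM.
# The Size Function descriptor is assumed to be computed upstream; the data
# below is a random placeholder.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# One row per hand-posture image, one column per raw descriptor value;
# labels are posture classes (5 classes here, purely for illustration).
X = rng.normal(size=(200, 256))
y = rng.integers(0, 5, size=200)

# Eigenspace projection (PCA) followed by an RBF-kernel SVM classifier.
model = make_pipeline(StandardScaler(), PCA(n_components=32), SVC(kernel="rbf", C=10.0))
model.fit(X, y)

print("training accuracy:", model.score(X, y))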

Cited by 91 publications (47 citation statements)
References 26 publications (34 reference statements)
“…We can see that the best results for subjects 21-25 were 76% with a 0.70 SIFT threshold, 80% with a 0.70 SIFT threshold, 66% with a 0.70 SIFT threshold, 68% with 0.65 and 0.70 SIFT thresholds, and 62% with a 0.75 SIFT threshold for the five subjects, respectively. Since the signers of this test set (subjects 21-25) were different from the signers in the training data set and the signature library, the results of this experiment gave low classification accuracy. Furthermore, when we used SIFT with the unconstrained system and complex natural backgrounds, keypoints might be incorrectly matched, as shown in Figure 3g.…”
Section: Results (mentioning)
confidence: 99%
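For context on the "SIFT threshold" discussed in the excerpt above: in SIFT-based matching this is commonly the nearest-neighbour distance-ratio test, and the sketch below shows that test with OpenCV. The ratio value, image file names, and matcher settings here are illustrative assumptions, not details taken from the cited experiment.

# Illustrative sketch of SIFT keypoint matching with a distance-ratio
# threshold (assumed here to correspond to the "SIFT threshold" in the
# excerpt). Requires OpenCV with SIFT support; file names are placeholders.
import cv2

img_query = cv2.imread("query_posture.png", cv2.IMREAD_GRAYSCALE)
img_library = cv2.imread("library_posture.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img_query, None)
kp2, des2 = sift.detectAndCompute(img_library, None)

# Match each query descriptor to its two nearest library descriptors and
# keep it only if the best match is sufficiently better than the second best.
matcher = cv2.BFMatcher(cv2.NORM_L2)
ratio_threshold = 0.70  # one of the threshold values mentioned in the excerpt
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < ratio_threshold * n.distance]

print(f"{len(good)} keypoints matched at ratio threshold {ratio_threshold}")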
“…there is no need for gloves or markers as in [13,14,34] or accelerometers as in [5]) real-time approach to the detection of intuitive motion-based gestures usable in different application contexts. The learning phase of our approach does not need the capture of ground-truth real data, since the patterns are defined synthetically using a human arm model (see Section 5.1), making it user independent (unlike [36,37,5,15]).…”
Section: Related Work (mentioning)
confidence: 99%
“…Other application contexts for this approach include the control of multimedia menus [31] or of the point of view in a virtual environment. Other motion-based gesture recognition could allow the interpretation of sign languages [9,13].…”
Section: Introduction (mentioning)
confidence: 99%
“…Non-verbal communication in people with hearing impairment involves a number of aspects related to hand gesture movements, such as postures, which cover the shape and orientation of the hand, or temporal gestures, which relate to its movement and position [9,10], and more generally the transition states and postures of the human body [11]. For the recognition of a gestural movement from postures and temporal gestures, a number of works based on digital image processing of video have been developed, which recognize a gestural movement in real time using techniques such as the wavelet transform or neural networks with unsupervised learning [11].…”
unclassified
“…For the recognition of a gestural movement from postures and temporal gestures, a number of works based on digital image processing of video have been developed, which recognize a gestural movement in real time using techniques such as the wavelet transform or neural networks with unsupervised learning [11]. For the recognition of gestural movements from images that describe trajectories over specific points located on the hand, systems such as Microsoft's Kinect Motion (www.xbox.com/es-ES/kinect) have been used, as well as models that use multiple non-linear regressions to describe those trajectories in space [12].…”
unclassified