18th International Conference on Pattern Recognition (ICPR'06) 2006
DOI: 10.1109/icpr.2006.814
Motion Features from Lip Movement for Person Authentication

Cited by 21 publications (18 citation statements)
References 3 publications
“…A Bayesian classifier is used for classification, obtaining an average recognition rate of 90% at a false alarm rate of 5%. In other work, a new motion-based feature extraction technique for speaker identification using orientation estimation in 2D manifolds is reported [15]. The motion is estimated by computing the components of the structure tensor, from which normal flows are extracted.…”
Section: Related Work
Confidence: 99%
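The statement above describes estimating motion from structure-tensor components and then extracting normal flows. The following is a minimal Python/NumPy sketch of that general idea; it is not the implementation from [15], and the derivative operators, Gaussian window, and function/parameter names are assumptions made for illustration.

```python
import numpy as np
from scipy import ndimage

def normal_flow_from_structure_tensor(frame_t, frame_t1, sigma=2.0, eps=1e-6):
    """Sketch: per-pixel normal flow from Gaussian-averaged products of
    spatio-temporal derivatives (structure-tensor components)."""
    f0 = frame_t.astype(float)
    f1 = frame_t1.astype(float)

    # Spatio-temporal image derivatives
    Ix = ndimage.sobel(f0, axis=1)
    Iy = ndimage.sobel(f0, axis=0)
    It = f1 - f0

    # Structure-tensor components: locally averaged derivative products
    Jxx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Jyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Jxt = ndimage.gaussian_filter(Ix * It, sigma)
    Jyt = ndimage.gaussian_filter(Iy * It, sigma)

    # Normal flow: the motion component along the local intensity gradient,
    # approximated here from the averaged tensor components
    denom = Jxx + Jyy + eps
    vx = -Jxt / denom
    vy = -Jyt / denom
    return vx, vy
```

Applied to consecutive lip-region frames, such a function would yield a dense normal-flow field from which motion features could be pooled.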
“…Optical flow is the most common and easiest visual feature to extract. In [10], dense optical flow is first calculated; these dense velocity vectors are then quantized by allowing only 3 directions (0°, 45°, −45°) and only 20 values, resulting in a feature vector of 40 parameters. These quantization values were obtained by fuzzy c-means clustering.…”
Section: Dynamic
Confidence: 99%
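As a rough illustration of this kind of direction/magnitude quantization of a dense flow field, the sketch below assigns each flow vector to the nearest allowed direction and nearest magnitude centre and histograms the assignments into a fixed-length feature vector. The magnitude centres, bin choices, and function names are assumptions (in [10] the values come from fuzzy c-means clustering), and the exact 40-parameter layout of that work is not reproduced here.

```python
import numpy as np

def quantize_flow(vx, vy, direction_bins_deg=(-45.0, 0.0, 45.0),
                  magnitude_centres=None):
    """Sketch: quantize a dense flow field (vx, vy) into a fixed-length
    direction/magnitude histogram feature vector."""
    if magnitude_centres is None:
        # Placeholder centres; a real system would learn these offline,
        # e.g. with fuzzy c-means as in the quoted description
        magnitude_centres = np.linspace(0.5, 10.0, 20)

    angles = np.degrees(np.arctan2(vy, vx)).ravel()
    mags = np.hypot(vx, vy).ravel()

    # Nearest allowed direction for every flow vector
    dirs = np.asarray(direction_bins_deg, dtype=float)
    dir_idx = np.argmin(np.abs(angles[:, None] - dirs[None, :]), axis=1)

    # Nearest magnitude centre for every flow vector
    centres = np.asarray(magnitude_centres, dtype=float)
    mag_idx = np.argmin(np.abs(mags[:, None] - centres[None, :]), axis=1)

    # Joint direction x magnitude histogram, flattened and normalized
    hist = np.zeros((dirs.size, centres.size))
    np.add.at(hist, (dir_idx, mag_idx), 1.0)
    return hist.ravel() / max(hist.sum(), 1.0)
```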
“…Hybrid methods use a combination of static and dynamic information [6], [18]-[22]. Table I provides an overview of the performance of various lip-biometric systems that perform speaker verification using only lip features. Table II provides an overview of the performance of various lip-biometric systems that perform speaker verification using lip features fused with other biometric traits such as audio and face. The table entries quoted in this statement are:

Author [Ref]     Method / Modalities                  Database   Subjects   Metric   Value (%)
[16]             DYNAMIC TI + AUDIO                   XM2VTS     295        EER      2
SANCHEZ [23]     DYNAMIC TD + FACE                    XM2VTS     295        HTER     2.62
SANCHEZ [23]     DYNAMIC TD + AUDIO                   XM2VTS     295        HTER     0.70
SANCHEZ [23]     DYNAMIC TD + FACE + AUDIO            XM2VTS     295        HTER     0.66
SANCHEZ [23]     DYNAMIC TD + 2FACE + 2AUDIO          XM2VTS     295        HTER     0.15
ABDULLA [21]     HYBRID (SHAPE AND INTENSITY)         CUSTOM     35         EER      18.0
CETINGUL [19]    HYBRID (TEXTURE AND MOTION)          MVGL-AVD   50         EER      3.6
CETINGUL [18]    STATIC (TEXTURE) + DYNAMIC + AUDIO   MVGL-AVD   50         EER      0.4
JOURLIN [5]      STATIC (SHAPE) + AUDIO               M2VTS      37         HTER     1.65
…”
Section: Relevant Work
Confidence: 99%