2007
DOI: 10.1016/j.patrec.2007.02.017

Audio–visual person authentication using lip-motion from orientation maps

Cited by 41 publications (17 citation statements)
References 35 publications
“…Thereby, they relied on small photographs (low resolution) and on deformations caused by expression changes. Some approaches involved the speech modality ("talking face") [1,4], because the fused audio-visual features are easier to classify as live/non-live. Within this paradigm, which targets attacks via both photographs and videos, [3] is of particular interest because it showed that utterances (digits 0–9) can be recognized from lip motion alone.…”
Section: Related Work
confidence: 99%
“…We used the same video database for error quantification in section 3.1. Within the categorization suggested in table 1, [2] investigated countermeasures of type III (exploiting 3D properties), whereas the talking-face studies [1,4] dealt with extended type II measures (exploiting mouth movements). The approach of [8] can be assigned to all three types of countermeasures, but it is unable to spatially localize the origin of temporal variations.…”
Section: Related Work
confidence: 99%
“…Faraj and Bigun [41] described a new identity authentication technique by a synergetic use of lip-motion and speech. The lip-motion is defined as the distribution of apparent velocities in the movement of brightness patterns in an image and is estimated by computing the velocity components of the structure tensor by 1D processing, in 2D manifolds.…”
Section: Audio-visual Speech Recognition
confidence: 99%
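The structure-tensor velocity estimation described in the snippet above can be illustrated with a minimal least-squares sketch. Note this is not Faraj and Bigun's 1D-processing orientation-map scheme — it is the simpler 2D Lucas–Kanade form of the same structure-tensor idea, and the function name and synthetic data below are assumptions for illustration only:

```python
import numpy as np

def estimate_velocity(frame0, frame1):
    """Estimate one global (vx, vy) from two frames by least squares:
    accumulate the 2x2 spatial structure tensor A = sum g g^T over all
    pixels (g = [Ix, Iy]) and solve A v = -sum g*It."""
    Ix = np.gradient(frame0, axis=1)   # horizontal spatial gradient
    Iy = np.gradient(frame0, axis=0)   # vertical spatial gradient
    It = frame1 - frame0               # temporal derivative
    A = np.array([[np.sum(Ix * Ix), np.sum(Ix * Iy)],
                  [np.sum(Ix * Iy), np.sum(Iy * Iy)]])
    b = -np.array([np.sum(Ix * It), np.sum(Iy * It)])
    return np.linalg.solve(A, b)

# Synthetic check: a Gaussian blob translated by one pixel in x.
y, x = np.mgrid[0:64, 0:64]
blob = lambda cx, cy: np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
f0, f1 = blob(32, 32), blob(33, 32)
vx, vy = estimate_velocity(f0, f1)   # vx close to 1, vy close to 0
```

The 2x2 system is the whole-image version of the local structure tensor; the cited paper instead exploits 1D processing along lines of the 2D manifold to obtain the same velocity components more cheaply.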
“…It has been extensively utilized in the state of the art of audiovisual speech recognition [1]. Lip-contour detection has several uses, such as audiovisual speech authentication [3], intelligent human-computer interaction, and human expression recognition [2]. Automatic Speech Recognition (ASR) systems [4] use only acoustic information, which is why such systems perform poorly in noisy environments.…”
Section: Introduction
confidence: 99%