2018 24th International Conference on Pattern Recognition (ICPR)
DOI: 10.1109/icpr.2018.8546016
Dual-modality Talking-metrics: 3D Visual-Audio Integrated Behaviometric Cues from Speakers

Abstract: Face-based behaviometrics focus on dynamic biological signatures generated from face behaviors, which are informative and subject-specific for identity recognition. Most existing face behaviometrics rely on 2D visual features and thus are sensitive to pose or intensity variations. This paper presents a dual-modality behaviometrics algorithm (talking-metrics) that integrates 3D video and audio cues from a human face speaking a passphrase. Static and dynamic 3D face features are extracted algorithmically and aud…

Cited by 3 publications (3 citation statements). References 21 publications.
“…The proposed pipeline was verified on a publicly available dynamic face dataset - Speech-driven 3D Facial Motion Dataset (S3DFM) [1,22]. The dataset has multimodality data from 77 subjects covering more than 20 nationalities.…”
Section: Dataset
confidence: 99%
“…The proposed algorithms were verified on a 3D speaking face dataset (S3DFM [20,22]) and have good detection performance over 100 sequences with 200 events.…”
Section: Introduction
confidence: 99%
“…For each trial, we captured a pair of 2D intensity sequences using the 3D video sensor and a synchronized audio sequence using a microphone. The synchronized audio was not used in the validation algorithm here, but it was used in another 3D video-audio recognition related research [44].…”
Section: Data Acquisition
confidence: 99%