2007 Third International IEEE Conference on Signal-Image Technologies and Internet-Based System
DOI: 10.1109/sitis.2007.37
ICA-Based Lip Feature Representation for Speaker Authentication


Cited by 8 publications (7 citation statements) | References 12 publications
“…Kaynak et al. [9] have conducted a comprehensive investigation of such features for lip motion analysis. For appearance-based features, as the teeth and tongue always appear during the speaking process, transform coefficients such as Principal Component Analysis (PCA), Independent Component Analysis (ICA) and the two-dimensional Discrete Cosine Transform (2D-DCT) have shown their effectiveness [14], [30], [31]. Differing from the above-mentioned features, which are extracted at the single-frame level, motion-based features are able to reveal the temporal characteristics of lip movements [7], [13].…”
Section: A. Visual Feature Extraction
confidence: 99%
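The three appearance-based transforms named in the passage above can be sketched as follows. This is a minimal illustration on synthetic data, not the paper's pipeline: the frame size, component counts, and use of scikit-learn/SciPy are assumptions for the sake of a runnable example.

```python
# Hypothetical sketch: PCA / ICA / 2D-DCT coefficients as
# appearance-based lip features (synthetic stand-in data).
import numpy as np
from scipy.fft import dctn
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Stand-in data: 200 grayscale lip-region frames of 16x24 pixels,
# flattened to row vectors (a real system would use tracked mouth ROIs).
frames = rng.random((200, 16 * 24))

# PCA: project each frame onto its leading principal components.
pca = PCA(n_components=20).fit(frames)
pca_feats = pca.transform(frames)        # shape (200, 20)

# ICA: coefficients w.r.t. statistically independent basis images.
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
ica_feats = ica.fit_transform(frames)    # shape (200, 20)

# 2D-DCT: keep a low-frequency block of coefficients per frame.
dct_feats = np.stack([
    dctn(f.reshape(16, 24), norm="ortho")[:5, :4].ravel()  # 20 coeffs
    for f in frames
])
```

Each variant reduces a whole mouth image to a short coefficient vector per frame, which is why these transforms tolerate the visible teeth and tongue better than contour-only descriptions.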
“…In the past several years, biometric features such as the fingerprint, iris and human face have been widely used for human identity identification and authentication. Recent studies [1]–[9] have shown that visual information about the lip region and its movement contains abundant speaker-identity-related information, and it can be regarded as a new biometric feature in many multi-modal person verification systems.…”
Section: Introduction
confidence: 99%
“…Many studies have proposed various lip feature representations for speaker authentication and identification [3]–[7]. For the physiological part, Luettin et al. [3] employ the Active Shape Model (ASM) to describe the outer lip contour, and both the lip shape and the intensity profile along the contour points are adopted to describe the static human lip.…”
Section: Introduction
confidence: 99%
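The "intensity profile along the contour points" idea in the passage above can be illustrated with a small sketch: for each point on an outer-lip contour, sample grey levels along the local normal direction. The function name, profile length, and nearest-neighbour sampling are assumptions for illustration, not the cited authors' implementation.

```python
# Hypothetical sketch of ASM-style profile features: grey-level
# samples taken perpendicular to a closed outer-lip contour.
import numpy as np

def intensity_profiles(image, contour, half_len=5):
    """Sample 2*half_len+1 pixels along the normal at each contour point."""
    h, w = image.shape
    n = len(contour)
    profiles = []
    for i, (x, y) in enumerate(contour):
        # Estimate the tangent from neighbouring contour points,
        # then rotate it 90 degrees to get the normal direction.
        x_prev, y_prev = contour[(i - 1) % n]
        x_next, y_next = contour[(i + 1) % n]
        tx, ty = x_next - x_prev, y_next - y_prev
        norm = np.hypot(tx, ty) or 1.0
        nx, ny = -ty / norm, tx / norm
        samples = []
        for t in range(-half_len, half_len + 1):
            # Nearest-pixel lookup, clipped to the image bounds.
            px = int(round(np.clip(x + t * nx, 0, w - 1)))
            py = int(round(np.clip(y + t * ny, 0, h - 1)))
            samples.append(image[py, px])
        profiles.append(samples)
    return np.asarray(profiles)  # shape (n_points, 2*half_len+1)
```

Concatenating these per-point profiles with the contour coordinates gives the shape-plus-intensity description the passage attributes to the ASM approach.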