1997
DOI: 10.1016/s0167-6393(97)00004-6

A two-level classifier for text-independent speaker identification

Cited by 9 publications (3 citation statements)
References 7 publications
“…With no or very limited a priori knowledge about the processes studied, the empirical ANN model compensates for its inherent information inadequacy by requiring fairly large and well-spread training sets (38). Once a good training set of input-output data is available, however, ANN models can prove useful for specific applications.…”
Section: Results (mentioning)
confidence: 99%
“…A two-level classifier for closed-set speaker identification was presented by Hadjitodorov et al. (1997). The paper investigates different versions of a SOM as a first-stage classifier to obtain a prototype distribution map, and then uses these maps to feed a second-stage MLP network classifier.…”
Section: Related Work (mentioning)
confidence: 97%
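The excerpt above describes the two-stage architecture only at a high level. As a rough illustration (not the authors' implementation), the sketch below builds a small self-organizing map whose histogram of winning nodes over an utterance's feature frames serves as a "prototype distribution map", and then trains an MLP on those maps to identify speakers. The SimpleSOM class, the toy Gaussian "speakers", the map size, and the use of scikit-learn's MLPClassifier are all illustrative assumptions.

```python
# Illustrative sketch of a SOM -> MLP two-stage speaker identifier.
# Not the configuration from Hadjitodorov et al. (1997); all sizes and data are toy values.
import numpy as np
from sklearn.neural_network import MLPClassifier


class SimpleSOM:
    """Minimal rectangular SOM trained with a shrinking Gaussian neighbourhood."""

    def __init__(self, rows, cols, dim, seed=0):
        rng = np.random.default_rng(seed)
        self.rows, self.cols = rows, cols
        self.weights = rng.normal(size=(rows * cols, dim))
        # Grid coordinates of each node, used for neighbourhood distances.
        self.grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)

    def train(self, data, epochs=10, lr0=0.5, sigma0=None):
        sigma0 = sigma0 or max(self.rows, self.cols) / 2.0
        n_steps, step = epochs * len(data), 0
        for _ in range(epochs):
            for x in data:
                frac = step / n_steps
                lr = lr0 * (1.0 - frac)                    # decaying learning rate
                sigma = sigma0 * (1.0 - frac) + 1e-3       # shrinking neighbourhood
                bmu = np.argmin(((self.weights - x) ** 2).sum(axis=1))
                d2 = ((self.grid - self.grid[bmu]) ** 2).sum(axis=1)
                h = np.exp(-d2 / (2 * sigma ** 2))[:, None]
                self.weights += lr * h * (x - self.weights)
                step += 1

    def distribution_map(self, data):
        """Histogram of best-matching units over all frames of one utterance."""
        bmus = np.argmin(
            ((data[:, None, :] - self.weights[None, :, :]) ** 2).sum(axis=2), axis=1)
        hist = np.bincount(bmus, minlength=self.rows * self.cols).astype(float)
        return hist / hist.sum()


# Toy data: hypothetical speakers, each emitting utterances of `frames`
# feature vectors (e.g. cepstral coefficients; dimensions chosen arbitrarily).
rng = np.random.default_rng(1)
n_speakers, utts_per_speaker, frames, dim = 4, 6, 200, 12
utterances, y = [], []
for spk in range(n_speakers):
    centre = rng.normal(scale=2.0, size=dim)
    for _ in range(utts_per_speaker):
        utterances.append(centre + rng.normal(size=(frames, dim)))
        y.append(spk)

# First stage: unsupervised SOM training on pooled frames.
som = SimpleSOM(rows=6, cols=6, dim=dim)
som.train(np.vstack(utterances)[::3], epochs=5)

# Each utterance becomes a fixed-length prototype distribution map.
X_maps = np.array([som.distribution_map(u) for u in utterances])

# Second stage: an MLP classifies speakers from the maps.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_maps, np.array(y))
print("training accuracy:", clf.score(X_maps, np.array(y)))
```

The point of the distribution map in this sketch is that it turns a variable-length utterance into a fixed-length vector, which is what makes a plain MLP usable as the second-stage classifier.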
“…Artificial neural networks (ANN), known as a simplified model of the biological nervous system (Katagiri, 2000; Bishop, 1995), allow for such analysis of non-fluent speech. This is why a growing trend of ANN or hybrid ANN-HMM applications can be observed in areas such as speech and speaker recognition (Farrell, 2000; Farrell et al., 1994; Hadjitodorov et al., 1997; Leung et al., 2007; Trentin and Giuliani, 2001) and classification (Hosom, 2003; Cosi et al., 2000; Kocsor et al., 2000; Lee et al., 1998) or feature extraction (Katagiri, 2000; Gemello et al., 2007; Fritsch et al., 2000; Yegnanarayana and Narendranath, 2000; Shao and Barker, 2008; Uncini, 2003). Multilayer Perceptron (MLP) and Radial Basis Function (RBF) networks, as well as recurrent and fuzzy networks, frequently occur in the automatic speech recognition (ASR) process (Chen et al., 1996; Farrell, 2000; Leung et al., 2007; Schuster, 2000).…”
Section: Introduction (mentioning)
confidence: 99%