2007 IEEE Workshop on Automatic Speech Recognition & Understanding (ASRU)
DOI: 10.1109/asru.2007.4430119
Agglomerative information bottleneck for speaker diarization of meetings data

Abstract: In this paper, we investigate the use of agglomerative Information Bottleneck (aIB) clustering for the speaker diarization task on meeting data. In contrast to state-of-the-art diarization systems that model individual speakers with Gaussian Mixture Models, the proposed algorithm is completely nonparametric. Both clustering and model selection issues of nonparametric models are addressed in this work. The proposed algorithm is evaluated on the RT06 meeting evaluation data set. The sys…

Cited by 27 publications
(37 citation statements)
References 7 publications
“…We summarize here the speaker diarization algorithm described in detail in [3]. The clustering steps are described below.…”
Section: Speaker Diarization Algorithm
confidence: 99%
“…Previously we have proposed a system [3] based on the Information Bottleneck (IB) principle [4], which is inspired by Rate-Distortion theory. The speaker diarization aims at finding the clustering that minimizes the loss in mutual information between the initial uniform segmentation and the final clustering.…”
Section: Introduction
confidence: 99%
“…• Method 1 (aIB): based on conventional aIB + model selection [15]. This method starts with the trivial partition that places each element of X in its own cluster and performs agglomerative IB clustering until all elements in the space are grouped into a single cluster.…”
Section: Agglomerative and Sequential IB
confidence: 99%
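The greedy bottom-up procedure this excerpt describes can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the merge cost used here (the prior-weighted Jensen-Shannon divergence between the clusters' relevance distributions, which measures the loss in I(C; Y) incurred by a merge) is the standard aIB merge criterion, and all function and variable names are hypothetical.

```python
import numpy as np

def js_div(p, q, w1, w2):
    """Weighted Jensen-Shannon divergence between distributions p and q."""
    m = w1 * p + w2 * q
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return w1 * kl(p, m) + w2 * kl(q, m)

def aib_cluster(p_y_given_x, p_x, n_clusters):
    """Greedy agglomerative IB sketch: start with each element of X in its
    own cluster, then repeatedly merge the pair whose merge loses the least
    relevant information I(C; Y), until n_clusters remain."""
    # Each cluster: (member indices, prior mass p(c), relevance dist p(y|c)).
    clusters = [([i], p_x[i], p_y_given_x[i].copy()) for i in range(len(p_x))]
    while len(clusters) > n_clusters:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                _, pi, di = clusters[i]
                _, pj, dj = clusters[j]
                w = pi + pj
                # Information loss of merging clusters i and j.
                cost = w * js_div(di, dj, pi / w, pj / w)
                if best is None or cost < best[0]:
                    best = (cost, i, j)
        _, i, j = best
        mi, pi, di = clusters[i]
        mj, pj, dj = clusters[j]
        merged = (mi + mj, pi + pj, (pi * di + pj * dj) / (pi + pj))
        clusters = [c for k, c in enumerate(clusters) if k not in (i, j)]
        clusters.append(merged)
    return [members for members, _, _ in clusters]
```

Running the merges all the way down to a single cluster, as Method 1 does, simply means calling this with n_clusters = 1 and recording the partition at each step.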
“…However, this does not give any further information on the optimal number of clusters, which must be estimated with a model selection criterion. In [15] we considered two different model selection metrics: the Minimum Description Length (MDL) and a thresholded Normalized Mutual Information (NMI). NMI can be written as…”
Section: Agglomerative and Sequential IB
confidence: 99%