2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)
DOI: 10.1109/icassp.2017.7953258
Bayesian phonotactic Language Model for Acoustic Unit Discovery

Cited by 8 publications (20 citation statements) | References 8 publications
“…Surprisingly, the MBN features perform relatively poorly compared to the standard MFCCs. These results contradict those reported in [4]. The discrepancy may be explained by a domain mismatch: the Mboshi5k data differ from the training data of the MBN neural network, so the network may not generalize well to them.…”
Section: Results (contrasting)
confidence: 79%
See 3 more Smart Citations
“…Surprisingly, the MBN features perform relatively poorly compared to the standard MFCC. These results are contradictory to those reported in [4]. Two factors may explain this discrepancy: the Mboshi5k data being different from the training data of the MBN neural network, the neural network may not generalize well.…”
Section: Resultscontrasting
confidence: 79%
“…If no match is found with a true phone boundary, this is counted as a false negative. The consistency of the units was evaluated in terms of normalized mutual information (NMI; see [2,4,6] for details), which measures the statistical dependency between the discovered units and the forced-aligned phones. An NMI of 0% means that the units are completely independent of the phones, whereas an NMI of 100% indicates that the actual phones could be retrieved without error given the sequence of discovered units.…”
Section: Acoustic Unit Discovery (AUD) Evaluation (mentioning)
confidence: 99%
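The NMI score described in the statement above can be sketched in a few lines of Python. This is a minimal illustration, not the evaluation code from the cited works: the function name is invented, and the arithmetic-mean normalization 2·I(U;P)/(H(U)+H(P)) is one common choice — the papers referenced as [2,4,6] may normalize differently.

```python
from collections import Counter
from math import log2

def nmi(units, phones):
    """Normalized mutual information between two frame-aligned label
    sequences (discovered units vs. forced-aligned phones).

    Assumption: arithmetic-mean normalization 2*I(U;P) / (H(U) + H(P)).
    Returns a value in [0, 1]: 0 means the units are independent of the
    phones; 1 means the phones are fully recoverable from the units.
    """
    n = len(units)
    assert n == len(phones) and n > 0, "sequences must be aligned and non-empty"
    unit_counts = Counter(units)
    phone_counts = Counter(phones)
    joint_counts = Counter(zip(units, phones))
    # Marginal entropies H(U) and H(P), in bits.
    h_u = -sum(c / n * log2(c / n) for c in unit_counts.values())
    h_p = -sum(c / n * log2(c / n) for c in phone_counts.values())
    # Mutual information I(U;P) from the joint and marginal frequencies.
    mi = sum(c / n * log2((c / n) / ((unit_counts[u] / n) * (phone_counts[p] / n)))
             for (u, p), c in joint_counts.items())
    return 2 * mi / (h_u + h_p) if (h_u + h_p) > 0 else 1.0
```

For example, a unit sequence that relabels the phones one-to-one scores 1.0, while units that ignore the phone identity entirely score 0.0.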
“…In addition, such methods may shed light on how human infants acquire language [2,3].¹ Several zero-resource tasks have been studied, including acoustic unit discovery [4][5][6], unsupervised representation learning [7][8][9], query-by-example search [10,11] and topic modelling [12,13]. Early work mainly focused on unsupervised term discovery, where the aim is to automatically find repeated word- or phrase-like patterns in a collection of speech [14][15][16].…”
¹ https://github.com/kamperh/recipe_zs2017_track2
Section: Introduction (mentioning)
confidence: 99%