6th Workshop on Spoken Language Technologies for Under-Resourced Languages (SLTU 2018)
DOI: 10.21437/sltu.2018-43

Building Speech Recognition Systems for Language Documentation: The CoEDL Endangered Language Pipeline and Inference System (ELPIS)

Abstract: Machine learning has revolutionised speech technologies for major world languages, but these technologies have generally not been available for the roughly 4,000 languages with populations of fewer than 10,000 speakers. This paper describes the development of Elpis, a pipeline which language documentation workers with minimal computational experience can use to build their own speech recognition models, resulting in models being built for 16 languages from the Asia-Pacific region. Elpis puts machine learning s…

Cited by 107 publications (16 citation statements) · References 1 publication

Citation statements (ordered by relevance):
“…Elpis is a tool created to allow language workers with minimal computational experience to build their own speech recognition models and automatically transcribe audio (Foley et al., 2018). Elpis uses the Kaldi automatic speech recognition (ASR) toolkit (Povey et al., 2011) as its backend.…”
Section: Introduction (confidence: 99%)
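
For orientation, here is a rough, illustrative Python sketch of the sequence of standard Kaldi recipe steps that a wrapper in the spirit of Elpis automates for a simple monophone GMM system. It assumes it is run from a Kaldi egs-style directory containing steps/ and utils/, with data/train, data/test, and data/local/dict already prepared; this follows Kaldi's recipe conventions and is not Elpis's actual code.

```python
# Illustrative only: standard Kaldi recipe steps a wrapper like Elpis automates.
# Assumes a Kaldi egs-style working directory with steps/ and utils/ available.
import subprocess

def run(cmd):
    """Echo and run one Kaldi recipe script, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["utils/prepare_lang.sh", "data/local/dict", "<unk>",
     "data/local/lang", "data/lang"])                      # lexicon -> lang dir
run(["steps/make_mfcc.sh", "data/train"])                  # acoustic features
run(["steps/compute_cmvn_stats.sh", "data/train"])         # feature normalisation
run(["steps/train_mono.sh", "data/train", "data/lang",
     "exp/mono"])                                          # monophone GMM-HMM
run(["utils/mkgraph.sh", "data/lang", "exp/mono",
     "exp/mono/graph"])                                    # decoding graph
run(["steps/decode.sh", "exp/mono/graph", "data/test",
     "exp/mono/decode_test"])                              # transcribe test audio
```
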
“…It was reported that the performance of MFCC-based systems decreases with decreasing frame size [9,10]. Classifiers like hidden Markov models (HMM) [12], vector quantization (VQ) [6], support vector machines (SVM) [3,13,14], artificial neural networks (ANN) [15,16], and Gaussian mixture models (GMM) [15-17] have been reported to model feature vectors in SLID systems. One of the simplest techniques used for SLID systems is GMM-UBM.…”
Section: Introduction (confidence: 99%)
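
The GMM-UBM approach mentioned above starts from a universal background model (UBM) trained on pooled speech from many languages. Below is a minimal, illustrative Python sketch of that first step, assuming librosa for MFCC extraction and scikit-learn's GaussianMixture for the UBM; the file names and hyperparameters (64 components, 13 MFCCs) are placeholders, not values from the cited work.

```python
# Minimal GMM-UBM sketch for spoken language identification (SLID).
# Assumes librosa for MFCCs and scikit-learn for the GMM; paths are placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(path, sr=16000, n_mfcc=13):
    """Return an (n_frames, n_mfcc) matrix of MFCCs for one utterance."""
    audio, _ = librosa.load(path, sr=sr)
    return librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc).T

# Train the UBM by maximum likelihood (EM) on pooled background speech.
background_files = ["bg_0001.wav", "bg_0002.wav"]  # placeholder corpus
ubm_data = np.vstack([mfcc_features(f) for f in background_files])
ubm = GaussianMixture(n_components=64, covariance_type="diag")
ubm.fit(ubm_data)
```
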
“…In this method, maximum likelihood estimation is used to train the language model, and maximum a posteriori (MAP) estimation is used to adapt the UBM model. A speech sample is a series of independent spectral feature vectors; a GMM mathematically models these features, and with UBM adaptation the resulting GMM-UBM supervectors carry the spectral characteristics [10-18]. These features are adapted to the UBM using the MAP estimation algorithm to obtain an utterance-based GMM [19].…”
Section: Introduction (confidence: 99%)
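
Continuing the sketch above, the relevance-MAP adaptation this passage describes shifts each UBM mean toward one utterance's statistics in proportion to how many frames that component explains, and stacking the adapted means gives the GMM-UBM supervector. The relevance factor r=16 is a common but illustrative choice, not taken from the cited papers.

```python
# Relevance-MAP adaptation of the UBM means for one utterance, producing
# the stacked-mean "supervector". Continues the UBM sketch above.
def map_adapt_supervector(ubm, features, r=16.0, eps=1e-10):
    """MAP-adapt the UBM means to one utterance; return the supervector."""
    gamma = ubm.predict_proba(features)        # (n_frames, K) responsibilities
    n_k = gamma.sum(axis=0)                    # soft frame counts per component
    e_k = (gamma.T @ features) / (n_k[:, None] + eps)  # per-component data means
    alpha = (n_k / (n_k + r))[:, None]         # adaptation weight per component
    adapted_means = alpha * e_k + (1.0 - alpha) * ubm.means_
    return adapted_means.ravel()               # stack adapted means into one vector

# Usage: one supervector per utterance, fed to a back-end classifier (e.g. SVM).
sv = map_adapt_supervector(ubm, mfcc_features("utt_0001.wav"))
```
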
“…Indigenous and other minority languages usually have few transcribed audio recordings, and so adapting data-hungry ASR algorithms to assist in their documentation is an active area of research (Besacier et al., 2014; Jimerson and Prud'hommeaux, 2018; Michaud et al., 2019; Foley et al., 2018; Gupta and Boulianne, 2020a,b; Zahrer et al., 2020; Thai et al., 2019; Li et al., 2020; Zevallos et al., 2019; Matsuura et al., 2020; Levow et al., 2021). This paper will examine an element that might appear obvious at first, but one where the literature is "inconclusive" (Adams, 2018), and which can have major consequences for performance: how should tones be transcribed when dealing with extremely low-resource languages?…”
Section: Introduction (confidence: 99%)