Articulatory distinctive features, as well as phonetic transcription, play an important role in speech-related tasks: computer-assisted pronunciation training, text-to-speech (TTS) conversion, the study of speech production mechanisms, and speech recognition for low-resource languages. End-to-end approaches to speech-related tasks have gained considerable traction in recent years. We apply the Listen, Attend and Spell (LAS) [1] architecture to phone recognition on a small training set such as TIMIT [2]. We also introduce a novel decoding technique that allows manner- and place-of-articulation detectors to be trained end-to-end using attention models. Finally, we explore joint phone recognition and articulatory feature detection in a multi-task learning setting.
This paper presents a scoring system that achieved the top result on the text subset of the CALL v3 shared task. The presented system is based on text embeddings, namely NNLM [1] and BERT [2]. The distinguishing feature of this approach is that it does not rely on the reference grammar file for scoring. The model is compared against approaches that use the grammar file, demonstrating that similar and even higher results can be achieved without a predefined set of correct answers. The paper describes the model itself and the data preparation process, which played a crucial role in model training.
scite is a Brooklyn-based organization that helps researchers better discover and understand research articles through Smart Citations: citations that display the context of the citation and describe whether the article provides supporting or contrasting evidence. scite is used by students and researchers from around the world and is funded in part by the National Science Foundation and the National Institute on Drug Abuse of the National Institutes of Health.