2019
DOI: 10.1007/978-3-030-33904-3_64
Multi-channel Convolutional Neural Networks for Automatic Detection of Speech Deficits in Cochlear Implant Users

Abstract: This paper proposes a methodology for automatic detection of speech disorders in cochlear implant users by implementing a multi-channel Convolutional Neural Network. The model is fed with a 2-channel input consisting of two spectrograms computed from the speech signals using Mel-scaled and Gammatone filter banks. Speech recordings of 107 cochlear implant users (aged between 18 and 89 years) and 94 healthy controls (aged between 20 and 64 years) are considered for the tests. According to the results…
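The 2-channel front end described in the abstract can be reproduced roughly as follows. This is a minimal sketch, assuming librosa for the Mel-scaled spectrogram and the third-party gammatone package for the Gammatone spectrogram; the function name make_two_channel_input and all parameter values (16 kHz sampling rate, 64 bands, 25 ms windows with 10 ms hop, 50 Hz minimum frequency) are illustrative assumptions, not taken from the paper.

import numpy as np
import librosa
from gammatone import gtgram

def make_two_channel_input(path, n_bands=64, win=0.025, hop=0.010):
    # Load at 16 kHz (assumed sampling rate; the paper may use another).
    y, fs = librosa.load(path, sr=16000)

    # Channel 1: Mel-scaled spectrogram, log-compressed.
    mel = librosa.feature.melspectrogram(
        y=y, sr=fs, n_fft=int(win * fs), hop_length=int(hop * fs),
        n_mels=n_bands)
    mel = np.log(mel + 1e-10)

    # Channel 2: Gammatone spectrogram over the same frame grid
    # (gtgram takes: wave, fs, window_time, hop_time, channels, f_min).
    gt = np.log(gtgram.gtgram(y, fs, win, hop, n_bands, 50) + 1e-10)

    # The two front ends can disagree by a frame; trim to the shorter one.
    t = min(mel.shape[1], gt.shape[1])
    return np.stack([mel[:, :t], gt[:, :t]])  # shape: (2, n_bands, t)

The stacked array can then be passed as a 2-channel image to a CNN, analogous to the RGB channels of a picture.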

Cited by 2 publications (1 citation statement)
References 9 publications (8 reference statements)
“…The proposed approach is then evaluated in two speech processing applications: automatic detection of disordered speech of cochlear implant (CI) users and phoneme class recognition to extract phone-attribute features. In our previous work [12], we showed that combining at least two different time-frequency representations of the signals can improve the automatic detection of speech deficits in CI users by training a bi-class CNN to differentiate between speech signals from CI users and healthy control (HC) speakers. This paper extends the use of multi-channel spectrograms to phoneme recognition using recurrent neural networks with convolutional layers (CRNN).…”
Section: Introduction
Confidence: 99%
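The citation statement above mentions extending multi-channel spectrograms to phoneme recognition with recurrent neural networks that include convolutional layers (CRNN). As a hedged PyTorch sketch of that general arrangement, the skeleton below applies convolutional layers over the 2-channel spectrogram and a recurrent layer over the time axis; all layer sizes and the class count are hypothetical, and the citing paper's actual architecture may differ.

import torch
import torch.nn as nn

class CRNN(nn.Module):
    def __init__(self, n_bands=64, n_classes=40):  # n_classes is hypothetical
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(2, 16, kernel_size=3, padding=1),  # 2 input channels
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),   # pool over frequency, keep time resolution
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((2, 1)),
        )
        self.rnn = nn.GRU(32 * (n_bands // 4), 128,
                          batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * 128, n_classes)

    def forward(self, x):                     # x: (batch, 2, n_bands, time)
        h = self.conv(x)                      # (batch, 32, n_bands//4, time)
        h = h.permute(0, 3, 1, 2).flatten(2)  # (batch, time, 32 * n_bands//4)
        h, _ = self.rnn(h)                    # (batch, time, 256)
        return self.out(h)                    # per-frame class scores

Pooling only along the frequency axis keeps the frame rate intact, so the recurrent layer can emit one prediction per time step, which is the usual design choice for frame-level phoneme recognition.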