2017
DOI: 10.1109/tbme.2016.2587382

EEG-Informed Attended Speaker Extraction From Recorded Speech Mixtures With Application in Neuro-Steered Hearing Prostheses

Abstract: Current research on auditory attention detection (AAD) assumes the availability of the clean speech signals, which limits its applicability in real-world settings. We have extended this research to detect the attended speaker even when only microphone recordings with noisy speech mixtures are available. This is an enabling ingredient for new brain-computer interfaces and effective filtering schemes in neuro-steered hearing prostheses. Here, we provide a first proof of concept for EEG-informed attended speaker extraction and denoising.

Cited by 122 publications (145 citation statements); references 25 publications (69 reference statements).
“…For example, to design so-called neuro-steered hearing prostheses [5], [6], a small EEG module can be integrated into a hearing prosthesis for in-the-ear [7] or around-the-ear [8] EEG recordings, and even the implanted electrodes of a cochlear implant can be used to record EEG [9]. Such small EEG modules are discreet, albeit limited in number of channels and spatial coverage.…”
Section: Introduction
confidence: 99%
“…We aim to enhance the speech component of one speaker, while suppressing that of the other speakers and noise. As described in [3], [12], this can be achieved using a multi-channel Wiener filter (MWF) w(ω) that extracts the attended speech stream ŝ_att(ω) = w(ω)^H y(ω), provided that we know the times at which the attended speaker is active (the superscript H denotes the conjugate transpose operator).…”
Section: A Data Model and Problem Statement
confidence: 99%
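The MWF step quoted above can be made concrete with a short sketch. The following NumPy code is a minimal illustration, not the authors' implementation: per frequency bin, it estimates the spatial covariance of the mixture from frames where the attended speaker is active and of the interference from the remaining frames, solves for the filter w(ω), and applies ŝ_att(ω) = w(ω)^H y(ω). All names (mwf_extract, vad, mu) are illustrative assumptions, and the attended-speaker activity mask is simply taken as given, whereas in the paper it is informed by the EEG-based attention decoder.

```python
import numpy as np

def mwf_extract(Y, vad, mu=1.0):
    """Extract one speech stream from an STFT-domain microphone mixture.

    Y   : complex array, shape (M, F, T) -- M mics, F frequency bins, T frames.
    vad : boolean array, length T -- True where the attended speaker is active.
    mu  : speech-distortion vs. noise-reduction trade-off (mu = 1 is the MWF).
    Returns an (F, T) single-channel STFT estimate of the attended speech.
    """
    M, F, T = Y.shape
    S_att = np.zeros((F, T), dtype=complex)
    e1 = np.zeros(M)
    e1[0] = 1.0  # select microphone 0 as the reference channel
    for f in range(F):
        Yf = Y[:, f, :]  # (M, T) snapshot matrix for this frequency bin
        # Spatial covariances: "speech + noise" frames vs. "noise-only" frames
        Ryy = (Yf[:, vad] @ Yf[:, vad].conj().T) / max(int(vad.sum()), 1)
        Rnn = (Yf[:, ~vad] @ Yf[:, ~vad].conj().T) / max(int((~vad).sum()), 1)
        Rss = Ryy - Rnn  # estimated speech spatial covariance
        # Per-bin MWF: w = (Rss + mu * Rnn)^{-1} Rss e1
        # (small diagonal loading keeps the solve numerically stable)
        w = np.linalg.solve(Rss + mu * Rnn + 1e-9 * np.eye(M), Rss @ e1)
        # Apply the filter: s_att(omega) = w(omega)^H y(omega) for each frame
        S_att[f, :] = w.conj() @ Yf
    return S_att
```

As a usage sketch: with Y the STFT of the microphone signals and vad the (assumed known) attended-speaker activity mask, S = mwf_extract(Y, vad) yields the frequency-domain estimate, which an inverse STFT converts back to a time-domain waveform.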
“…However, in a multi-speaker scenario, a fundamental challenge is to determine which speaker the listener actually aims to focus on. Therefore, incorporating a brain-computer interface to infer the auditory attention of the listener opens up an interesting field of research aiming to build smarter hearing prostheses [3].…”
Section: Introduction
confidence: 99%