Acoustic contamination of electrophysiological brain signals during speech production and sound perception
Preprint, 2019 · DOI: 10.1101/722207

Abstract: A current challenge of neurotechnologies is the development of speech brain-computer interfaces to restore communication in people unable to speak. To achieve a proof of concept of such a system, neural activity of patients implanted for clinical reasons can be recorded while they speak. Using such simultaneously recorded audio and neural data, decoders can be built to predict speech features using features extracted from brain signals. A typical neural feature is the spectral power of field potentials in the hi…

Cited by 8 publications (14 citation statements)
References 30 publications
“…To compare spectral content between recorded audio and electrodes (Fig. 5A,B), we convolved the voltage time series of each electrode and also the audio channel with a 200 ms Hamming window and then computed the power spectral density (PSD) in non-overlapping bins using a short-time Fourier transform (as in Roussel et al 2019). We isolated 'voicing epochs' in which to compare audio and neural power time series by sub-selecting time points with summed audio power (across all frequencies) in the top ~10% of values across all audio data.…”
Section: Quantifying Acoustic Artifact and Linear Regression Reference (LRR) Decontamination
confidence: 99%
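The spectral comparison quoted above can be sketched in a few lines: a short-time Fourier transform with a 200 ms Hamming window and non-overlapping bins, followed by selection of 'voicing epochs' as the time bins whose summed audio power falls in the top ~10%. This is a minimal illustration under assumed parameters (sampling rate, synthetic stand-in signals), not the cited authors' actual pipeline.

```python
import numpy as np
from scipy.signal import stft

fs = 1000                      # assumed sampling rate, Hz (illustrative)
win = int(0.200 * fs)          # 200 ms Hamming window, as in the quote

def power_spectrogram(x, fs=fs, win=win):
    """Short-time power spectral density in non-overlapping bins."""
    f, t, Z = stft(x, fs=fs, window="hamming", nperseg=win, noverlap=0)
    return f, t, np.abs(Z) ** 2

# Synthetic stand-ins for the recorded audio and one electrode channel.
rng = np.random.default_rng(0)
audio = rng.standard_normal(fs * 10)
neural = rng.standard_normal(fs * 10)

f, t, audio_pow = power_spectrogram(audio)
_, _, neural_pow = power_spectrogram(neural)

# 'Voicing epochs': time bins whose summed audio power (across all
# frequencies) lies in the top ~10% of values.
summed = audio_pow.sum(axis=0)
voicing = summed >= np.percentile(summed, 90)

# Compare audio and neural power time series only within voicing epochs.
audio_voiced = audio_pow[:, voicing]
neural_voiced = neural_pow[:, voicing]
```

Restricting the comparison to high-audio-power bins concentrates the analysis on moments when any acoustic contamination of the electrodes would be strongest.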
“…This in turn can exaggerate neural differences between phonemes by artificially shifting what are actually condition-invariant neural signal components. The second confound, which was recently raised by Roussel and colleagues (Roussel et al, 2019) , is that mechanical vibrations due to speaking might cause microphonic artifacts in the neural recordings. Our analyses suggest that while these confounds most likely do inflate speech decoding performance, their effects are not large.…”
Section: Introduction
confidence: 99%
“…Heeding this report, we examined our results in this light. We did not find evidence for the presence of a mechanical-electrical artifact, given the observations that (a) the ECoG-to-audio correlation we report showed a lot of variability across different parts of the film, which would not be expected if acoustic waves (present throughout the movie) were driving ECoG signals; (b) ECoG-to-audio correlations were only significant at a temporal lag of up to 300 ms (Figures 2a and S5); (c) the effects we report are present at a lower frequency range than the 115 Hz and up that Roussel et al (2019) report (recalculated and shown in Figure S5).…”
Section: The Role of dPCC in Speech Perception
confidence: 51%
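The lag argument in the quote above rests on computing ECoG-to-audio correlations at a range of temporal shifts and asking where the correlation peaks. A minimal sketch of such a lagged-correlation analysis follows; function names, sampling rate, and the synthetic delayed signal are all illustrative assumptions, not the cited authors' code.

```python
import numpy as np

def lagged_correlation(neural, audio, fs, max_lag_s=0.3):
    """Pearson correlation between two power time series at integer-sample
    lags up to max_lag_s (positive lag = neural follows audio)."""
    max_lag = int(max_lag_s * fs)
    lags = np.arange(-max_lag, max_lag + 1)
    r = np.empty(lags.size)
    for i, lag in enumerate(lags):
        if lag >= 0:
            a, b = neural[lag:], audio[:audio.size - lag]
        else:
            a, b = neural[:lag], audio[-lag:]
        r[i] = np.corrcoef(a, b)[0, 1]
    return lags, r

# Synthetic check: a copy of the audio envelope delayed by 50 ms
# should produce a correlation peak at a +50 ms lag.
fs = 1000
rng = np.random.default_rng(1)
audio = rng.standard_normal(fs * 5)
delay = int(0.05 * fs)
neural = np.concatenate([np.zeros(delay), audio[:-delay]])

lags, r = lagged_correlation(neural, audio, fs)
peak_lag = lags[np.argmax(r)]
```

A genuine acoustic artifact would be expected to correlate at near-zero lag and to track the audio wherever sound is present, which is why variable, lag-limited correlations argue against contamination.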
“…A recent report (Roussel et al, 2019) raised a possibility that audio signals may affect the integrity of ECoG data, because of a specific wiring setup and injection of mechanically‐induced electrical noise. Heeding this report, we examined our results in this light.…”
Section: Discussion
confidence: 99%