Abstract: In this paper, we propose an efficient technique for estimating individual power spectral density (PSD) components, i.e., the PSD of each desired sound source as well as of noise and reverberation, in a multi-source reverberant sound scene with coherent background noise. We formulate the problem in the spherical harmonics domain to take advantage of the inherent orthogonality of the spherical harmonics basis functions, and extract the PSD components from the cross-correlation between the different sound field modes…
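The extraction step can be illustrated compactly. Below is a minimal sketch, not the paper's algorithm: it assumes the cross-PSDs between the measured spherical-harmonic modes are linear in the unknown PSD components through a known mixing matrix (the names `alpha`, `A`, and `estimate_psd_components` are ours), and recovers the components by least squares at one frequency bin.

```python
# Minimal sketch (not the authors' algorithm): the cross-PSDs of the
# spherical-harmonic (SH) sound-field coefficients are assumed to be a
# linear combination of the unknown per-source, noise, and reverberation
# PSDs, so they can be recovered per frequency bin by least squares.
# The mixing matrix A (directions + array model) is assumed known.
import numpy as np

def estimate_psd_components(alpha, A):
    """alpha: (num_modes, num_frames) STFT-domain SH coefficients at one
    frequency bin; A: (num_mode_pairs, num_components) known mixing matrix.
    Returns the estimated PSD of each component (sources, noise, reverb)."""
    num_modes, num_frames = alpha.shape
    # Cross-correlation (cross-PSD) between every pair of SH modes,
    # averaged over time frames.
    R = (alpha @ alpha.conj().T) / num_frames
    # Stack the upper-triangular cross-PSDs into an observation vector.
    iu = np.triu_indices(num_modes)
    r = R[iu]
    # Least-squares solve r = A @ psd for the PSD components.
    psd, *_ = np.linalg.lstsq(A, r, rcond=None)
    return np.maximum(psd.real, 0.0)  # PSDs are real and non-negative
```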
“…The resulting ERPs show the expected P3 response for target tones only (Polich, 2007). Advancing from this simple validation task to everyday life settings, PSD information could be used to differentiate between different sound sources (e.g., Fahim, Samarasinghe, & Abhayapala, 2018). For example, in a two-speaker scenario, PSD can be used to identify which speaker (low vs. high voice) is currently talking.…”
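As a toy illustration of that last point (our own sketch, not from the cited work), one could estimate the PSD of a short audio frame and compare its spectral centroid against a pitch threshold; the 300 Hz threshold and the function name are illustrative assumptions.

```python
# Toy illustration (ours): use a frame's PSD to guess whether the
# low-pitched or the high-pitched speaker is currently talking.
# The 300 Hz centroid threshold is an assumption for illustration only.
import numpy as np
from scipy.signal import welch

def dominant_speaker(frame, fs=16000, centroid_threshold_hz=300.0):
    """frame: 1-D array of audio samples. Returns 'low' or 'high'."""
    f, psd = welch(frame, fs=fs, nperseg=512)  # Welch PSD estimate
    centroid = np.sum(f * psd) / np.sum(psd)   # PSD-weighted mean frequency
    return "low" if centroid < centroid_threshold_hz else "high"
```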
With smartphone-based mobile electroencephalography (EEG), we can investigate sound perception beyond the lab. To understand sound perception in the real world, we need to relate naturally occurring sounds to EEG data. For this, EEG and audio information need to be synchronized precisely; only then is it possible to capture fast and transient evoked neural responses and relate them to individual sounds. We have developed Android applications (AFEx and Record-a) that allow for the concurrent acquisition of EEG data and audio features, i.e., sound onsets, average signal power (RMS), and power spectral density (PSD), on a smartphone. In this paper, we evaluate these apps by computing event-related potentials (ERPs) evoked by everyday sounds. One participant listened to piano notes (played live by a pianist) and to a home-office soundscape. Timing tests showed that the temporal precision of the system is very good. We calculated ERPs to sound onsets and observed the typical P1-N1-P2 complex of auditory processing. Furthermore, we show how to relate information on loudness (RMS) and spectra (PSD) to brain activity. In future studies, we can use this system to study sound processing in everyday life.
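To make the evaluation pipeline concrete, here is a minimal sketch of the ERP computation described above in plain NumPy; the epoch window and variable names are our assumptions, not the apps' actual API.

```python
# Minimal ERP sketch (ours): cut EEG epochs around the sound onsets
# delivered by the apps, baseline-correct, and average to obtain an ERP.
import numpy as np

def compute_erp(eeg, onsets, fs=500, tmin=-0.2, tmax=0.8):
    """eeg: (n_channels, n_samples) array; onsets: onset times in seconds.
    Returns the (n_channels, n_epoch_samples) average ERP."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for t in onsets:
        i = int(round(t * fs))
        if i - pre < 0 or i + post > eeg.shape[1]:
            continue  # skip onsets too close to the recording edges
        epoch = eeg[:, i - pre:i + post]
        # Subtract the mean of the pre-onset interval (baseline correction).
        baseline = epoch[:, :pre].mean(axis=1, keepdims=True)
        epochs.append(epoch - baseline)
    return np.mean(epochs, axis=0)  # average across epochs -> ERP
```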
“…Figure 1b depicts the localization accuracy of all estimators as a function of the reverberation level in the range T60 = {0.2, 0.3, 0.4, 0.5, 0.6} s. All the estimators remain usable even when T60 = 0.6 s. A longer reverberation time implies that the direct path is more contaminated by acoustic reflections, so the localization accuracy of all estimators degrades. It is observed that the performance of the 'Gradient descent' estimator deteriorates more severely than that of the '2-D search' and 'Decoupled' estimators in more reverberant environments. This may be attributed to the sensitivity of the gradient descent search in (13) to acoustic reflections, compared with the matching between the estimated and the theoretical RHC used by the other estimators.…” (Footnote 3: https://www.audiolabs-erlangen.de/fau/professor/habets/software/rirgenerator)
Section: Methods
“…Over the past decades, source direction-of-arrival (DOA) estimation [1,2] has been extensively investigated in the research community, since it is an essential component of many spatial signal processing techniques and applications, including source dereverberation, speech separation [3], automatic speech recognition [4], and automated camera steering [5].…”
A spherical harmonics domain source feature called relative harmonic coefficients (RHC) has recently been applied to the source direction-of-arrival (DOA) estimation problem. This paper presents a compact evaluation and comparison of two existing RHC-based DOA estimators: (i) a method using a full grid search over the two-dimensional (2-D) directional space, and (ii) a decoupled estimator that uses one-dimensional (1-D) searches to localize the source's elevation and azimuth separately. We also propose a new estimator using a gradient descent search over the 2-D directional space. Extensive experiments in both simulated and real-life environments are conducted to examine and analyze the performance of all the DOA estimators. Two objective metrics, localization accuracy and algorithm complexity, are adopted to evaluate and compare the estimators.
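To make the accuracy/complexity trade-off concrete, the following is a schematic sketch (ours, not the paper's code) of the two search strategies over a directional cost J(elevation, azimuth), where `cost` stands in for the RHC matching cost: the full 2-D grid search evaluates J everywhere, while gradient descent refines a single initial guess with far fewer evaluations.

```python
# Schematic comparison (ours) of two search strategies over a directional
# cost J(elevation, azimuth); 'cost' is a placeholder for the RHC matching
# cost between measured and theoretical relative harmonic coefficients.
import numpy as np

def doa_grid_search(cost, n_el=90, n_az=180):
    """Exhaustive 2-D search: evaluate J on a full (elevation, azimuth)
    grid and return the minimizing direction. O(n_el * n_az) evaluations."""
    els = np.linspace(0, np.pi, n_el)
    azs = np.linspace(0, 2 * np.pi, n_az, endpoint=False)
    J = np.array([[cost(el, az) for az in azs] for el in els])
    i, j = np.unravel_index(np.argmin(J), J.shape)
    return els[i], azs[j]

def doa_gradient_descent(cost, el0, az0, lr=0.05, n_iter=100, eps=1e-4):
    """Local descent from an initial guess using finite-difference
    gradients; far fewer cost evaluations than the full grid search,
    but sensitive to local minima (e.g., from acoustic reflections)."""
    el, az = el0, az0
    for _ in range(n_iter):
        g_el = (cost(el + eps, az) - cost(el - eps, az)) / (2 * eps)
        g_az = (cost(el, az + eps) - cost(el, az - eps)) / (2 * eps)
        el, az = el - lr * g_el, az - lr * g_az
    return el, az
```

The local-minima sensitivity of the descent variant is consistent with the reverberation behavior reported in the quoted Methods excerpt above.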
We propose a novel multi-source direction-of-arrival (DOA) estimation technique using a convolutional neural network that learns the modal coherence patterns of an incident sound field from measured spherical harmonic coefficients. We train our model on individual time-frequency bins of the short-time Fourier transform spectrum by analyzing the unique snapshot of modal coherence for each desired direction. The proposed method is capable of estimating multiple simultaneously active sound sources in 3D space using a single-source training scheme. This single-source training scheme reduces training time and resource requirements, and allows the same trained model to be reused for different multi-source combinations. The method is evaluated in various simulated and practical noisy and reverberant environments with varying acoustic conditions, and it outperforms the baseline methods in terms of DOA estimation accuracy. Furthermore, the proposed algorithm allows independent training of azimuth and elevation during full DOA estimation over 3D space, which significantly improves its training efficiency without affecting overall estimation accuracy.
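The modal-coherence idea can be sketched as follows (a hedged illustration of the general approach, not the authors' architecture): the cross-correlations of the measured spherical harmonic coefficients at one time-frequency bin are stacked as a two-channel image and mapped by a small CNN to per-direction scores; all layer sizes and counts here are illustrative assumptions.

```python
# Hedged sketch (ours, not the authors' architecture): a small CNN maps
# the modal-coherence snapshot of one STFT bin -- the cross-correlations
# of the measured SH coefficients, stacked as an "image" -- to per-
# direction scores. Input/output sizes are illustrative assumptions.
import torch
import torch.nn as nn

class ModalCoherenceDOA(nn.Module):
    def __init__(self, num_modes=16, num_directions=360):
        super().__init__()
        # Real and imaginary parts of the (num_modes x num_modes)
        # coherence matrix enter as two input channels.
        self.net = nn.Sequential(
            nn.Conv2d(2, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * num_modes * num_modes, num_directions),
        )

    def forward(self, coherence):  # (batch, 2, num_modes, num_modes)
        return self.net(coherence)  # per-direction logits for this TF bin
```

Trained only on single-source snapshots, such a model could still expose several simultaneously active directions at test time by accumulating per-bin scores across many time-frequency bins.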