In this paper, we present our recent studies on human perception in audio event classification using different deep learning models. In particular, the pre-trained VGGish model is used as a feature extractor to process the audio data, and a DenseNet is trained on and used as a feature extractor for our electroencephalography (EEG) data. The correlation between audio stimuli and EEG is learned in a shared space. In the experiments, we recorded the brain activities (EEG signals) of several subjects while they listened to music events from 8 audio categories selected from Google AudioSet, using a 16-channel EEG headset with active electrodes. Our experimental results demonstrate that i) audio event classification can be improved by exploiting the power of human perception, and ii) the correlation between audio stimuli and EEG can be learned to complement audio event understanding.
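The abstract does not specify how the audio–EEG correlation is computed in the shared space, but the idea can be sketched as projecting both modalities into a common embedding and scoring their agreement. Everything below is an illustrative assumption: the EEG feature size (256), the shared dimension (64), the linear projections `W_audio`/`W_eeg`, and the use of cosine similarity are all hypothetical choices, not the paper's method (VGGish's 128-d embedding size is the only value taken from the model itself).

```python
import numpy as np

rng = np.random.default_rng(0)

# VGGish outputs 128-d embeddings; the EEG feature size (256) and the
# shared-space size (64) are assumptions for illustration only.
d_audio, d_eeg, d_shared = 128, 256, 64

# Hypothetical projection matrices; in practice these would be learned.
W_audio = rng.standard_normal((d_shared, d_audio)) / np.sqrt(d_audio)
W_eeg = rng.standard_normal((d_shared, d_eeg)) / np.sqrt(d_eeg)

def to_shared(W, x):
    """Project a feature vector into the shared space and L2-normalize it."""
    z = W @ x
    return z / np.linalg.norm(z)

def correlation(audio_feat, eeg_feat):
    """Cosine similarity between the projected audio and EEG features."""
    return float(to_shared(W_audio, audio_feat) @ to_shared(W_eeg, eeg_feat))

audio_feat = rng.standard_normal(d_audio)  # stand-in for a VGGish embedding
eeg_feat = rng.standard_normal(d_eeg)      # stand-in for a DenseNet EEG embedding

score = correlation(audio_feat, eeg_feat)
assert -1.0 <= score <= 1.0  # cosine similarity is bounded
```

At training time, such a score could be pushed up for matched audio/EEG pairs and down for mismatched ones; the specific loss used in the paper is not stated in the abstract.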
<p>Ultrafast ultrasound has recently emerged as an alternative to traditional focused ultrasound. By virtue of the low number of insonifications it requires, ultrafast ultrasound enables imaging of the human body at potentially very high frame rates. However, unaccounted-for speed-of-sound variations in the insonified medium often result in phase aberrations in the reconstructed images, ultimately impeding the diagnostic capability of ultrafast ultrasound. There is therefore a strong need for adaptive beamforming methods that are resilient to speed-of-sound aberrations. Several such techniques have been proposed recently, but they often lack parallelizability or the ability to directly correct both transmit and receive phase aberrations. In this article, we introduce an adaptive beamforming method designed to address these shortcomings. To do so, we compute the windowed Radon transform of several complex radio-frequency images reconstructed using delay-and-sum. We then apply weighted rank-1 tensor decompositions to the resulting local sinograms and use their outputs to reconstruct a corrected image. We demonstrate on simulated and in-vitro data that our method successfully recovers aberration-free images and outperforms both coherent compounding and the recently introduced SVD beamformer. Finally, we validate the proposed beamforming technique on in-vivo data, obtaining a significant improvement in image quality over the two reference methods.</p>
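The core numerical operation named in the abstract, a rank-1 decomposition of a local-sinogram tensor, can be sketched with a higher-order power iteration. This is a generic unweighted rank-1 CP approximation on real data, not the paper's weighted decomposition on complex radio-frequency sinograms; the tensor sizes and the synthetic stack are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def rank1_tensor(T, n_iter=50):
    """Rank-1 approximation of a 3rd-order tensor via higher-order power
    iteration. A generic sketch; the paper uses a *weighted* variant on
    complex local sinograms, whose details are not given in the abstract."""
    a = rng.standard_normal(T.shape[0])
    b = rng.standard_normal(T.shape[1])
    c = rng.standard_normal(T.shape[2])
    for _ in range(n_iter):
        # Alternately contract the tensor against two factors and renormalize.
        a = np.einsum('ijk,j,k->i', T, b, c); a /= np.linalg.norm(a)
        b = np.einsum('ijk,i,k->j', T, a, c); b /= np.linalg.norm(b)
        c = np.einsum('ijk,i,j->k', T, a, b); c /= np.linalg.norm(c)
    s = np.einsum('ijk,i,j,k->', T, a, b, c)  # scalar weight of the rank-1 term
    return s, a, b, c

# Synthetic stand-in for a stack of local sinograms, built to be exactly
# rank-1 so the decomposition should recover it to numerical precision.
a0 = rng.standard_normal(8)
b0 = rng.standard_normal(16)
c0 = rng.standard_normal(5)
T = np.einsum('i,j,k->ijk', a0, b0, c0)

s, a, b, c = rank1_tensor(T)
T_hat = s * np.einsum('i,j,k->ijk', a, b, c)
assert np.allclose(T, T_hat, atol=1e-8)  # exact recovery on rank-1 input
```

On real data the local sinograms are only approximately rank-1, and the recovered factors would then be used to estimate and compensate the aberration phase before reconstructing the corrected image.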