Deep learning (DL) has proved successful in medical imaging and, in the wake of the recent COVID-19 pandemic, some works have started to investigate DL-based solutions for the assisted diagnosis of lung diseases. While existing works focus on CT scans, this paper studies the application of DL techniques to the analysis of lung ultrasonography (LUS) images. Specifically, we present a novel fully annotated dataset of LUS images collected from several Italian hospitals, with labels indicating the degree of disease severity at the frame level, video level, and pixel level (segmentation masks). Leveraging these data, we introduce several deep models that address relevant tasks for the automatic analysis of LUS images. In particular, we present a novel deep network, derived from Spatial Transformer Networks, which simultaneously predicts the disease severity score associated with an input frame and localizes pathological artefacts in a weakly supervised way. Furthermore, we introduce a new method based on uninorms for effective aggregation of frame scores at the video level. Finally, we benchmark state-of-the-art deep models for estimating pixel-level segmentations of COVID-19 imaging biomarkers. Experiments on the proposed dataset demonstrate satisfactory results on all the considered tasks, paving the way to future research on DL for the assisted diagnosis of COVID-19 from LUS data.
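To give a concrete sense of uninorm-based score aggregation, the sketch below folds per-frame severity probabilities into a single video-level score using the classical cross-ratio (3-Π) uninorm with neutral element 0.5. This is only an illustrative assumption: the abstract does not specify which uninorm the paper adopts, and the function names here are hypothetical.

```python
import numpy as np

def cross_ratio_uninorm(x, y):
    """Cross-ratio (3-Pi) uninorm with neutral element 0.5.

    It behaves like a conjunction when both inputs are below 0.5 and like
    a disjunction when both are above, so confidently pathological frames
    push the aggregate up, confidently healthy frames pull it down, and
    frames scored near 0.5 leave it unchanged.
    """
    num = x * y
    den = x * y + (1.0 - x) * (1.0 - y)
    return 1.0 if den == 0.0 else num / den


def aggregate_video_score(frame_scores):
    """Fold per-frame severity probabilities in [0, 1] into one video score."""
    score = 0.5  # neutral element: no frames means no evidence either way
    for s in np.clip(frame_scores, 1e-6, 1.0 - 1e-6):  # avoid the 0/0 corner
        score = cross_ratio_uninorm(score, s)
    return score


print(aggregate_video_score([0.2, 0.9, 0.8, 0.55]))  # ~0.92, driven by the confident frames
```

Unlike simple averaging, this aggregation lets a few high-confidence pathological frames dominate the video-level decision, which is the usual motivation for uninorm operators in frame-to-video fusion.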
3-D contrast-enhanced ultrasound enables better visualization of inherently 3-D vascular geometries than imaging a single intersecting plane, and it would additionally allow motion correction in all directions. Both contrast detection and motion correction work better on high-frame-rate data; however, high-frame-rate 3-D ultrasound imaging with dense matrix arrays is challenging to realize. Sparse arrays alleviate some of the limitations in cable count and data rate that fully populated arrays encounter, but their increased secondary-lobe levels degrade image contrast, while the unfocused transmit beams needed to achieve high frame rates degrade resolution. Here we propose to use adaptive beamforming by deep learning (ABLE) to improve the quality of contrast-enhanced ultrasound images acquired with a sparse spiral array. We train the neural network on simulated data and evaluate it on simulated images and on in vivo images of an ex ovo chicken embryo. ABLE improved resolution compared to delay-and-sum (DAS) and spatial coherence (SC) beamforming on both the simulated and the in vivo data. The qualitative improvements persisted after histogram matching, indicating that the image-quality gain of ABLE was not purely due to dynamic-range stretching.
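The histogram-matching check mentioned above can be sketched as follows, assuming the beamformed outputs are available as 2-D NumPy arrays; the use of scikit-image and the helper name below are illustrative, not the authors' actual pipeline. Matching the grey-level distribution of the baseline (e.g. DAS) image to the ABLE image before visual comparison factors out differences that are purely due to dynamic-range stretching.

```python
import numpy as np
from skimage.exposure import match_histograms

def match_baseline_to_able(able_img, das_img):
    """Match the DAS image's grey-level distribution to the ABLE image.

    If the ABLE image still appears sharper or higher in contrast after
    this step, the improvement cannot be attributed to dynamic-range
    stretching alone.
    """
    return match_histograms(das_img, able_img)

# Toy usage with random arrays standing in for log-compressed envelope images.
rng = np.random.default_rng(0)
able = rng.random((128, 128))
das = 0.5 * rng.random((128, 128))  # narrower dynamic range than the ABLE image
das_matched = match_baseline_to_able(able, das)
print(das.max(), das_matched.max())  # matched image spans the ABLE image's range
```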