Conventional photoacoustic imaging may suffer from the limited view and bandwidth of ultrasound transducers. A deep learning approach is proposed to handle these problems and is demonstrated both in simulations and in experiments on a multi-scale leaf skeleton model. We employed an experimental approach to build the training and test sets, using photographs of the samples as ground-truth images. Reconstructions produced by the neural network show greatly improved image quality as compared to conventional approaches. In addition, this work aims to quantify the reliability of the neural network predictions. To achieve this, the Monte Carlo dropout procedure is applied to estimate a pixel-wise degree of confidence for each predicted picture. Finally, we address the possibility of using transfer learning with simulated data in order to drastically limit the size of the experimental dataset.
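The Monte Carlo dropout procedure mentioned above can be sketched as follows: dropout is kept active at inference time, the network is run T times on the same input, and the per-pixel standard deviation across the T stochastic predictions serves as the confidence map. The toy two-layer network and all weights below are illustrative stand-ins, not the authors' architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_with_dropout(x, w1, w2, p=0.5):
    """One stochastic forward pass: dropout stays ACTIVE at inference."""
    h = np.maximum(x @ w1, 0.0)                 # hidden layer, ReLU
    mask = rng.random(h.shape) > p              # Bernoulli dropout mask
    h = h * mask / (1.0 - p)                    # inverted-dropout scaling
    return h @ w2                               # linear output "image"

# Toy weights and input standing in for a trained reconstruction network.
x = rng.standard_normal((1, 16))
w1 = rng.standard_normal((16, 32))
w2 = rng.standard_normal((32, 64))              # 64 output "pixels"

T = 200                                         # number of MC samples
samples = np.stack([forward_with_dropout(x, w1, w2) for _ in range(T)])

prediction = samples.mean(axis=0)               # per-pixel prediction
uncertainty = samples.std(axis=0)               # per-pixel confidence map
print(prediction.shape, uncertainty.shape)      # (1, 64) (1, 64)
```

Pixels where the T stochastic predictions disagree get a high standard deviation, flagging regions of the reconstruction that should not be trusted.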
Photoacoustic fluctuation imaging, which exploits randomness in photoacoustic generation, provides enhanced images in terms of resolution and visibility, as compared to conventional photoacoustic images. While a few experimental demonstrations of photoacoustic fluctuation imaging have been reported, it has to date not been described theoretically. In the first part of this work, we propose a theory relevant to fluctuations induced either by random illumination patterns or by random distributions of absorbing particles. The theoretical predictions are validated by Monte Carlo finite-difference time-domain simulations of photoacoustic generation in random particle media. We provide physical insight into why visibility artefacts are absent from second-order fluctuation images. In the second part, we demonstrate experimentally that harnessing the randomness induced by the flow of red blood cells produces photoacoustic fluctuation images free of visibility artefacts. As a first proof of concept, we obtain two-dimensional images of blood vessel phantoms. Photoacoustic fluctuation imaging is finally applied in vivo to obtain 3D images of the vascularization in a chicken embryo.
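The core of a second-order fluctuation image is simple to illustrate: given a stack of photoacoustic frames in which the absorbers (e.g. flowing red blood cells) redistribute from frame to frame, the pixel-wise standard deviation over the stack is non-zero only where the signal actually fluctuates. The simulation below is a minimal one-dimensional sketch, not the paper's FDTD model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate N photoacoustic frames of a "vessel": only pixels inside the
# vessel fluctuate frame-to-frame (moving absorbers); background is static.
n_frames, n_pix = 500, 100
vessel = np.zeros(n_pix, dtype=bool)
vessel[40:60] = True

frames = np.ones((n_frames, n_pix))                 # static mean signal
frames[:, vessel] += rng.standard_normal((n_frames, vessel.sum()))

mean_image = frames.mean(axis=0)    # conventional (mean) image
fluct_image = frames.std(axis=0)    # second-order fluctuation image

# The fluctuation image lights up only where the signal fluctuates,
# i.e. inside the vessel.
print(fluct_image[vessel].mean() > fluct_image[~vessel].mean())  # True
```

In a real limited-view acquisition, the mean image would carry visibility artefacts (missing vessel boundaries), while the fluctuation image, built from the variance of the frame-to-frame differences, does not depend on those stationary components.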
We present a method and setup that provide complementary three-dimensional (3D) images of blood oxygenation (via quantitative photoacoustic imaging) and blood flow dynamics (via ultrasound Doppler). The proposed approach is label-free, exploits blood-induced fluctuations, and is implemented on a sparse array with only 256 elements driven by commercially available ultrasound electronics. We first implement 3D photoacoustic fluctuation imaging (PAFI) to image a chicken embryo and obtain full-visibility images of the vascular morphology; we simultaneously obtain 3D ultrasound power Doppler images of comparable quality. We then introduce multispectral photoacoustic fluctuation imaging (MS-PAFI) and demonstrate that it can provide quantitative measurements of the absorbed optical energy density with full visibility and enhanced contrast, as compared to conventional delay-and-sum multispectral photoacoustic imaging. We finally showcase the synergy and complementarity between MS-PAFI, which provides 3D quantitative oxygenation (SO$_2$) imaging, and 3D ultrasound Doppler, which provides quantitative information on blood flow dynamics. MS-PAFI represents a promising alternative to model-based inversions, with the advantage of resolving all visibility artefacts without priors or regularization, by use of a straightforward processing scheme.
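Multispectral oxygenation estimation of the kind described above typically reduces to linear spectral unmixing: the absorbed energy measured at each wavelength is a weighted sum of oxy- and deoxyhemoglobin contributions, and SO$_2$ is the oxyhemoglobin fraction of the recovered concentrations. The sketch below uses made-up extinction coefficients for two hypothetical wavelengths; real values would come from published hemoglobin absorption tables.

```python
import numpy as np

# Illustrative (NOT tabulated) extinction coefficients of HbO2 and Hb
# at two hypothetical wavelengths; rows = wavelengths, cols = [HbO2, Hb].
E = np.array([[2.0, 8.0],
              [6.0, 3.0]])

# Synthetic absorbed-energy spectrum for one pixel with 80% oxygenation.
c_true = np.array([0.8, 0.2])           # concentrations [C_HbO2, C_Hb]
mu_a = E @ c_true                        # "measured" energy per wavelength

# Least-squares unmixing, then SO2 = C_HbO2 / (C_HbO2 + C_Hb).
c_hat, *_ = np.linalg.lstsq(E, mu_a, rcond=None)
so2 = c_hat[0] / c_hat.sum()
print(round(so2, 3))   # 0.8
```

With more than two wavelengths the same least-squares step becomes overdetermined, which is how a multispectral acquisition improves robustness to noise in the per-wavelength energy estimates.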