Effects of skin tone on photoacoustic imaging and oximetry
Thomas R. Else,
Lina Hacker,
Janek Gröhl
et al.
Abstract
Significance
Photoacoustic imaging (PAI) provides contrast based on the concentration of optical absorbers in tissue, enabling the assessment of functional physiological parameters such as blood oxygen saturation (sO₂). Recent evidence suggests that variation in melanin levels in the epidermis leads to measurement biases in optical technologies, which could potentially limit the application of these biomarkers in diverse populations.
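In PAI, sO₂ is typically estimated by linear spectral unmixing of multiwavelength signals into oxy- and deoxyhemoglobin contributions. A minimal two-wavelength sketch is below; the extinction coefficients are illustrative round numbers (tabulated values, e.g. from Prahl's compilation, should be used in practice), and the unmixing assumes fluence-corrected amplitudes, which is precisely the step that epidermal melanin can bias.

```python
import numpy as np

# Illustrative molar extinction coefficients (cm^-1 / M) for [HbO2, Hb];
# approximate values, used here only to make the sketch runnable.
E = np.array([
    [518.0, 1405.0],   # 750 nm
    [1058.0, 691.0],   # 850 nm
])

def unmix_so2(pa_amplitudes):
    """Estimate sO2 by linear spectral unmixing of photoacoustic amplitudes.

    pa_amplitudes: initial-pressure amplitudes at the two wavelengths,
    assumed proportional to local absorption (i.e. fluence already
    corrected -- the assumption that skin pigmentation perturbs).
    """
    c_hbo2, c_hb = np.linalg.lstsq(E, np.asarray(pa_amplitudes, float),
                                   rcond=None)[0]
    return c_hbo2 / (c_hbo2 + c_hb)

# Synthetic check: a signal generated from 70% sO2 is recovered exactly
signal = E @ np.array([0.70, 0.30])
print(round(unmix_so2(signal), 3))  # 0.7
```

With only two wavelengths the system is exactly determined; real pipelines use more wavelengths and solve the same least-squares problem.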
Aim
To …
“…Additionally, extending applicability to a wider range of tissue types and chromophores, beyond those included in its initial training, would be important. All of the animal studies undertaken here were in nude mice, which lack skin pigmentation, however, skin tone is a consideration that is gaining greater attention in the PAI community 53 and data from a range of skin tones would be needed to maximise applicability of VAN-GAN in future.…”
Innovations in imaging hardware have led to a revolution in our ability to visualise vascular networks in 3D at high resolution. The segmentation of microvascular networks from these 3D image volumes and interpretation of their meaning in the context of physiological and pathological processes unfortunately remains a time-consuming and error-prone task. Deep learning has the potential to solve this problem, but current supervised analysis frameworks require human-annotated ground truth labels. To overcome these limitations, we present an unsupervised image-to-image translation deep learning model called the vessel segmentation generative adversarial network (VAN-GAN). VAN-GAN integrates synthetic blood vessel networks that closely resemble real-life anatomy into its training process and learns to replicate the underlying physics of an imaging system in order to segment vasculature from 3D biomedical images. To demonstrate the potential of VAN-GAN, the framework was applied to the challenge of segmenting vascular networks from images acquired using mesoscopic photoacoustic imaging (PAI). With a variety of in silico, in vitro and in vivo pathophysiological data, including patient-derived breast cancer xenograft models, we show that VAN-GAN facilitates accurate and unbiased segmentation of 3D vascular networks from PAI volumes. By leveraging synthetic data to reduce the reliance on manual labelling, VAN-GAN lowers the barriers to entry for high-quality blood vessel segmentation to benefit imaging studies of vascular structure and function.
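A key ingredient of the approach described above is a supply of synthetic 3D vessel volumes for unpaired training. The sketch below generates a deliberately minimal stand-in (straight cylinders in a binary volume); the real training data described in the paper uses anatomically realistic, branching networks and a physics model of image formation, neither of which is reproduced here.

```python
import numpy as np

def synthetic_vessel_volume(shape=(64, 64, 64), n_vessels=5,
                            radius=2.0, seed=0):
    """Generate a binary 3D volume containing straight cylindrical 'vessels'.

    A toy illustration of synthetic-label generation; not the vessel
    model used by VAN-GAN.
    """
    rng = np.random.default_rng(seed)
    zz, yy, xx = np.indices(shape)
    pts = np.stack([zz, yy, xx], axis=-1).astype(float)
    vol = np.zeros(shape, dtype=bool)
    for _ in range(n_vessels):
        p0 = rng.uniform(0, min(shape), size=3)   # a point on the vessel axis
        d = rng.normal(size=3)
        d /= np.linalg.norm(d)                    # unit direction vector
        rel = pts - p0
        # perpendicular distance from each voxel to the line p0 + t*d
        dist = np.linalg.norm(np.cross(rel, d), axis=-1)
        vol |= dist <= radius
    return vol

vol = synthetic_vessel_volume()
print(vol.shape, bool(vol.any()))  # (64, 64, 64) True
```

In an unpaired image-to-image translation setup, volumes like this serve as the "segmentation domain" while real images form the other domain, so no voxel-wise human annotation is needed.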
“…It is worth noting that also NIRS devices (due to the similarity with pulse oximeters 25) and, more generally, all light-based devices might be influenced by skin pigmentation. Recent studies on the photoacoustic technique demonstrated measurement bias, including an overestimation of StO2, in darker skin types 26,27.…”
Recently, skin pigmentation has been shown to affect the performance of pulse oximeters and other light-based techniques such as photoacoustic imaging, tissue oximetry, and continuous wave near infrared spectroscopy. Evaluating robustness to changes in skin pigmentation is therefore essential for the proper clinical use of optical technologies. We conducted systematic time domain near infrared spectroscopy measurements on calibrated tissue phantoms and in vivo on volunteers during static and dynamic (i.e., arterial occlusion) protocols. To simulate varying melanosome volume fractions in the skin, we inserted thin tissue phantoms made of silicone and nigrosine (skin phantoms) between the target sample and the measurement probe. Additionally, we conducted an extensive measurement campaign on a large cohort of pediatric subjects, covering the full spectrum of skin pigmentation. Our findings consistently demonstrate that skin pigmentation has a negligible effect on time domain near infrared spectroscopy results, underscoring the reliability and potential of this emerging technology in diverse clinical settings.
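The melanosome volume fraction mimicked by such skin phantoms maps onto an epidermal absorption coefficient. A rough sketch, using a commonly cited empirical power-law for melanosome absorption (after S. Jacques); the constants and the pigment-free baseline term are illustrative assumptions, not values from this study:

```python
def mua_epidermis(wavelength_nm, f_mel):
    """Approximate epidermal absorption coefficient (cm^-1).

    f_mel: melanosome volume fraction (roughly 0.01-0.43 across skin
    types in the literature). Constants are an assumed empirical fit,
    for illustration only.
    """
    mua_mel = 6.6e11 * wavelength_nm ** -3.33    # melanosome interior
    mua_base = 7.84e8 * wavelength_nm ** -3.255  # pigment-free baseline
    return f_mel * mua_mel + (1.0 - f_mel) * mua_base

# Lightly vs. heavily pigmented skin at a typical NIR wavelength (800 nm)
for f in (0.02, 0.15, 0.40):
    print(f, round(mua_epidermis(800, f), 2))
```

The monotonic rise of absorption with melanosome fraction (and its fall with wavelength) is what makes skin tone a confounder for continuous wave devices, while time domain NIRS can separate the superficial absorbing layer from the deeper signal.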
Mesoscopic photoacoustic imaging (PAI) enables label‐free visualization of vascular networks in tissues with high contrast and resolution. Segmenting these networks from 3D PAI data and interpreting their physiological and pathological significance is crucial yet challenging due to the time‐consuming and error‐prone nature of current methods. Deep learning offers a potential solution; however, supervised analysis frameworks typically require human‐annotated ground‐truth labels. To address this, an unsupervised image‐to‐image translation deep learning model is introduced, the Vessel Segmentation Generative Adversarial Network (VAN‐GAN). VAN‐GAN integrates synthetic blood vessel networks that closely resemble real‐life anatomy into its training process and learns to replicate the underlying physics of the PAI system in order to segment vasculature from 3D photoacoustic images. Applied to a diverse range of in silico, in vitro, and in vivo data, including patient‐derived breast cancer xenograft models and 3D clinical angiograms, VAN‐GAN demonstrates its capability to facilitate accurate and unbiased segmentation of 3D vascular networks. By leveraging synthetic data, VAN‐GAN reduces the reliance on manual labeling, thus lowering the barrier to entry for high‐quality blood vessel segmentation (F1 score: VAN‐GAN vs. U‐Net = 0.84 vs. 0.87) and enhancing preclinical and clinical research into vascular structure and function.
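For binary segmentation masks, the F1 score quoted above (VAN‐GAN 0.84 vs. supervised U‐Net 0.87) is equivalent to the Dice coefficient. A minimal implementation on toy masks:

```python
import numpy as np

def f1_score(pred, truth):
    """F1 (equivalently, Dice) score between two binary segmentation masks."""
    pred = np.asarray(pred, bool)
    truth = np.asarray(truth, bool)
    tp = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * tp / denom if denom else 1.0

truth = np.zeros((4, 4), bool)
truth[1:3, 1:3] = True        # 4 true-positive voxels available
pred = np.zeros((4, 4), bool)
pred[1:3, 1:4] = True         # 6 predicted voxels, 4 overlapping
print(f1_score(pred, truth))  # 2*4 / (6+4) = 0.8
```

The score rewards overlap symmetrically, so a model that over-segments (many false positives) is penalised just as one that under-segments.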