Wearable devices that monitor food intake through passive sensing are slowly emerging to complement self-reporting of users' caloric intake and eating behaviors. Although the ultimate goal of passive eating detection is to become a reliable gold standard in dietary assessment, it currently shows promise as a means of validating self-report measures. Continuous food-intake monitoring allows reported data to be validated or rejected, yielding more reliable user information and, in turn, more effective health intervention services. Recognizing the importance and strength of wearable sensors for food-intake monitoring, a variety of approaches have been proposed and studied in recent years. While existing technologies show promise, many of the challenges and opportunities discussed in this survey still remain. This paper presents a thorough review of the latest sensing platforms and data-analytic approaches to food-intake monitoring, ranging from ear-based systems that detect chewing and swallowing gestures to wearable cameras that identify food types and caloric content through image processing. The paper focuses on comparing technologies and approaches with respect to user comfort, body location, and applications in medical research. We identify and summarize the forthcoming opportunities and challenges in wearable food-intake monitoring technologies.
One primary technical challenge in photoacoustic microscopy (PAM) is the necessary compromise between spatial resolution and imaging speed. In this study, we propose a novel application of deep learning to reconstruct undersampled PAM images and transcend the trade-off between spatial resolution and imaging speed. We compared various convolutional neural network (CNN) architectures and selected a fully dense U-Net (FD U-Net) model that produced the best results. To mimic various undersampling conditions in practice, we artificially downsampled fully sampled PAM images of mouse brain vasculature at different ratios. This allowed us not only to definitively establish the ground truth, but also to train and test our deep learning model under various imaging conditions. Our results and numerical analysis collectively demonstrate the robust performance of our model in reconstructing PAM images with as few as 2% of the original pixels, which may effectively shorten the imaging time without substantially sacrificing image quality.
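The evaluation pipeline described above (artificially discard pixels from a fully sampled image, reconstruct, then score against the known ground truth) can be sketched as follows. This is an illustrative sketch, not the authors' code: `undersample` and `psnr` are hypothetical helper names, and images are represented as plain lists of rows for simplicity.

```python
import math, random

def undersample(image, keep_ratio, seed=0):
    """Zero out all but a random fraction of pixels, mimicking sparse scanning.

    `image` is a list of rows (lists of floats); `keep_ratio` is the fraction
    of pixels retained (e.g. 0.02 for the 2% sampling reported in the study).
    """
    rng = random.Random(seed)
    h, w = len(image), len(image[0])
    kept = set(rng.sample(range(h * w), int(keep_ratio * h * w)))
    return [[image[r][c] if r * w + c in kept else 0.0 for c in range(w)]
            for r in range(h)]

def psnr(reference, test, peak=1.0):
    """Peak signal-to-noise ratio, a common image-reconstruction metric."""
    h, w = len(reference), len(reference[0])
    mse = sum((reference[r][c] - test[r][c]) ** 2
              for r in range(h) for c in range(w)) / (h * w)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)
```

A CNN such as the FD U-Net would then be trained to map the undersampled image back to the fully sampled one, with PSNR (or a similar metric) quantifying how much quality is recovered.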
With balanced spatial resolution, penetration depth, and imaging speed, photoacoustic computed tomography (PACT) is promising for clinical translation, such as in breast cancer screening, functional brain imaging, and surgical guidance. Typically using a linear ultrasound (US) transducer array, PACT has great flexibility for hand-held applications. However, the linear US transducer array has a limited detection angle range and frequency bandwidth, resulting in limited-view and limited-bandwidth artifacts in the reconstructed PACT images. These artifacts significantly reduce the imaging quality. To address these issues, existing solutions often have to pay the price of system complexity, cost, and/or imaging speed. Here, we propose a deep-learning-based method that explores the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) to reduce the limited-view and limited-bandwidth artifacts in PACT. Compared with existing reconstruction and convolutional neural network approaches, our model has shown improvement in imaging quality and resolution. Our results on simulation, phantom, and in vivo data have collectively demonstrated the feasibility of applying WGAN-GP to improve PACT's image quality without any modification to the current imaging setup. Impact statement: This study has the following main impacts. It offers a promising solution for removing the limited-view and limited-bandwidth artifacts in PACT using a linear-array transducer and conventional image reconstruction, which have long hindered its clinical translation. Our solution shows unprecedented artifact removal ability for in vivo images, which may enable important applications such as imaging tumor angiogenesis and hypoxia. The study reports, for the first time, the use of an advanced deep-learning model based on a stabilized generative adversarial network. Our results have demonstrated its superiority over other state-of-the-art deep-learning methods.
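For context, the WGAN-GP objective referenced above (in its standard published form; the abstract does not give the paper's exact loss) stabilizes GAN training by replacing weight clipping with a penalty that drives the critic's gradient norm toward 1:

```latex
\mathcal{L} =
\underbrace{\mathbb{E}_{\tilde{x}\sim\mathbb{P}_g}\big[D(\tilde{x})\big]
- \mathbb{E}_{x\sim\mathbb{P}_r}\big[D(x)\big]}_{\text{Wasserstein critic loss}}
+ \lambda\,
\underbrace{\mathbb{E}_{\hat{x}\sim\mathbb{P}_{\hat{x}}}
\big[\big(\lVert\nabla_{\hat{x}} D(\hat{x})\rVert_2 - 1\big)^2\big]}_{\text{gradient penalty}}
```

Here $\mathbb{P}_r$ and $\mathbb{P}_g$ are the real and generated distributions, and $\hat{x}$ is sampled along straight lines between real and generated samples; it is this penalized, stabilized training that the abstract's "stabilized generative adversarial network" refers to.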
High-speed, high-resolution imaging of whole-brain hemodynamics is critically important for neurovascular research. High imaging speed and image quality are crucial for visualizing real-time hemodynamics in complex brain vascular networks and for tracking fast pathophysiological activities at the microvessel level, which will enable advances in open questions in neurovascular and brain metabolism research, including stroke, dementia, and acute brain injury. Further, real-time imaging of the oxygen saturation of hemoglobin (sO2) can capture fast-paced oxygen delivery dynamics, which is needed to address pertinent questions in these fields and beyond. Here, we present a novel ultrafast functional photoacoustic microscopy (UFF-PAM) system to image whole-brain hemodynamics and oxygenation. UFF-PAM takes advantage of several key engineering innovations, including stimulated Raman scattering (SRS) based dual-wavelength laser excitation, a water-immersible 12-facet polygon scanner, a high-sensitivity ultrasound transducer, and deep-learning-based image upsampling. A volumetric imaging rate of 2 Hz has been achieved over a field of view (FOV) of 11 × 7.5 × 1.5 mm³ with a high spatial resolution of ~10 μm. Using the UFF-PAM system, we have demonstrated proof-of-concept studies on mouse brains in response to systemic hypoxia, sodium nitroprusside, and stroke. We observed the mouse brain's fast morphological and functional changes over the entire cortex, including vasoconstriction, vasodilation, and deoxygenation. More interestingly, for the first time, with the whole-brain FOV and microvessel resolution, we captured vasoconstriction and hypoxia simultaneously in the spreading depolarization (SD) wave. We expect the new imaging technology will hold great potential for fundamental brain research under various pathological and physiological conditions.
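A back-of-the-envelope calculation from the numbers quoted above shows why deep-learning-based upsampling is part of the design: sampling the full lateral FOV at a Nyquist-style step would demand an A-line rate in the MHz range. The half-resolution sampling step is a textbook assumption for illustration, not a quoted specification.

```python
# A-line budget for the reported UFF-PAM field of view.
# FOV and resolution figures come from the abstract; the Nyquist-style
# step of half the ~10 um resolution is an illustrative assumption.
fov_x_mm, fov_y_mm = 11.0, 7.5      # lateral field of view
resolution_um = 10.0                 # reported spatial resolution
step_um = resolution_um / 2          # assumed Nyquist sampling step
volume_rate_hz = 2                   # reported volumetric imaging rate

lines_x = int(fov_x_mm * 1000 / step_um)        # A-lines along x
lines_y = int(fov_y_mm * 1000 / step_um)        # A-lines along y
a_lines_per_volume = lines_x * lines_y          # full-density A-lines per volume
a_line_rate = a_lines_per_volume * volume_rate_hz  # required A-lines per second

print(a_lines_per_volume)  # 3.3 million A-lines per full-density volume
print(a_line_rate)         # 6.6 million A-lines per second equivalent
```

Imaging at a sparser grid and restoring density computationally relaxes this requirement, which is exactly the role the abstract assigns to deep-learning-based image upsampling.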
Photoacoustic tomography (PAT), or optoacoustic tomography, has achieved remarkable progress in the past decade, benefiting from joint developments in optics, acoustics, chemistry, computing, and mathematics. Unlike pure optical or ultrasound imaging, PAT can provide unique optical absorption contrast as well as widely scalable spatial resolution, penetration depth, and imaging speed. Moreover, PAT has inherent sensitivity to tissue's functional, molecular, and metabolic state. With these merits, PAT has been applied in a wide range of life science disciplines and has enabled biomedical research unattainable by other imaging methods. This Review aims to introduce state-of-the-art PAT technologies and their representative applications. The focus is on recent technological breakthroughs in structural, functional, and molecular PAT, including super-resolution imaging, real-time small-animal whole-body imaging, and high-sensitivity functional/molecular imaging. We also discuss the remaining challenges in PAT and the envisioned opportunities.
Photoacoustic microscopy (PAM) is an emerging imaging method combining light and sound. However, limited by the laser's repetition rate, state-of-the-art high-speed PAM technology often sacrifices spatial sampling density (i.e., undersampling) for increased imaging speed over a large field of view. Deep learning (DL) methods have recently been used to improve sparsely sampled PAM images; however, these methods often require time-consuming pre-training and large training datasets with ground truth. Here, we propose the use of deep image prior (DIP) to improve the image quality of undersampled PAM images. Unlike other DL approaches, DIP requires neither pre-training nor fully sampled ground truth, enabling its flexible and fast implementation on various imaging targets. Our results have demonstrated substantial improvement in PAM images with as few as 1.4% of the fully sampled pixels on high-speed PAM. Our approach outperforms interpolation, is competitive with pre-trained supervised DL methods, and is readily translatable to other high-speed, undersampled imaging modalities.
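The core idea behind DIP, namely recovering missing samples using only the observed data plus a structural prior, with no training set, can be illustrated with a deliberately simplified stand-in: here a quadratic smoothness penalty plays the role of the CNN's structural prior, and a 1-D signal stands in for an image. This is an analogy for intuition, not the paper's method or architecture.

```python
def inpaint_smooth(observed, mask, lam=1.0, lr=0.1, iters=2000):
    """Recover a signal from a sparse subset of samples, with no training data.

    Simplified stand-in for deep image prior (DIP): a smoothness penalty
    regularizes the unobserved samples, and only the masked (observed)
    values enter the data-fidelity term -- no ground truth is used.
    """
    n = len(observed)
    x = [0.0] * n                      # start from an uninformative estimate
    for _ in range(iters):
        grad = [0.0] * n
        for i in range(n):
            if mask[i]:                # data fidelity on observed samples only
                grad[i] += 2 * (x[i] - observed[i])
            if i > 0:                  # smoothness prior (discrete Laplacian)
                grad[i] += 2 * lam * (x[i] - x[i - 1])
            if i < n - 1:
                grad[i] += 2 * lam * (x[i] - x[i + 1])
        x = [xi - lr * gi for xi, gi in zip(x, grad)]
    return x
```

In actual DIP, the smoothness term is replaced by the implicit bias of an untrained CNN whose output is fit to the observed pixels; the defining property shown here, optimization against sparse observations without any pre-training, is the same.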
Optical-resolution photoacoustic microscopy (OR-PAM) can provide functional, anatomical, and molecular images at micrometer-level resolution with an imaging depth of less than 1 mm in tissue. However, the imaging speed of traditional OR-PAM is often low due to point-by-point mechanical scanning, and thus it cannot capture time-sensitive dynamic information. In this work, we demonstrate a recent effort to improve the imaging speed of OR-PAM using a newly developed water-immersible two-axis scanner. Driven by a water-compatible electromagnetic actuation force, the new scanning mirror employs a novel torsion-bending mechanism to achieve fast 2D scanning. The torsion scanning along the fast axis works in the resonant mode, and the bending scanning along the slow axis operates in the quasi-static mode. The scanning speed and scanning range along the two axes can be independently adjusted. Steered by the two-axis torsion-bending scanning mirror immersed in water, the focused excitation light and the generated acoustic wave can be confocally aligned over the entire imaging area. Thus, a high imaging speed can be achieved without sacrificing the detection sensitivity. Equipped with the torsion-bending scanner, the high-speed OR-PAM system has achieved a cross-sectional frame rate of 400 Hz and a volumetric imaging speed of 1 Hz over a field of view of 1.5 × 2.5 mm². We have also demonstrated high-speed OR-PAM of the hemodynamic changes in response to pharmaceutical and physiological challenges in small animal models in vivo. We expect the torsion-bending scanner based OR-PAM will find broad applications in biomedical studies of tissue dynamics.
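The quoted frame and volume rates imply the slow-axis sampling density, which can be checked with a few lines of arithmetic. Only the 400 Hz B-scan rate, 1 Hz volume rate, and the FOV dimensions come from the text; assigning the 2.5 mm dimension to the slow axis is an illustrative assumption.

```python
# Implied slow-axis sampling for the torsion-bending OR-PAM system,
# using figures quoted in the text. Assigning 2.5 mm to the slow axis
# is an illustrative assumption.
b_scan_rate_hz = 400     # reported cross-sectional frame rate (fast axis)
volume_rate_hz = 1       # reported volumetric imaging speed
slow_axis_mm = 2.5       # assumed slow-axis extent of the 1.5 x 2.5 mm^2 FOV

b_scans_per_volume = b_scan_rate_hz // volume_rate_hz   # B-scans per volume
step_um = slow_axis_mm * 1000 / b_scans_per_volume      # slow-axis step size

print(b_scans_per_volume)  # 400 B-scans per volume
print(step_um)             # 6.25 um between adjacent B-scans
```

A ~6 μm step between B-scans is finer than typical OR-PAM lateral resolution, consistent with the claim that the speed gain does not come at the cost of spatial sampling over this FOV.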