Label-free cell analysis is essential to personalized genomics, cancer diagnostics, and drug development, as it avoids the adverse effects of staining reagents on cellular viability and cell signaling. However, currently available label-free cell assays mostly rely on a single feature and lack sufficient differentiation power. Moreover, the sample size analyzed by these assays is limited by their low throughput. Here, we integrate feature extraction and deep learning with high-throughput quantitative imaging enabled by photonic time stretch, achieving record-high accuracy in label-free cell classification. Our system captures quantitative optical phase and intensity images and extracts multiple biophysical features of individual cells. These biophysical measurements form a hyperdimensional feature space in which supervised learning is performed for cell classification. We compare various learning algorithms, including artificial neural networks, support vector machines, logistic regression, and a novel deep learning pipeline that adopts global optimization of receiver operating characteristics. As a validation of the enhanced sensitivity and specificity of our system, we demonstrate classification of white blood T-cells against colon cancer cells, as well as of lipid-accumulating algal strains for biofuel production. This system opens up a new path to data-driven phenotypic diagnosis and a better understanding of heterogeneous gene expression in cells.
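The classifier comparison described above can be sketched with scikit-learn. This is a minimal illustration, not the authors' pipeline: the features and labels here are synthetic stand-ins for the extracted biophysical features, and the models are compared by area under the ROC curve.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for the hyperdimensional biophysical feature space
X, y = make_classification(n_samples=2000, n_features=16,
                           n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

models = {
    "logistic_regression": LogisticRegression(max_iter=1000),
    "svm": SVC(probability=True, random_state=0),
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,),
                                    max_iter=500, random_state=0),
}
aucs = {}
for name, model in models.items():
    model.fit(X_train, y_train)
    scores = model.predict_proba(X_test)[:, 1]
    aucs[name] = roc_auc_score(y_test, scores)  # area under ROC curve
```

Comparing models on a held-out set by ROC AUC, rather than raw accuracy, mirrors the abstract's emphasis on receiver operating characteristics as the figure of merit.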
Deep learning has achieved spectacular performance in image and speech recognition and synthesis. It outperforms other machine learning algorithms in problems where large amounts of data are available. In the area of measurement technology, instruments based on photonic time stretch have established record real-time measurement throughput in spectroscopy, optical coherence tomography, and imaging flow cytometry. These extreme-throughput instruments generate approximately 1 Tbit/s of continuous measurement data and have led to the discovery of rare phenomena in nonlinear and complex systems, as well as to new types of biomedical instruments. Owing to the abundance of data they generate, time-stretch instruments are a natural fit for deep learning classification. We previously showed that high-throughput, high-accuracy label-free cell classification can be achieved through a combination of time-stretch microscopy, image processing, and feature extraction, followed by deep learning for finding cancer cells in the blood. Such a technology holds promise for early detection of primary cancer or metastasis. Here we describe a new deep learning pipeline that entirely avoids the slow and computationally costly signal processing and feature extraction steps by using a convolutional neural network that operates directly on the measured signals. The improvement in computational efficiency enables low-latency inference and makes this pipeline suitable for cell sorting via deep learning. Our neural network classifies each cell in a few milliseconds, fast enough to provide a decision to a cell sorter for real-time separation of individual target cells. We demonstrate the applicability of our new method in the label-free classification of OT-II white blood cells and SW-480 epithelial cancer cells with more than 95% accuracy.
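The idea of a convolutional network acting directly on the digitized waveform, with no intermediate feature extraction, can be sketched as a forward pass in plain NumPy. Everything here is illustrative: the weights are random placeholders (a real network would be trained), and the architecture (one 1-D convolution, ReLU, global average pooling, linear readout) is an assumption, not the authors' exact design.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution of signal x with each kernel row."""
    k = kernels.shape[1]
    windows = np.lib.stride_tricks.sliding_window_view(x, k)
    return windows @ kernels.T  # shape: (len(x) - k + 1, n_kernels)

def classify(signal, kernels, w_out):
    feat = np.maximum(conv1d(signal, kernels), 0.0)  # ReLU activation
    pooled = feat.mean(axis=0)                       # global average pooling
    logits = pooled @ w_out                          # linear readout
    return int(np.argmax(logits))  # e.g. 0 = OT-II, 1 = SW-480 (placeholder)

signal = rng.standard_normal(4096)      # stand-in for a digitized line scan
kernels = rng.standard_normal((8, 32))  # 8 convolution filters of width 32
w_out = rng.standard_normal((8, 2))     # readout weights to 2 classes
label = classify(signal, kernels, w_out)
```

Because inference is a handful of matrix multiplications, a forward pass of this kind is what makes millisecond-scale decisions feasible for real-time sorting.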
Time stretch imaging offers real-time image acquisition at millions of frames per second and subnanosecond shutter speed, and has enabled detection of rare cancer cells in blood with record throughput and specificity. An unintended consequence of high throughput image acquisition is the massive amount of digital data generated by the instrument. Here we report the first experimental demonstration of real-time optical image compression applied to time stretch imaging. By exploiting the sparsity of the image, we reduce the number of samples and the amount of data generated by the time stretch camera in our proof-of-concept experiments by about three times. Optical data compression addresses the big data predicament in such systems.
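A digital analogue of sparsity-based compression can illustrate the principle: if the image line is sparse in some transform domain, keeping only the largest coefficients preserves it with little loss. The roughly three-fold reduction and the optical-domain implementation belong to the experiment above; this NumPy sketch, which keeps one third of the Fourier coefficients of a frequency-sparse signal, is only a conceptual stand-in.

```python
import numpy as np

def compress(line, keep_fraction=1/3):
    """Keep only the largest-magnitude Fourier coefficients."""
    spectrum = np.fft.rfft(line)
    n_keep = max(1, int(len(spectrum) * keep_fraction))
    idx = np.argsort(np.abs(spectrum))[-n_keep:]  # largest coefficients
    return idx, spectrum[idx], len(line)

def decompress(idx, coeffs, n):
    spectrum = np.zeros(n // 2 + 1, dtype=complex)
    spectrum[idx] = coeffs
    return np.fft.irfft(spectrum, n)

t = np.linspace(0, 1, 1024, endpoint=False)
line = np.sin(2*np.pi*5*t) + 0.5*np.sin(2*np.pi*12*t)  # sparse in frequency
idx, coeffs, n = compress(line)
recon = decompress(idx, coeffs, n)
err = np.max(np.abs(recon - line))  # near zero for a frequency-sparse line
```

The optical scheme achieves the analogous reduction before digitization, which is what relieves the acquisition back-end rather than merely the storage.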
Time stretch dispersive Fourier transform enables real-time spectroscopy at repetition rates of millions of scans per second. It has empowered high-speed real-time instruments with record performance, ranging from analog-to-digital converters to cameras and single-shot capture equipment for rare phenomena. Its warped stretch variant, realized with nonlinear group delay dispersion, offers variable-rate spectral-domain sampling, as well as the ability to engineer the time-bandwidth product of the signal's envelope to match that of the data acquisition system. To reconstruct the signal with low loss, the spectrotemporal distribution of the signal spectrum needs to be sparse. Here, for the first time, we show how to design the kernel of the transform and, specifically, the nonlinear group delay profile dictated by the signal sparsity. Such a kernel leads to smart stretching with nonuniform spectral resolution, with direct utility in improving the data acquisition rate, real-time data compression, and the accuracy of ultrafast data capture. We also discuss the application of the warped stretch transform in spectrotemporal analysis of continuous-time signals.
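The conventional (unwarped) transform rests on a linear frequency-to-time mapping: with constant group velocity dispersion, the group delay grows linearly with frequency offset, so uniformly spaced optical frequencies arrive at uniformly spaced times and a single fast photodetector reads out the spectrum. A small sketch, with illustrative parameter values (not taken from any specific experiment):

```python
import numpy as np

beta2_L = 1e-21           # total group delay dispersion (s^2), illustrative
omega0 = 2 * np.pi * 193e12  # optical carrier (rad/s), roughly 1550 nm
omega = omega0 + np.linspace(-1e12, 1e12, 1001)  # optical frequency grid

# Linear group delay: arrival time grows linearly with frequency offset,
# so uniform frequency spacing maps to uniform time spacing.
t = beta2_L * (omega - omega0)
dt = np.diff(t)  # constant: evidence of the uniform mapping
```

The warped variant described above replaces this linear delay with a nonlinear profile, which is what breaks the uniformity of the spectral sampling.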
Time stretch dispersive Fourier transform [1-3] addresses the analog-to-digital converter (ADC) bottleneck in real-time acquisition of ultrafast signals. It enables fast real-time spectral measurements of wideband signals by mapping each signal into a waveform slow enough to be digitized in real time. Combined with temporal or spatial encoding, time stretch dispersive Fourier transform has been used to create instruments that capture extremely fast optical phenomena at high throughput.
By doing so, it has led to the discovery of optical rogue waves [4], the creation of a new imaging modality known as the time stretch camera [5], which has enabled detection of cancer cells in blood with record sensitivity [6-8], a portfolio of other fast real-time instruments such as an ultrafast vibrometer [9,10], and world-record performance in analog-to-digital conversion [11,12]. The key feature that enables fast real-time measurements is not the Fourier transform, but rather the time stretch. For example, direct frequency-to-time mapping can be replaced by phase retrieval [13] or by coherent detection after the dispersion [14] followed by back propagation. Using warped group delay dispersion as a photonic hardware accelerator [15], an optical signal's intensity envelope can be engineered to match the specifications of the data acquisition back-end [16-18]. One can slow down an ultrafast burst of data and, at the same time, achieve data compression by exploiting sparsity in the original data [19]. Also called the anamorphic stretch transform [16,17], the warped stretch transform performs a nonuniform frequency-to-time mapping followed by a uniform sampler. The combined effect is that the signal's Fourier spectrum is sampled at a nonuniform rate and resolution. By designing the group delay profile according to the sparsity in the spectrum of...
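The nonuniform mapping followed by a uniform sampler can be illustrated numerically. This sketch assumes an arctangent group delay profile purely for illustration (the warp parameters are invented); the point is that uniform sampling in time then corresponds to fine spectral sampling near the band center and coarse sampling in the wings.

```python
import numpy as np

A, B = 1e-9, 1e-12                        # illustrative warp parameters
omega = np.linspace(-5e12, 5e12, 200001)  # baseband frequency grid (rad/s)
tau = A * np.arctan(B * omega)            # nonlinear group delay profile

# Uniform sampling in time ...
t_samples = np.linspace(tau[0], tau[-1], 101)
# ... corresponds to nonuniform sampling in frequency (invert the map):
omega_samples = np.tan(t_samples / A) / B
spacing = np.diff(omega_samples)
# spacing is smallest near omega = 0 (fine resolution) and grows toward
# the wings, matching a spectrum whose detail is concentrated at center.
```

Choosing the warp so that dense sampling coincides with the information-rich part of the spectrum is precisely the kernel-design problem the abstract describes.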