Artificial intelligence, deep convolutional neural networks, and deep learning are all niche terms that are increasingly appearing in scientific presentations as well as in the general media. In this review, we focus on deep learning and how it is applied to microscopy image data of cells and tissue samples. Starting with an analogy to neuroscience, we aim to give the reader an overview of the key concepts of neural networks, and an understanding of how deep learning differs from more classical approaches for extracting information from image data. We aim to increase the understanding of these methods, while highlighting considerations regarding input data requirements, computational resources, challenges, and limitations. We do not provide a full manual for applying these methods to your own data, but rather review previously published articles on deep learning in image cytometry, and guide the reader toward further reading on specific networks and methods, including new methods not yet applied to cytometry data. © 2018 The Authors. Cytometry Part A published by Wiley Periodicals, Inc. on behalf of International Society for Advancement of Cytometry.
With the increasing amount of image data collected from biomedical experiments there is an urgent need for smarter and more effective analysis methods. Many scientific questions require analysis of image subregions related to some specific biology. Finding such regions of interest (ROIs) at low resolution and limiting the data subjected to final quantification at full resolution can reduce computational requirements and save time. In this paper we propose a three-step pipeline: First, bounding boxes for ROIs are located at low resolution. Next, ROIs are subjected to semantic segmentation into sub-regions at mid-resolution. We also estimate the confidence of the segmented sub-regions. Finally, quantitative measurements are extracted at full resolution. We use deep learning for the first two steps in the pipeline and conformal prediction for confidence assessment. We show that limiting final quantitative analysis to sub-regions with full confidence reduces noise and increases separability of observed biological effects.
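The confidence-assessment step of the pipeline above relies on conformal prediction. A minimal sketch of inductive conformal prediction is given below, assuming softmax outputs from the segmentation model and using `1 - probability` as the nonconformity score; the function names and the significance level are illustrative, not taken from the paper. A region is kept for "full confidence" analysis only when its conformal prediction set contains exactly one label.

```python
import numpy as np

def calibrate(cal_scores):
    # Nonconformity scores (1 - softmax prob of the true class) computed
    # on a held-out calibration set, sorted for later comparisons.
    return np.sort(cal_scores)

def p_values(cal_sorted, test_probs):
    # test_probs: (n, k) softmax outputs; candidate-label nonconformity = 1 - prob.
    # The conformal p-value is the fraction of calibration scores at least
    # as large as the candidate's score (with the +1 smoothing correction).
    alphas = 1.0 - test_probs
    n = len(cal_sorted)
    return np.array([[(np.sum(cal_sorted >= a) + 1) / (n + 1) for a in row]
                     for row in alphas])

def confident_regions(pvals, eps=0.05):
    # Keep only predictions whose conformal prediction set at significance
    # eps is a singleton, i.e. exactly one label survives the test.
    prediction_sets = pvals > eps
    return prediction_sets.sum(axis=1) == 1
```

In this sketch, a confidently classified region (one dominant softmax probability) yields a singleton prediction set, while an ambiguous or atypical region yields a multi-label or empty set and is excluded from the final quantification.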
Background: Early prediction of time-lapse microscopy experiments enables intelligent data management and decision-making. Aim: Using time-lapse data of HepG2 cells exposed to lipid nanoparticles loaded with mRNA for expression of GFP, the authors hypothesized that it is possible to predict in advance whether a cell will express GFP. Methods: The first modeling approach used a convolutional neural network extracting per-cell features at early time points. These features were then combined and explored using either a long short-term memory network (approach 2) or time series feature extraction and gradient boosting machines (approach 3). Results: Accounting for the temporal dynamics significantly improved performance. Conclusion: The results highlight the benefit of accounting for temporal dynamics when studying drug delivery using high-content imaging.
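The time-series feature extraction step in approach 3 can be illustrated with a small sketch: each cell's per-timepoint feature trajectory is summarized into a fixed-length vector (here simply per-feature mean, standard deviation, and linear slope) that a gradient boosting machine could then be trained on. The summary statistics chosen here are illustrative assumptions, not the exact features used in the study.

```python
import numpy as np

def temporal_features(traj):
    """Summarize one cell's feature trajectory of shape (T, F) into a
    fixed-length vector capturing level, variability, and trend."""
    t = np.arange(traj.shape[0])
    means = traj.mean(axis=0)                     # average level per feature
    stds = traj.std(axis=0)                       # variability per feature
    slopes = np.array([np.polyfit(t, traj[:, f], 1)[0]   # linear trend per feature
                       for f in range(traj.shape[1])])
    return np.concatenate([means, stds, slopes])
```

The slope terms are one simple way to expose temporal dynamics to a non-sequential classifier, which is the key ingredient the results above credit for the improved performance.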
Fluorescence microscopy, which visualizes cellular components with fluorescent stains, is an invaluable method in image cytometry. From these images various cellular features can be extracted. Together these features form phenotypes that can be used to determine effective drug therapies, such as those based on nanomedicines. Unfortunately, fluorescence microscopy is time-consuming, expensive, labour-intensive, and toxic to the cells. Bright-field images lack these downsides but also lack the clear contrast of the cellular components and hence are difficult to use for downstream analysis. Generating the fluorescence images directly from bright-field images using virtual staining (also known as “label-free prediction” and “in-silico labeling”) can offer the best of both worlds, but is very challenging for cellular structures that are poorly visible in the bright-field images. To tackle this problem, deep learning models were explored to learn the mapping between bright-field and fluorescence images for adipocyte cell images. The models were tailored for each imaging channel, paying particular attention to the various challenges in each case, and those with the highest fidelity in extracted cell-level features were selected. The solutions included utilizing privileged information for the nuclear channel, and using image gradient information and adversarial training for the lipids channel. The former resulted in better morphological and count features, and the latter resulted in more faithfully captured defects in the lipids, which are key features required for downstream analysis of these channels.
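The "image gradient information" used for the lipids channel can be made concrete with a simplified sketch: a gradient-difference penalty that compares the spatial gradients of the predicted and target fluorescence images, encouraging sharp boundaries such as lipid droplet edges. This is a generic stand-in under stated assumptions, not the exact loss formulation from the study.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """Mean squared difference between the spatial gradients of a predicted
    and a target image; penalizes blurred or misplaced edges even when
    per-pixel intensities are close."""
    dy_p, dx_p = np.gradient(pred)      # derivatives along rows and columns
    dy_t, dx_t = np.gradient(target)
    return np.mean((dy_p - dy_t) ** 2 + (dx_p - dx_t) ** 2)
```

In training, such a term would typically be added to a per-pixel reconstruction loss (and, as in the abstract above, combined with an adversarial loss) so that the network is rewarded for reproducing structure, not just average intensity.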
Fluorescence staining techniques, such as Cell Painting, together with fluorescence microscopy have proven invaluable for visualizing and quantifying the effects that drugs and other perturbations have on cultured cells. However, fluorescence microscopy is expensive, time-consuming, labor-intensive, and the stains applied can be cytotoxic, interfering with the activity under study. The simplest form of microscopy, brightfield microscopy, lacks these downsides, but the images produced have low contrast and the cellular compartments are difficult to discern. Nevertheless, by harnessing deep learning, these brightfield images may still be sufficient for various predictive purposes. In this study, we compared the predictive performance of models trained on fluorescence images to those trained on brightfield images for predicting the mechanism of action (MoA) of different drugs. We also extracted CellProfiler features from the fluorescence images and used them to benchmark the performance. Overall, we found comparable and largely correlated predictive performance for the two imaging modalities. This is promising for future studies of MoAs in time-lapse experiments for which using fluorescence images is problematic. Explorations based on explainable AI techniques also provided valuable insights regarding compounds that were better predicted by one modality over the other.
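The modality comparison described above amounts to training the same kind of classifier on two different feature representations and comparing accuracies. As a minimal illustration, the sketch below uses a nearest-centroid classifier on per-sample feature vectors; the actual study used different models, and the function name is an assumption for this example.

```python
import numpy as np

def nearest_centroid_accuracy(X_train, y_train, X_test, y_test):
    """Tiny stand-in classifier for comparing feature sets (e.g. CellProfiler
    features vs. features from another modality): assign each test sample
    to the class whose training centroid is closest."""
    classes = np.unique(y_train)
    centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
    preds = classes[np.argmin(dists, axis=1)]
    return float(np.mean(preds == y_test))
```

Running this once per feature representation (brightfield-derived vs. fluorescence-derived) with the same labels gives directly comparable accuracy numbers, which is the shape of the benchmark performed in the study.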