Light microscopy combined with well-established protocols of two-dimensional cell culture facilitates high-throughput quantitative imaging to study biological phenomena. Accurate segmentation of individual cells in images enables exploration of complex biological questions, but can require sophisticated image-processing pipelines in cases of low contrast and high object density. Deep learning-based methods are considered state-of-the-art for image segmentation but typically require vast amounts of annotated data, for which there is no suitable resource available in the field of label-free cellular imaging. Here, we present LIVECell, a large, high-quality, manually annotated and expert-validated dataset of phase-contrast images, consisting of over 1.6 million cells from a diverse set of cell morphologies and culture densities. To further demonstrate its use, we train convolutional neural network-based models using LIVECell and evaluate model segmentation accuracy with a proposed suite of benchmarks.
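By way of illustration, the per-object matching at the heart of such instance-segmentation benchmarks can be sketched in a few lines. This is a simplified sketch assuming boolean instance masks; `mask_iou` and `precision_at_iou` are hypothetical helper names, not the actual LIVECell benchmark suite:

```python
import numpy as np

def mask_iou(pred: np.ndarray, gt: np.ndarray) -> float:
    """Intersection-over-union of two boolean instance masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / float(union) if union else 0.0

def precision_at_iou(pred_masks, gt_masks, thr=0.5):
    """Greedily match each predicted mask to an unmatched ground-truth
    mask with IoU >= thr; return the fraction of predictions matched."""
    matched = set()
    tp = 0
    for p in pred_masks:
        best, best_iou = None, thr
        for i, g in enumerate(gt_masks):
            if i in matched:
                continue
            iou = mask_iou(p, g)
            if iou >= best_iou:
                best, best_iou = i, iou
        if best is not None:
            matched.add(best)
            tp += 1
    return tp / len(pred_masks) if pred_masks else 0.0
```

Full benchmark suites typically sweep this threshold (e.g. 0.5 to 0.95) and average the resulting scores, which rewards models that delineate cell boundaries precisely rather than merely detecting cells.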
Fluorescence microscopy is a core method for visualizing and quantifying the spatial and temporal dynamics of complex biological processes. While many fluorescence microscopy techniques exist, widefield fluorescence imaging remains one of the most widely used due to its cost-effectiveness and accessibility. To image 3D samples, conventional widefield fluorescence imaging entails acquiring a sequence of 2D images spaced along the z-dimension, typically called a z-stack. Oftentimes, the first step in an analysis pipeline is to project that 3D volume into a single 2D image, because 3D image data can be cumbersome to manage and challenging to analyze and interpret. Furthermore, z-stack acquisition is often time-consuming and may consequently induce photodamage to the biological sample; these are major barriers for workflows that require high throughput, such as drug screening. As an alternative to z-stacks, axial sweep acquisition schemes have been proposed to circumvent these drawbacks and offer the potential of 100-fold faster image acquisition for 3D samples compared to z-stack acquisition. Unfortunately, these acquisition techniques generate low-quality 2D z-projected images that require restoration with unwieldy, computationally heavy algorithms before the images can be interrogated. We propose a novel workflow that combines axial z-sweep acquisition with deep learning-based image restoration, ultimately enabling high-throughput and high-quality imaging of complex 3D samples using 2D projection images. To demonstrate the capabilities of our proposed workflow, we apply it to live-cell imaging of large 3D tumor spheroid cultures and find we can produce high-fidelity images appropriate for quantitative analysis. Therefore, we conclude that combining axial z-sweep image acquisition with deep learning-based image restoration enables high-throughput and high-quality fluorescence imaging of complex 3D biological samples.
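The z-projection step described above — collapsing a (z, y, x) stack into one 2D image — is, in its simplest maximum-intensity form, a one-line reduction. A minimal sketch of that conventional baseline only (not the axial-sweep acquisition or the deep learning restoration model):

```python
import numpy as np

def max_projection(zstack: np.ndarray) -> np.ndarray:
    """Collapse a (z, y, x) stack into a single 2D image by taking,
    at every (y, x) position, the brightest value along z."""
    return zstack.max(axis=0)

def mean_projection(zstack: np.ndarray) -> np.ndarray:
    """Average-intensity alternative; smoother but lower contrast."""
    return zstack.mean(axis=0)
```

The appeal of axial-sweep schemes is that the optics effectively perform this projection during a single exposure, so only one image per field of view is ever read out; the cost is the degraded image quality that the restoration network then has to recover.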
Machine vision is a powerful technology that has become increasingly popular and accurate during the last decade due to rapid advances in the field of machine learning. The majority of machine vision applications are currently found in consumer electronics, automotive applications, and quality control, yet the potential for bioprocessing applications is tremendous. For instance, detecting and controlling foam emergence is important for all upstream bioprocesses, but the lack of robust foam sensing often leads to batch failures from foam-outs or overaddition of antifoam agents. Here, we report a new low-cost, flexible, and reliable foam sensor concept for bioreactor applications. The concept applies convolutional neural networks (CNNs), a state-of-the-art machine learning system for image processing. The implemented method shows high accuracy for both binary foam detection (foam/no foam) and fine-grained classification of foam levels.
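For readers unfamiliar with CNN internals, the core operation such a network applies is a sliding-window weighted sum (cross-correlation) followed by a nonlinearity. A minimal NumPy sketch of that building block — illustrative only, not the implemented foam classifier:

```python
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Valid-mode 2D cross-correlation: the core operation a CNN
    layer applies (before adding a bias and a nonlinearity)."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x: np.ndarray) -> np.ndarray:
    """Rectified linear unit, the standard CNN nonlinearity."""
    return np.maximum(x, 0.0)
```

A trained foam classifier stacks many such layers with learned kernels, so that early layers respond to local texture (e.g. bubble edges) and later layers aggregate those responses into foam/no-foam or foam-level predictions.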
Cell morphology is incredibly diverse and provides valuable insight into cellular dynamics, including cell health and differentiation. For ease of analysis, morphology studies often focus on quantifying one or two metrics, e.g. circularity or area. However, this may lead to incorrect conclusions, as information about cell size, shape, brightness and texture all capture different nuances of morphology. By using multivariate data analysis (MVDA), multiple properties can be combined into a single metric that simultaneously describes the many different aspects of cell morphology. Supervised machine learning tools further enable identifying subpopulations of cells by their morphology alone. Here we describe a workflow for label-free classification of heterogeneous cells using phase contrast images. Classification of live and dead cells across a range of cancer cell types was evaluated. Cells were seeded into 96-well plates and maintained in a physiologically relevant environment to ensure morphology was unperturbed. After 24h, cells were treated with compounds exerting cytotoxic effects via a range of mechanisms. All plates contained camptothecin (CMP, 10 µM) as a control for cell death and included a fluorescent cell health reagent (Incucyte® Annexin V) to verify cell death. Images were acquired using an Incucyte® Live-Cell Analysis System (10x objective, every 2h for 3 days) and were segmented using the integrated Incucyte® Cell-by-Cell Analysis Software Module. For validation, cells were also classified based on fluorescence (Annexin V positive) to yield a dead cell percentage. An MVDA regression model was trained for each cell type using only the label-free morphology metrics extracted from the segmented phase contrast images of live (untreated cells, range of confluence values) and dead (10 µM CMP, 72h only) cells. This model was subsequently applied to all acquired images to classify every cell as live or dead.
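The regression-based classification step can be sketched as follows. This is a simplified stand-in — ordinary least squares on synthetic metrics rather than the MVDA model and real Incucyte® measurements; all function names and feature values are hypothetical:

```python
import numpy as np

def fit_live_dead_model(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """Fit a linear model mapping per-cell morphology metrics
    (n_cells x n_metrics) to a live(0)/dead(1) training label.
    A bias column is appended before solving by least squares."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    w, *_ = np.linalg.lstsq(Xb, y, rcond=None)
    return w

def classify(X: np.ndarray, w: np.ndarray, threshold=0.5) -> np.ndarray:
    """Score each cell with the fitted model; scores above the
    threshold are called dead, below it live."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])
    return (Xb @ w) >= threshold

# Hypothetical metrics per cell: [area, circularity].
# Untreated (live) cells are large and spread; CMP-treated (dead)
# cells are small and rounded.
X = np.array([[10.0, 0.2], [9.0, 0.3], [2.0, 0.9], [1.0, 0.8]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_live_dead_model(X, y)
labels = classify(X, w)
```

The key idea the sketch preserves is that the model is trained only on unambiguous extremes (untreated cells versus the 10 µM CMP control at 72h) and then applied to every cell in every image, so no fluorescent label is needed at classification time.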
Time- and concentration-dependent increases in the fraction of dead cells closely matched those of the fluorescence-based classification for all tested conditions. For example, A549 cells treated with CMP produced EC50 values of 0.53 µM (label-free) and 0.66 µM (fluorescence). The analysis proved robust across multiple cell types and compounds, even in cases where morphological changes unrelated to cell death occurred. In conclusion, our data demonstrate the utility of an MVDA approach for measuring cell morphology change, using label-free live/dead classification as validation. Similar classifications may be applied to alternative biological paradigms that involve morphological change, such as cell differentiation. Additionally, as the use of morphology metrics for classification requires accurate delineation of cells, improved cell segmentation tools utilizing convolutional neural network models may further enable application of these methods to more challenging cell types. Citation Format: Gillian F. Lovell, Daniel A. Porto, Timothy R. Jackson, Jasmine Trigg, Nicola Bevan, Christoffer Edlund, Rickard Sjöegren, Nevine Holtz, Daniel M. Appledorn, Timothy Dale. Classification of cell morphology using machine learning and label-free live-cell imaging [abstract]. In: Proceedings of the American Association for Cancer Research Annual Meeting 2021; 2021 Apr 10-15 and May 17-21. Philadelphia (PA): AACR; Cancer Res 2021;81(13_Suppl):Abstract nr 1305.
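The EC50 comparison above rests on fitting a dose-response curve to the dead-cell fraction at each compound concentration. A minimal sketch using a fixed-slope Hill curve and a grid search — illustrative only, not the fitting procedure actually used, and a real analysis would fit all four curve parameters:

```python
import numpy as np

def hill(conc: np.ndarray, ec50: float, top=1.0, bottom=0.0, slope=1.0):
    """Four-parameter logistic (Hill) dose-response curve; here only
    EC50 varies while top, bottom, and slope are held fixed."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** slope)

def fit_ec50(conc: np.ndarray, response: np.ndarray) -> float:
    """Estimate EC50 by least-squares grid search over candidate
    values spanning the tested concentration range."""
    grid = np.logspace(np.log10(conc.min()), np.log10(conc.max()), 200)
    errs = [np.sum((hill(conc, e) - response) ** 2) for e in grid]
    return float(grid[int(np.argmin(errs))])

# Hypothetical dilution series (µM) with a simulated noiseless
# response generated at EC50 = 0.53 µM for demonstration.
conc = np.logspace(-2, 1, 8)
response = hill(conc, 0.53)
est = fit_ec50(conc, response)
```

Agreement between the EC50 estimated from label-free scores and from the Annexin V signal, as reported above, is what validates the morphology-only classifier.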