Highlights
- Robust method automatically adapting to various unseen experimental scenarios
- Deep learning solution for accurate nucleus segmentation without user interaction
- Accelerates, improves quality, and reduces complexity of bioimage analysis tasks
Quantifying heterogeneities within cell populations is important for many fields including cancer research and neurobiology; however, techniques to isolate individual cells are limited. Here, we describe a high-throughput, non-disruptive, and cost-effective isolation method that is capable of capturing individually targeted cells using widely available techniques. Using high-resolution microscopy, laser microcapture microscopy, image analysis, and machine learning, our technology enables scalable molecular genetic analysis of single cells, targetable by morphology or location within the sample.
High-content, imaging-based screens now routinely generate data on a scale that precludes manual verification and interrogation. Software applying machine learning has become an essential tool to automate analysis, but these methods require annotated examples to learn from. Efficiently exploring large datasets to find relevant examples remains a challenging bottleneck. Here, we present Advanced Cell Classifier (ACC), a graphical software package for phenotypic analysis that addresses these difficulties. ACC applies machine-learning and image-analysis methods to high-content data generated by large-scale, cell-based experiments. It features methods to mine microscopic image data, discover new phenotypes, and improve recognition performance. We demonstrate that these features substantially expedite the training process, successfully uncover rare phenotypes, and improve the accuracy of the analysis. ACC is extensively documented, designed to be user-friendly for researchers without machine-learning expertise, and distributed as a free open-source tool at www.cellclassifier.org.
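One way software like ACC can expedite training is active learning: rank unlabeled cells by classifier uncertainty and ask the annotator about the most ambiguous ones first. The sketch below shows margin-based uncertainty sampling; the function name and the probability values are illustrative, not taken from ACC.

```python
import numpy as np

def uncertainty_sampling(probabilities, n_queries):
    """Rank unlabeled cells by classifier uncertainty (smallest margin
    between the top two class probabilities) and return the indices of
    the n_queries most ambiguous cells for annotation."""
    probs = np.asarray(probabilities, dtype=float)
    sorted_probs = np.sort(probs, axis=1)[:, ::-1]      # descending per row
    margins = sorted_probs[:, 0] - sorted_probs[:, 1]   # small margin = uncertain
    return np.argsort(margins)[:n_queries]

# Three cells, three phenotype classes: the second cell is most ambiguous.
probs = np.array([[0.90, 0.05, 0.05],
                  [0.40, 0.35, 0.25],
                  [0.70, 0.20, 0.10]])
print(uncertainty_sampling(probs, 2))  # → [1 2]
```

Annotating these high-uncertainty cells first tends to improve the classifier faster than labeling cells it already recognizes confidently.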
Phenotypic image analysis is the task of recognizing variations in cell properties using microscopic image data. These variations, produced through a complex web of interactions between genes and the environment, may hold the key to uncovering important biological phenomena or to understanding the response to a drug candidate. Today, phenotypic analysis is rarely performed completely by hand. The abundance of high-dimensional image data produced by modern high-throughput microscopes necessitates computational solutions. Over the past decade, a number of software tools have been developed to address this need. They use statistical learning methods to infer relationships between a cell's phenotype and data from the image. In this review, we examine the strengths and weaknesses of non-commercial phenotypic image analysis software, cover recent developments in the field, identify challenges, and give a perspective on future possibilities.
Proteins are necessary for cellular growth. Concurrently, however, protein production has high energetic demands associated with transcription and translation. Here, we propose that the activity of molecular chaperones shapes protein burden, that is, the fitness cost associated with the expression of unneeded proteins. To test this hypothesis, we performed a genome-wide genetic interaction screen in baker's yeast. Impairment of transcription, translation, and protein folding rendered cells hypersensitive to protein burden. Specifically, deletion of specific regulators of the Hsp70-associated chaperone network increased protein burden. As expected, temperature stress, increased mistranslation, and a chemical misfolding agent all substantially enhanced protein burden. Finally, the unneeded protein perturbed interactions between key components of the Hsp70-Hsp90 network involved in folding of native proteins. We conclude that specific chaperones contribute to protein burden. Our work indicates that by minimizing the damaging impact of gratuitous protein overproduction, chaperones enable tolerance to massive changes in genomic expression.
Astrocytes are involved in various brain pathologies including trauma, stroke, neurodegenerative disorders such as Alzheimer's and Parkinson's diseases, and chronic pain. Determining cell density in a complex tissue environment in microscopy images and elucidating the temporal characteristics of morphological and biochemical changes is essential to understand the role of astrocytes in physiological and pathological conditions. Today, manual stereological cell counting and semi-automatic segmentation techniques are widely used for the quantitative analysis of microscopy images. Detecting astrocytes automatically is a highly challenging computational task, for which we currently lack efficient image analysis tools. We have developed a fast and fully automated software tool that assesses the number of astrocytes using Deep Convolutional Neural Networks (DCNN). The method substantially outperforms state-of-the-art image analysis and machine learning methods and provides precision comparable to that of human experts. Additionally, the runtime of cell detection is significantly less than that of the three other computational methods analysed, and it is faster than human observers by orders of magnitude. We applied our DCNN-based method to examine the number of astrocytes in different brain regions of rats with opioid-induced hyperalgesia/tolerance (OIH/OIT), as morphine tolerance is believed to activate glia. We demonstrate a strong positive correlation between manual and DCNN-based quantification of astrocytes in rat brain.
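The abstract above validates the automated counts against manual ones via correlation. A minimal sketch of such a check, with entirely hypothetical counts (the real study's numbers are not reproduced here):

```python
import math

# Hypothetical astrocyte counts for eight brain regions: manual counts by a
# human expert versus counts produced by an automated detector.
manual    = [112, 98, 143, 87, 130, 105, 76, 121]
automated = [115, 95, 140, 90, 127, 108, 74, 119]

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"r = {pearson_r(manual, automated):.3f}")
```

A correlation close to 1 across regions is the kind of agreement the authors report between manual and DCNN-based quantification.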
Single cell segmentation is typically one of the first and most crucial tasks of image-based cellular analysis. We present a deep learning approach aiming towards a truly general method for localizing nuclei across a diverse range of assays and light microscopy modalities. We outperform the 739 methods submitted to the 2018 Data Science Bowl on images representing a variety of realistic conditions, some of which were not represented in the training data. The key to our approach is to adapt our model to unseen and unlabeled data using image style transfer to generate augmented training samples. This allows the model to recognize nuclei in new and different experiments without requiring expert annotations.

Identifying nuclei is the starting point for many microscopy-based cellular analyses. Accurate localization of the nucleus is the basis of a variety of quantitative measurements, but it is also a first step for identifying individual cell borders, which enables a multitude of further analyses. Until recently, the dominant approaches for this task have been based on classic image processing algorithms (e.g., CellProfiler [1]), sometimes guided by shape and spatial priors [2]. A drawback of these methods is the need for expert knowledge to properly adjust the parameters, which typically must be re-tuned when experimental conditions change. Recently, deep learning has revolutionized an assortment of tasks in image analysis, from image classification [3] to face recognition [4] and scene segmentation [5]. It is also responsible for breakthroughs in diagnosing retinal images [6], classifying skin lesions with superhuman performance [7], and remarkable advances in 3D fluorescence image analysis [8]. However, aside from initial work by Caicedo et al. [9] and Van Valen et al. [10], deep learning has yet to significantly advance nucleus segmentation performance.
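The paper's domain adaptation relies on neural image style transfer; as a much simpler stand-in for the underlying idea, one can match the first-order intensity statistics of a labeled training image to an unlabeled target-domain image, so the content (and its nucleus annotations) is preserved while the global intensity "style" changes. This sketch is illustrative only and is not the paper's method:

```python
import numpy as np

def match_statistics(source, target):
    """Shift and scale a labeled source image so that its mean and
    standard deviation match an unlabeled target-domain image. A crude
    stand-in for style transfer: the source keeps its content, and thus
    its annotations, but adopts the target's global intensity style."""
    src = source.astype(float)
    tgt = target.astype(float)
    normalized = (src - src.mean()) / (src.std() + 1e-8)
    return normalized * tgt.std() + tgt.mean()

rng = np.random.default_rng(0)
source = rng.normal(0.3, 0.05, (64, 64))   # dim, low-contrast training image
target = rng.normal(0.7, 0.20, (64, 64))   # bright, high-contrast new assay
adapted = match_statistics(source, target)
```

Training on such adapted copies exposes the model to target-domain appearance without requiring any new expert labels, which is the core intuition behind the style-transfer augmentation described above.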
To answer major questions of cell biology, it is often essential to understand the complex phenotypic composition of cellular systems precisely. Modern automated microscopes routinely produce vast amounts of images, making manual analysis nearly impossible. Due to their efficiency, machine learning-based analysis software has become an essential tool for single-cell-level phenotypic analysis of large imaging datasets. However, an important limitation of such methods is that they do not use the information gained from the cellular micro- and macroenvironment: the algorithmic decision is based solely on the local properties of the cell of interest. Here, we present how various features from the surrounding environment contribute to identifying a cell and how such additional information can improve single-cell-level phenotypic image analysis. The proposed methodology was tested for different sizes of Euclidean and nearest neighbour-based cellular environments, both on tissue sections and cell cultures. Our experimental data verify that the surrounding area of a cell largely determines its identity. This effect was found to be especially strong for established tissues, while it was somewhat weaker in the case of cell cultures. Our analysis shows that combining local cellular features with the properties of the cell's neighbourhood significantly improves the accuracy of machine learning-based phenotyping.
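One simple way to encode a cell's neighbourhood, in the spirit of the nearest neighbour-based environments described above, is to augment each cell's own feature vector with the mean feature vector of its k nearest neighbours. The sketch below assumes per-cell centroids and feature vectors; the function name and data are hypothetical, not the paper's implementation:

```python
import numpy as np

def add_neighbourhood_features(positions, features, k=3):
    """Augment each cell's own feature vector with the mean feature
    vector of its k nearest neighbours (Euclidean distance between
    cell centroids), encoding the cellular microenvironment."""
    positions = np.asarray(positions, dtype=float)
    features = np.asarray(features, dtype=float)
    # Pairwise distances between cell centroids.
    diffs = positions[:, None, :] - positions[None, :, :]
    dists = np.linalg.norm(diffs, axis=-1)
    np.fill_diagonal(dists, np.inf)              # a cell is not its own neighbour
    neighbours = np.argsort(dists, axis=1)[:, :k]
    context = features[neighbours].mean(axis=1)  # aggregate the environment
    return np.concatenate([features, context], axis=1)

# Five cells with 2-D centroids and 4 local features each (illustrative values).
rng = np.random.default_rng(1)
pos = rng.uniform(0, 100, (5, 2))
feat = rng.normal(size=(5, 4))
combined = add_neighbourhood_features(pos, feat, k=2)
print(combined.shape)  # → (5, 8)
```

Feeding the concatenated vectors to any standard classifier lets it exploit both the cell's local properties and its environment, which is the combination the abstract reports as improving accuracy.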