A common goal of fluorescence microscopy is to collect data on specific biological events. Yet the event-specific content that can be collected from a sample is limited, especially for rare or stochastic processes. This is due in part to photobleaching and phototoxicity, which constrain imaging speed and duration. We developed an event-driven acquisition (EDA) framework in which neural network-based recognition of specific biological events triggers real-time control in an instant structured illumination microscope (iSIM). Our setup adapts acquisitions on the fly by switching between a slow imaging rate while detecting the onset of events and a fast imaging rate during their progression. Thus, we capture mitochondrial and bacterial divisions at imaging rates that match their dynamic timescales, while extending overall imaging durations. Because EDA allows the microscope to respond specifically to complex biological events, it acquires data enriched in relevant content.
The processing of microscopy images constitutes a bottleneck for large-scale experiments. A critical step is the establishment of cell borders ('segmentation'), which is required for a range of applications such as growth or fluorescent reporter measurements. For the model organism budding yeast (Saccharomyces cerevisiae), a number of methods for segmentation exist. However, in experiments involving multiple cell cycles, stress, or various mutants, cells crowd or exhibit irregular visible features, which necessitates frequent manual corrections. Furthermore, budding events are visually subtle but important to detect. Convolutional neural networks (CNNs) have been successfully employed for a range of image processing applications, but they require large, diverse training sets. Here, we present i) the first set of publicly available, high-quality segmented yeast images (>10'000 cells), including mutants, stressed cells, and time courses, ii) a corresponding U-Net-based CNN (YeaZ), iii) a Python-based graphical user interface (GUI) to efficiently use the system, and iv) a web application to test it (www.quantsysbio.com). A key feature is a cell-cell boundary test which avoids the need for additional input from fluorescent channels. A bipartite graph matching algorithm tracks cells in time with high reliability. Our network is highly accurate and outperforms existing methods on benchmark images recorded by others, suggesting it transfers well to other conditions. Furthermore, new buds are detected early with high reliability. We apply the system to detect differences in geometry between wild-type and cyclin mutant cells. Our results indicate that morphogenesis control occurs unexpectedly early in the cell cycle and is gradual, demonstrating how the efficient processing of large numbers of cells uncovers new biology. Our system can serve as a resource to the community, expanded continuously with new images. Furthermore, the techniques we develop here are likely to be useful for other organisms as well.
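The frame-to-frame tracking step can be pictured as a minimum-cost bipartite matching between cells in consecutive frames. The sketch below is illustrative only: it matches cells on centroid distance with the Hungarian algorithm (scipy's linear_sum_assignment) and treats unmatched cells as new buds; the actual cost function used by YeaZ may differ, and all names here are hypothetical.

```python
# Illustrative bipartite matching of cells between consecutive frames based on
# centroid distance (Hungarian algorithm). The cost definition is a simplifying
# assumption, not necessarily the one used by YeaZ.
import numpy as np
from scipy.optimize import linear_sum_assignment

def track_cells(prev_centroids, curr_centroids, max_dist=20.0):
    """Map each cell in the current frame to a cell in the previous frame,
    or to -1 if it is unmatched (e.g. a newly appearing bud)."""
    prev = np.asarray(prev_centroids, dtype=float)      # shape (n_prev, 2)
    curr = np.asarray(curr_centroids, dtype=float)      # shape (n_curr, 2)
    cost = np.linalg.norm(prev[:, None, :] - curr[None, :, :], axis=-1)
    rows, cols = linear_sum_assignment(cost)            # minimum-cost assignment
    assignment = {j: -1 for j in range(len(curr))}      # default: new cell
    for i, j in zip(rows, cols):
        if cost[i, j] <= max_dist:                      # reject implausible jumps
            assignment[j] = i
    return assignment

# Two cells move slightly between frames; a third object (a new bud) appears.
print(track_cells([(10, 10), (40, 40)], [(12, 11), (41, 39), (80, 80)]))
# {0: 0, 1: 1, 2: -1}
```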
In fluorescence microscopy, the amount of information that can be collected from the sample is limited, often due to constraints imposed by photobleaching and phototoxicity. Here, we report an event-driven acquisition (EDA) framework, which combines real-time, neural network-based recognition of events of interest with automated control of the imaging parameters in an instant structured illumination microscope (iSIM). On-the-fly prioritization of imaging rate or experiment duration is achieved by switching between a slow imaging rate to detect the onset of biological events of interest and a fast imaging rate to capture high information content during their progression. In this way, EDA enables the capture of mitochondrial and bacterial divisions at imaging rates that match their dynamic timescales, while extending the accessible imaging duration, and thereby increases the density of relevant information in the acquired data.
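The acquisition logic described in the two EDA abstracts above can be summarized as a simple control loop: image slowly, feed each frame to an event-detection network, and switch to fast imaging when the network's score crosses a threshold. The sketch below is a minimal illustration under that assumption; the helper functions (acquire_frame, set_frame_interval, event_score) and the thresholds are hypothetical placeholders, not the authors' iSIM control code.

```python
# Minimal event-driven acquisition (EDA) loop: slow imaging while watching for
# event onset, fast imaging while an event is in progress. Helper functions and
# thresholds are hypothetical, not the authors' implementation.
import time

SLOW_INTERVAL_S = 5.0    # frame interval while waiting for an event
FAST_INTERVAL_S = 0.5    # frame interval while an event is in progress
START_THRESHOLD = 0.8    # network score above which fast imaging is triggered
STOP_THRESHOLD = 0.3     # network score below which slow imaging resumes

def run_eda(acquire_frame, set_frame_interval, event_score, n_frames=1000):
    """Switch imaging rate based on a neural-network event score per frame."""
    interval = SLOW_INTERVAL_S
    set_frame_interval(interval)
    for _ in range(n_frames):
        frame = acquire_frame()                 # grab the latest image
        score = event_score(frame)              # confidence that an event is ongoing
        if interval == SLOW_INTERVAL_S and score > START_THRESHOLD:
            interval = FAST_INTERVAL_S          # onset detected: prioritize speed
            set_frame_interval(interval)
        elif interval == FAST_INTERVAL_S and score < STOP_THRESHOLD:
            interval = SLOW_INTERVAL_S          # event over: spare the photon budget
            set_frame_interval(interval)
        time.sleep(interval)
```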
Background: High-throughput and selective detection of organelles in immunofluorescence images is an important but demanding task in cell biology. The centriole organelle is critical for fundamental cellular processes, and its accurate detection is key for analysing centriole function in health and disease. Centriole detection in human tissue culture cells has typically been achieved by manual determination of organelle number per cell. However, manual scoring of centrioles has low throughput and is not reproducible. Published semi-automated methods tally the centrosome surrounding the centrioles rather than the centrioles themselves. Furthermore, such methods rely on hard-coded parameters or require multichannel input for cross-correlation. Therefore, there is a need for an efficient and versatile pipeline for the automatic detection of centrioles in single-channel immunofluorescence datasets. Results: We developed a deep-learning pipeline termed CenFind that automatically scores cells for centriole numbers in immunofluorescence images of human cells. CenFind relies on the multi-scale convolutional neural network SpotNet, which allows the accurate detection of sparse and minute foci in high-resolution images. We built a dataset using different experimental settings and used it to train the model and evaluate existing detection methods. The resulting average F1-score achieved by CenFind is >90% across the test set, demonstrating the robustness of the pipeline. Moreover, using a StarDist-based nucleus detector, we link the centrioles and procentrioles detected with CenFind to the cell containing them, overall enabling automatic scoring of centriole numbers per cell. Conclusions: Efficient, accurate, channel-intrinsic and reproducible detection of centrioles is an important unmet need in the field. Existing methods are either not discriminative enough or focus on a fixed multichannel input. To fill this methodological gap, we developed CenFind, a command-line pipeline that automates the scoring of centrioles per cell, thereby enabling channel-intrinsic, accurate and reproducible detection across experimental modalities. Moreover, the modular nature of CenFind enables its integration into other pipelines. Overall, we anticipate that CenFind will prove critical for accelerating discoveries in the field.
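One way to picture the linking step, in which detected foci are attributed to the cell containing them, is a nearest-nucleus assignment with a distance cutoff. The sketch below is a simplified stand-in under that assumption and is not CenFind's actual algorithm; the function and parameter names are hypothetical.

```python
# Illustrative per-cell centriole scoring: assign each detected focus to the
# nearest nucleus centroid (with a distance cutoff). A simplified stand-in,
# not CenFind's actual linking algorithm.
import numpy as np
from collections import Counter

def score_centrioles_per_cell(nucleus_centroids, centriole_foci, max_dist=100.0):
    """Count centriole foci per nucleus by nearest-centroid assignment (pixels)."""
    nuclei = np.asarray(nucleus_centroids, dtype=float)   # (n_cells, 2)
    foci = np.asarray(centriole_foci, dtype=float)        # (n_foci, 2)
    counts = Counter()
    for focus in foci:
        dists = np.linalg.norm(nuclei - focus, axis=1)
        nearest = int(np.argmin(dists))
        if dists[nearest] <= max_dist:                     # ignore orphan foci
            counts[nearest] += 1
    return {i: counts.get(i, 0) for i in range(len(nuclei))}  # include zero-count cells

# Two nuclei; two foci near the first cell and one near the second.
print(score_centrioles_per_cell([(50, 50), (200, 200)],
                                [(55, 48), (45, 60), (210, 195)]))
# {0: 2, 1: 1}
```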
Bone marrow (BM) cellularity assessment is a crucial step in the evaluation of BM trephine biopsies for hematological and non-hematological disorders. Clinical assessment is based on a labor-intensive, semi-quantitative visual estimation of the hematopoietic and adipocytic components by hematopathologists, which does not provide quantitative information on other stromal compartments. In this study, we developed and validated MarrowQuant 2.0, an efficient, user-friendly digital hematopathology workflow integrated within the QuPath software that quantifies five mutually exclusive BM compartments (bone, hematopoietic, adipocytic, interstitial/microvasculature areas, and “Other”) to derive the cellularity of human BM trephine biopsies. Instance segmentation of individual adipocytes is realized via an adaptation of the machine-learning-based algorithm StarDist. We calculated BM compartments and adipocyte size distributions from hematoxylin and eosin (H&E) images of a total of 250 bone specimens, from control subjects and acute myeloid leukemia or myelodysplastic syndrome patients at diagnosis or follow-up, and then measured the agreement of cellularity estimates by MarrowQuant 2.0 against visual scores from four hematopathologists. The algorithm was capable of robust BM compartment segmentation, with an average mask accuracy of 86%, maximal for bone (99%), hematopoietic (92%), and adipocyte (98%) areas. MarrowQuant 2.0 cellularity scores and hematopathologist estimations were highly correlated (R2 = 0.92–0.98; intraclass correlation coefficient ICC = 0.98; inter-observer ICC = 0.96). BM compartment segmentation quantitatively confirmed the reciprocity of the hematopoietic and adipocytic compartments. MarrowQuant 2.0 performance was additionally tested for cellularity assessment of specimens prospectively collected from routine clinical diagnosis. After special consideration of the choice of cellularity equation in specimens with expanded stroma, performance was similar in this setting (R2 = 0.86, n = 42). We thus conclude that MarrowQuant 2.0 can be applied in a clinical setting. We expect this workflow to contribute to improving the speed and ease of diagnosis in hematopathology and to serve as a clinical research tool to explore novel biomarkers related to BM stromal components.
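Once the compartments are segmented, cellularity reduces to an area ratio. The sketch below uses one common definition, the hematopoietic fraction of the marrow space (hematopoietic plus adipocytic area); as the abstract notes, the choice of equation matters for specimens with expanded stroma, so this formula is an illustrative assumption rather than the exact one used by MarrowQuant 2.0.

```python
# Illustrative cellularity computation from segmented compartment areas.
# Uses cellularity = hematopoietic / (hematopoietic + adipocytic); this is one
# common definition, not necessarily the equation chosen in MarrowQuant 2.0
# (the abstract notes the choice matters when stroma is expanded).
def cellularity_percent(hematopoietic_area, adipocytic_area):
    """Cellularity as the hematopoietic fraction of the marrow space, in percent."""
    marrow_space = hematopoietic_area + adipocytic_area
    if marrow_space == 0:
        raise ValueError("No marrow space found in the segmented compartments.")
    return 100.0 * hematopoietic_area / marrow_space

# Example: 3.2e6 px^2 hematopoietic and 1.8e6 px^2 adipocytic area -> 64.0% cellularity.
print(cellularity_percent(3.2e6, 1.8e6))  # 64.0
```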
Three-dimensional electron microscopy is an important imaging modality in contemporary cell biology. Identification of intracellular structures, however, is laborious and time-consuming, and seriously impairs effective use of a potentially powerful tool. Resolving this bottleneck is therefore a critical next step in frontier biomedical imaging. We describe Automated Segmentation of intracellular substructures in Electron Microscopy (ASEM), a new pipeline to train a convolutional network to detect structures spanning a wide range of sizes and complexity. We obtain a dedicated model for each structure based on a small number of sparse ground-truth annotations from only one or two cells. To improve model generalization to different imaging conditions, we developed a rapid, computationally efficient strategy to refine an already trained model by including a few additional annotations. We show the successful automated identification of mitochondria, Golgi apparatus, endoplasmic reticulum, nuclear pore complexes, clathrin-coated pits and coated vesicles, and caveolae in cells imaged by focused ion beam scanning electron microscopy with quasi-isotropic resolution.
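The refinement strategy, adapting an already trained model to a new imaging condition with a few extra annotations, is essentially a short, low-learning-rate fine-tuning pass. The sketch below is a generic PyTorch illustration of that idea, not the ASEM pipeline itself; the function name and hyperparameters are hypothetical.

```python
# Generic fine-tuning loop: refine a trained segmentation model on a handful of
# newly annotated patches at a low learning rate. An illustration of the idea,
# not the ASEM implementation.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def refine_model(model, patches, masks, epochs=20, lr=1e-5, batch_size=2):
    """Fine-tune `model` on a few (patch, mask) pairs from a new imaging condition."""
    loader = DataLoader(TensorDataset(patches, masks),
                        batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # small lr: refine, don't retrain
    loss_fn = nn.BCEWithLogitsLoss()                          # binary structure-vs-background
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```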