Pathogenesis induced by SARS-CoV-2 is thought to result from both an inflammation-dominated cytokine response and virus-induced cell perturbation causing cell death. Here, we employ an integrative imaging analysis to determine the morphological organelle alterations induced in SARS-CoV-2-infected human lung epithelial cells. We report 3D electron microscopy reconstructions of whole cells and subcellular compartments, revealing extensive fragmentation of the Golgi apparatus, alteration of the mitochondrial network, and recruitment of peroxisomes to viral replication organelles formed by clusters of double-membrane vesicles (DMVs). These are tethered to the endoplasmic reticulum, providing insights into DMV biogenesis and the spatial coordination of SARS-CoV-2 replication. Live-cell imaging combined with an infection sensor reveals profound remodeling of cytoskeletal elements, and pharmacological inhibition of their dynamics suppresses SARS-CoV-2 replication. We thus report insights into virus-induced cytopathic effects and alongside provide a comprehensive, publicly available repository of 3D datasets of SARS-CoV-2-infected cells for download and smooth online visualization.
Alignment of stacks of serial images generated by Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) is generally performed using translations only, either through slice-by-slice alignment with SIFT or through template matching. However, these methods have two limitations: they introduce a bias along the dataset in the z-direction that seriously alters the morphology of the observed organelles, and they do not compensate for the pixel size variations inherent to image acquisition. These pixel size variations result in local misalignments and jumps of a few nanometers in the image data that can compromise downstream image analysis. We introduce a novel approach that enables affine transformations to overcome local misalignments while avoiding the danger of introducing a scaling, rotation, or shearing trend along the dataset. Our method first computes a template dataset using an alignment method restricted to translations only. This pre-aligned dataset is then smoothed selectively along the z-axis with a median filter, creating a template to which the raw data is aligned using affine transformations. Applied to FIB-SEM datasets, our method clearly improved the alignment along the z-axis, resulting in significantly more accurate automatic boundary segmentation by a convolutional neural network.

In recent years, the development of new electron microscopy technologies has allowed the automated serial imaging of entire biological specimens, from cells to model organisms. Among these techniques, Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) has emerged as a preferred technology for acquiring serial images at isotropic resolution. After volumetric acquisition, an important step for proper visualization and accurate morphometric analysis is the alignment of the image stack along the z-axis.
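The two preparatory stages of the described pipeline can be sketched as follows. This is an illustrative numpy/scipy sketch, not the authors' implementation: the translation-only pre-alignment is done here with FFT phase correlation, the z-median template is built with `scipy.ndimage.median_filter`, and the final per-slice affine registration against the template is only indicated.

```python
import numpy as np
from scipy.ndimage import median_filter

def estimate_shift(ref, img):
    """Integer (dy, dx) such that img ~= np.roll(ref, (dy, dx)),
    estimated via FFT phase correlation."""
    cross = np.conj(np.fft.fft2(ref)) * np.fft.fft2(img)
    cross /= np.abs(cross) + 1e-12          # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap the peak position into the signed shift range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

def prealign(stack):
    """Translation-only pre-alignment: register each slice to its predecessor."""
    out = [stack[0]]
    for sl in stack[1:]:
        dy, dx = estimate_shift(out[-1], sl)
        out.append(np.roll(sl, (-dy, -dx), axis=(0, 1)))
    return np.stack(out)

def make_template(prealigned, z_window=5):
    """Median-filter the pre-aligned stack along z only; each raw slice would
    then be registered to its template slice with an affine model (omitted)."""
    return median_filter(prealigned, size=(z_window, 1, 1))
```

Filtering with size `(z_window, 1, 1)` smooths exclusively across slices, so in-plane detail is untouched while slice-to-slice jitter is suppressed in the template.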
However, due to the size and complexity of the data, alignments using simple translations are most commonly used 1,2 instead of adapting the transformations to the specific type of data. Consequently, the most common algorithms used to find correspondences between adjacent slices are alignment by SIFT 3 and alignment using a template structure matched by cross-correlation, also known as template matching (TM). In the specific case of data acquired with the Atlas 5 software 4 , TM is efficiently performed on markings created at the surface of the sample (Fig. 1a). These markings are at a constant position with respect to the flat sample surface. In the case of SIFT, when only global translations are applied, each slice is aligned to the previous one, preserving local morphological properties (over a few slices) along the z-axis while disturbing the global shape of objects across long distances. As a consequence, straight objects, such as the sample surface plane, can become crooked in a non-predictable manner (Fig. 2a). Additionally, in FIB-SEM data, slices can appear distorted when different parts of an imaged cross-section are exposed to different rates of radiation. This effect typically...
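The drift behind the "crooked straight objects" problem is easy to reproduce: when each slice is registered only to its predecessor, small per-pair estimation errors accumulate as a random walk along z. A toy illustration (hypothetical error magnitudes, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(42)
n_slices = 1000
# small, unbiased alignment error made at each slice-to-slice registration
per_slice_error = rng.normal(0.0, 0.5, n_slices)
# position of an originally flat plane after chaining the pairwise alignments
drift = np.cumsum(per_slice_error)

print(np.abs(per_slice_error).max())  # each individual error stays small
print(np.abs(drift).max())            # accumulated drift grows roughly like sqrt(n)
```

Even though every single registration is accurate to a fraction of a pixel, the chained result wanders by many pixels over a long stack, which is exactly the global bias the template-based approach is designed to avoid.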
In recent years, automated segmentation has become a necessary tool for volume electron microscopy (EM) imaging. So far, the best-performing techniques have largely been based on fully supervised encoder-decoder CNNs, which require a substantial amount of annotated images. Domain adaptation (DA) aims to alleviate the annotation burden by 'adapting' networks trained on existing ground-truth data (the source domain) to work on a different (target) domain with as little additional annotation as possible. Most DA research focuses on the classification task, whereas DA for volume EM segmentation remains rather unexplored. In this work, we extend recently proposed classification DA techniques to an encoder-decoder layout and propose a novel method that adds a reconstruction decoder to the classical encoder-decoder segmentation network in order to align source and target encoder features. The method has been validated on the task of segmenting mitochondria in EM volumes. We have performed DA from brain EM images to HeLa cells and from isotropic FIB/SEM volumes to anisotropic TEM volumes. In all cases, the proposed method outperformed the extended classification DA techniques and the finetuning baseline. An implementation of our work can be found at https://github.com/JorisRoels/domain-adaptivesegmentation.
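The dual-decoder idea reduces to a joint objective over a shared encoder: a supervised segmentation loss on labelled source data plus a reconstruction loss on unlabelled target data. A minimal numpy sketch with linear stand-ins for the networks (all names, shapes, and the weighting `lam` are illustrative, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)
d_in, d_feat = 16, 8

# linear stand-ins for the shared encoder and the two decoders
W_enc = rng.normal(size=(d_feat, d_in)) * 0.1
W_seg = rng.normal(size=(d_in, d_feat)) * 0.1   # segmentation decoder
W_rec = rng.normal(size=(d_in, d_feat)) * 0.1   # reconstruction decoder

x_src = rng.normal(size=(32, d_in))   # labelled source patches
y_src = rng.normal(size=(32, d_in))   # their segmentation targets
x_tgt = rng.normal(size=(32, d_in))   # unlabelled target patches

def losses(W_enc):
    # supervised segmentation loss, computable only on the source domain
    seg = np.mean((x_src @ W_enc.T @ W_seg.T - y_src) ** 2)
    # self-supervised reconstruction loss on the target domain; its gradient
    # w.r.t. W_enc pulls target features into the encoder's learned space
    rec = np.mean((x_tgt @ W_enc.T @ W_rec.T - x_tgt) ** 2)
    return seg, rec

lam = 0.5                       # trade-off between the two terms
seg, rec = losses(W_enc)
total = seg + lam * rec         # joint objective optimised over the shared encoder
```

The key design point is that only the encoder is shared: the reconstruction decoder gives the unlabelled target volumes a training signal that reaches the encoder, aligning source and target features without target annotations.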
The throughput of electron microscopes has increased significantly in recent years, enabling detailed analysis of cell morphology and ultrastructure in fairly large tissue volumes. Analysis of neural circuits at single-synapse resolution remains the flagship target of this technique, but applications to cell and developmental biology are also starting to emerge at scale. On the light microscopy side, continuous development of light-sheet microscopes has led to a rapid increase in imaged volume dimensions, making terabyte-scale acquisitions routine in the field. The amount of data acquired in such studies makes manual instance segmentation, a fundamental step in many analysis pipelines, impossible. While automatic segmentation approaches have improved significantly thanks to the adoption of convolutional neural networks, their accuracy still lags behind human annotations and requires additional manual proofreading. A major hindrance to further improvements is the limited field of view of the segmentation networks, which prevents them from learning to exploit the expected cell morphology or other prior biological knowledge that humans use to inform their segmentation decisions. In this contribution, we show how such domain-specific information can be leveraged by expressing it as long-range interactions in a graph partitioning problem known as the lifted multicut problem. Using this formulation, we demonstrate significant improvement in segmentation accuracy for four challenging boundary-based segmentation problems from neuroscience and developmental biology.
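The lifted multicut formulation can be illustrated on a toy graph. Local edges pay their weight when cut; lifted (long-range) edges pay theirs whenever their endpoints land in different components, which is how a prior such as "these two fragments cannot belong to the same cell" enters the objective. The brute-force solver and the weights below are purely illustrative (real instances are solved with specialized multicut solvers):

```python
import itertools
from collections import defaultdict

def all_classes_connected(labels, local, n):
    """Feasibility check: each label class must be connected via local edges."""
    adj = defaultdict(set)
    for (u, v) in local:
        if labels[u] == labels[v]:
            adj[u].add(v); adj[v].add(u)
    seen = set()
    for s in range(n):
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:
            for w in adj[stack.pop()] - comp:
                comp.add(w); stack.append(w)
        seen |= comp
        if comp != {v for v in range(n) if labels[v] == labels[s]}:
            return False
    return True

def solve_lifted_multicut(n, local, lifted):
    """Brute force over node labelings; positive weight = cost of cutting,
    negative weight = reward for separating the endpoints."""
    best = (float("inf"), None)
    for labels in itertools.product(range(n), repeat=n):
        if not all_classes_connected(labels, local, n):
            continue
        cost = sum(w for (u, v), w in local.items() if labels[u] != labels[v])
        cost += sum(w for (u, v), w in lifted.items() if labels[u] != labels[v])
        if cost < best[0]:
            best = (cost, labels)
    return best

local = {(0, 1): 1.0, (1, 2): 1.0, (2, 3): 1.0}   # attractive short-range edges
lifted = {(0, 3): -3.0}                           # long-range prior: 0 and 3 differ
cost, labels = solve_lifted_multicut(4, local, lifted)
```

Without the lifted edge, the cheapest solution merges the whole chain at cost 0; the single repulsive lifted edge makes it worthwhile to pay for one local cut, so nodes 0 and 3 end up in different segments.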
Segmentation of large-volume datasets obtained by volume SEM techniques is a challenging task that generally requires a considerable amount of human effort. Despite recent advances in deep learning leading to the successful segmentation of cellular organelles in a variety of datasets, it is still challenging and time-consuming to produce the data needed to train a convolutional neural network, as well as to set up targeted post-processing pipelines that yield a good-quality full-volume semantic instance segmentation. We present CebraEM, a software package that uses a novel workflow for the segmentation of organelles in volume EM datasets and helps to minimize the annotation time needed to generate training data. It relies on a generic CNN-based membrane prediction, followed by a well-established machine-learning pipeline that includes over-segmentation, random forest classification, and graph multicut grouping. The workflow was tested for the segmentation of organelles on different datasets originating from various sample preparations and imaging modalities in volume SEM, in each case resulting in state-of-the-art semantic instance segmentations without additional post-processing. Importantly, by considerably simplifying the segmentation problem, CebraEM empowers single users to efficiently segment hundreds of gigabytes of data.
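The first stage of such a pipeline, turning a membrane-probability map into an over-segmentation of candidate fragments, can be shown in miniature. This is an illustrative sketch, not CebraEM's code: low-membrane regions are flood-filled into fragments, which the real workflow would then classify with a random forest and group by multicut.

```python
import numpy as np
from collections import deque

def oversegment(membrane, thr=0.5):
    """Label 4-connected regions of low membrane probability.
    Pixels at or above `thr` are treated as boundaries and stay unlabelled (-1).
    Returns the label image and the number of fragments found."""
    h, w = membrane.shape
    labels = np.full((h, w), -1, dtype=int)
    n_frag = 0
    for i in range(h):
        for j in range(w):
            if membrane[i, j] >= thr or labels[i, j] >= 0:
                continue
            # breadth-first flood fill of one new fragment
            labels[i, j] = n_frag
            q = deque([(i, j)])
            while q:
                y, x = q.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w
                            and labels[ny, nx] < 0 and membrane[ny, nx] < thr):
                        labels[ny, nx] = n_frag
                        q.append((ny, nx))
            n_frag += 1
    return labels, n_frag
```

For example, a probability map with one high-probability column splits the image into two fragments separated by the predicted membrane; the subsequent classification and multicut steps then decide which fragments belong to the same organelle instance.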