Cellular phenotypes are observable characteristics of cells resulting from the interactions of intrinsic and extrinsic chemical or biochemical factors. Image-based phenotypic screens under large numbers of basal or perturbed conditions can be used to study the influences of these factors on cellular phenotypes. Hundreds to thousands of phenotypic descriptors can also be quantified from the images of cells under each of these experimental conditions. Therefore, huge amounts of data can be generated, and the analysis of these data has become a major bottleneck in large-scale phenotypic screens. Here, we review current experimental and computational methods for large-scale image-based phenotypic screens. Our focus is on phenotypic profiling, a computational procedure for constructing quantitative and compact representations of cellular phenotypes based on the images collected in these screens.
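The profiling step described above — collapsing per-cell descriptors into one compact vector per experimental condition — can be sketched in plain NumPy. The median-then-z-score recipe below is a common, illustrative choice, not the specific procedure of any one screen; the function and variable names are assumptions for the sketch.

```python
import numpy as np

def condition_profiles(features, condition_ids):
    """Collapse a per-cell feature matrix (cells x descriptors) into one
    median profile per condition, then z-score each descriptor across
    conditions. This is one common phenotypic-profiling recipe."""
    labels = np.array(condition_ids)
    conds = sorted(set(condition_ids))
    # One row per condition: the per-descriptor median over its cells.
    profiles = np.array([np.median(features[labels == c], axis=0)
                         for c in conds])
    # Standardize each descriptor so conditions are comparable.
    mu = profiles.mean(axis=0)
    sd = profiles.std(axis=0)
    sd[sd == 0] = 1.0  # guard against constant descriptors
    return conds, (profiles - mu) / sd
```

The resulting rows are the "quantitative and compact representations" the review refers to: each condition becomes a single vector that downstream analyses (clustering, similarity search) can consume.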
Background
Drosophila melanogaster is an important model organism used in many fields of biological research, such as genetics and developmental biology. Drosophila wings have been widely used to study the genetics of development, morphometrics, and evolution, so there is considerable interest in quantifying wing structures. Advances in technology have made it easier to acquire images of Drosophila, but such studies have been limited by the slow and tedious process of acquiring phenotypic data.

Results
We have developed a system that automatically detects and measures key points and vein segments on a Drosophila wing. Key points are detected by performing image transformations and template matching on wing images, while vein segments are detected using an Active Contour algorithm. The accuracy of our key point detection was compared against users' key point annotations, and key point detection was also evaluated using different training sets of wing images. We compared our software with an existing automated image analysis system for Drosophila wings and showed that our system outperforms the state of the art. Vein segments were measured manually and compared against the measurements obtained from our system.

Conclusion
Our system detects specific key points and vein segments in Drosophila wing images with high accuracy.

Electronic supplementary material
The online version of this article (doi:10.1186/s12859-017-1720-y) contains supplementary material, which is available to authorized users.
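The key-point step rests on template matching: sliding a small reference patch over the wing image and scoring each position. A minimal sketch using normalized cross-correlation in plain NumPy is below; it illustrates the general technique only, not the authors' pipeline (which also applies image transformations and trained templates), and all names are assumptions.

```python
import numpy as np

def match_template(image, template):
    """Locate a template in a grayscale image by normalized
    cross-correlation (NCC). Returns the (row, col) of the
    best-matching top-left corner."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.sqrt((t ** 2).sum())
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            patch = image[r:r + th, c:c + tw]
            p = patch - patch.mean()
            denom = np.sqrt((p ** 2).sum()) * t_norm
            # NCC is 1.0 for a perfect match; flat patches score 0.
            score = (p * t).sum() / denom if denom > 0 else 0.0
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos
```

Production systems typically compute the same score with FFT-based correlation rather than this explicit double loop, which is quadratic in the search area.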
Background
High-resolution 2D whole slide imaging provides rich information about tissue structure. This information can be far richer if the 2D images are stacked into a 3D tissue volume. A 3D analysis, however, requires accurate reconstruction of the tissue volume from the 2D image stack. This task is not trivial because of distortions such as tissue tearing, folding, and missing tissue on each slide. Registration of whole tissue slices may be adversely affected by distorted tissue regions; consequently, regional registration has been found to be more effective. In this paper, we propose a new approach to accurate and robust registration of regions of interest in whole slide images, introducing the idea of multi-scale attention for registration.

Results
Using the mean similarity index as the metric, the proposed algorithm (mean ± SD 0.84 ± 0.11), followed by a fine registration algorithm (0.86 ± 0.08), outperformed the state-of-the-art linear whole tissue registration algorithm (0.74 ± 0.19) and the regional version of that algorithm (0.81 ± 0.15). The proposed algorithm also outperforms the state-of-the-art nonlinear registration algorithm for whole slide images (original: 0.82 ± 0.12, regional: 0.77 ± 0.22) and a recently proposed patch-based registration algorithm for medical images (patch size 256: 0.79 ± 0.16, patch size 512: 0.77 ± 0.16).

Conclusion
The multi-scale attention mechanism yields a more robust and accurate solution to regional registration of whole slide images that are corrupted in places by major histological artifacts in the imaged tissue.
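The similarity index used to score registrations above is typically a structural similarity (SSIM)-style measure. The sketch below shows a simplified, single-window version of the SSIM formula in plain NumPy; real evaluations compute it over local sliding windows and average, and this function is illustrative, not the paper's exact metric.

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified (global-statistics) structural similarity between two
    images of equal shape. Standard SSIM averages this quantity over
    local windows; the single-window form shown here illustrates the
    luminance/contrast/structure terms of the metric."""
    c1 = (0.01 * data_range) ** 2  # standard stabilizing constants
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```

Identical images score exactly 1.0, and the score drops as the registered pair diverges in brightness, contrast, or structure, which is why a mean similarity index is a natural summary statistic for registration quality.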