In the field of cytogenetics, chromosome image analysis, or karyotyping from metaphase images, plays a vital role in the diagnosis, prognosis, and treatment assessment of various genetic disorders and cancers. This paper is a comprehensive review of the traditional and deep learning-based techniques utilized in the design of automated karyotyping systems (AKSs). Through this review, a detailed methodology is suggested for the design of an end-to-end automated karyotyping system (EEAKS), which follows a sequential multi-stage approach. Methods for each stage of the EEAKS are systematically surveyed by exploring the state-of-the-art literature, and the datasets and performance measures used in past studies are examined. Although numerous methods have been proposed over the past three decades, a completely automated framework has not yet emerged. Inferences from this study show that, while various traditional image processing strategies are utilized for pre-processing and segmentation, machine learning techniques are used only for classification. Among conventional classifiers, artificial neural networks are the most widely utilized, even though peak performance is achieved by support vector machines. However, owing to recent breakthroughs in computer vision, deep neural networks are progressively being adopted for developing automated systems. Deep neural networks have not yet been explored for the pre-processing stage of the EEAKS, whereas a limited number of methods based on convolutional neural networks (CNNs) are utilized in all other stages. This review recommends a hybrid CNN for the design of the EEAKS, in which every stage can be automated by a sub-CNN. A methodology for generating sufficiently large datasets, which is needed for further research in this area, is also discussed. The paper concludes with future research directions for the development of a fully automated end-to-end karyotyping system.
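To make the recommended multi-stage, sub-CNN design concrete, the following minimal PyTorch sketch chains a toy segmentation sub-network and a toy classification sub-network; every module, layer size, and class count here is an illustrative assumption rather than an architecture taken from the review.

```python
# Hypothetical sketch of a staged pipeline in which each EEAKS stage is
# delegated to its own sub-network (all names and shapes are illustrative).
import torch
import torch.nn as nn

class SegmentationSubCNN(nn.Module):
    """Toy network that predicts a chromosome mask from a metaphase image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.net(x)

class ClassificationSubCNN(nn.Module):
    """Toy classifier assigning the masked input to one of 24 chromosome classes."""
    def __init__(self, num_classes=24):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

class HybridEEAKS(nn.Module):
    """Chains the sub-CNNs: segment the metaphase image, then classify the masked image."""
    def __init__(self):
        super().__init__()
        self.segment = SegmentationSubCNN()
        self.classify = ClassificationSubCNN()

    def forward(self, image):
        mask = self.segment(image)           # stage 1: chromosome segmentation
        return self.classify(image * mask)   # stage 2: classification on the masked image

model = HybridEEAKS()
logits = model(torch.randn(1, 1, 256, 256))  # dummy grayscale metaphase image
```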
Accurate diagnosis and treatment of lung carcinoma depend on its pathological type and stage. Pathological analysis is normally performed by needle biopsy or surgery, so a noninvasive method for detecting the pathological type would be a valuable alternative. This work therefore aims at categorizing different types of lung cancer from multimodality images. The proposed approach involves two stages. First, a Blind/Referenceless Image Spatial Quality Evaluator (BRISQUE)-based approach is adopted to extract the slices showing lung abnormalities from the dataset. The selected slices are then fed to a novel shallow convolutional neural network model to detect adenocarcinoma, squamous cell carcinoma, and small cell carcinoma in the multimodality images. The classifier's efficacy is investigated by comparing precision, recall, area under the curve, and accuracy with pretrained models and existing methods. The results show that the suggested system achieves a testing accuracy of 95% on positron emission tomography/computed tomography (PET/CT) and 93% on CT images of the Lung-PET-CT-DX dataset, and 98% on the Lung3 dataset. Furthermore, a kappa score of 0.92 on PET/CT of Lung-PET-CT-DX and 0.98 on CT of Lung3 demonstrates the effectiveness of the presented system for lung cancer classification.
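As an illustration of the two-stage pipeline described above, the sketch below filters slices with a BRISQUE quality score and then classifies the retained slices with a small CNN; the `brisque_score` placeholder, the threshold direction and value, and the network layout are all assumptions, not the authors' configuration.

```python
# Illustrative two-stage pipeline:
# 1) keep only the slices whose BRISQUE score suggests an abnormality,
# 2) classify the retained slices with a shallow CNN.
import torch
import torch.nn as nn

def brisque_score(slice_2d) -> float:
    """Placeholder: plug in a BRISQUE implementation from an image-quality library."""
    raise NotImplementedError

def select_abnormal_slices(volume, threshold=40.0):
    """Stage 1: retain slices whose BRISQUE score crosses an assumed threshold."""
    return [s for s in volume if brisque_score(s) > threshold]

class ShallowLungCNN(nn.Module):
    """Stage 2: small three-block CNN for adenocarcinoma / squamous / small cell classes."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = ShallowLungCNN()
logits = model(torch.randn(4, 1, 128, 128))  # dummy batch of selected slices
```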
Cervical cell classification plays a key role in computer-based screening and diagnosis. This article focuses on cell classification and grading to differentiate the stages of cervical dysplasia. The proposed framework combines the wavelet transform and a convolutional neural network (CNN) to extract spectral and spatial features from Papanicolaou-stained (Pap) smear images. A correlation-based feature selection is adopted to retain the relevant features from the CNN model. A Random Forest classifier is then incorporated to perform the final classification.
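A hedged sketch of such a feature pipeline is given below: wavelet (spectral) statistics are combined with externally supplied CNN (spatial) features, a correlation filter removes redundant columns, and a Random Forest is trained. The wavelet choice, correlation threshold, and forest size are illustrative assumptions, not the article's settings.

```python
# Sketch: wavelet features + CNN features -> correlation-based selection -> Random Forest.
import numpy as np
import pandas as pd
import pywt
from sklearn.ensemble import RandomForestClassifier

def wavelet_features(gray_img: np.ndarray) -> np.ndarray:
    """Single-level 2D DWT; summarize each sub-band by its mean and standard deviation."""
    cA, (cH, cV, cD) = pywt.dwt2(gray_img, "haar")
    return np.array([f(band) for band in (cA, cH, cV, cD) for f in (np.mean, np.std)])

def drop_correlated(df: pd.DataFrame, threshold: float = 0.95) -> pd.DataFrame:
    """Correlation-based selection: drop one feature of every highly correlated pair."""
    corr = df.corr().abs()
    upper = corr.where(np.triu(np.ones(corr.shape, dtype=bool), k=1))
    to_drop = [c for c in upper.columns if (upper[c] > threshold).any()]
    return df.drop(columns=to_drop)

def train_rf(images, cnn_feats, labels):
    """`images`: grayscale Pap-smear crops, `cnn_feats`: (N, d) deep features,
    `labels`: dysplasia grades -- all assumed to be provided by earlier steps."""
    spectral = np.stack([wavelet_features(im) for im in images])
    feats = drop_correlated(pd.DataFrame(np.hstack([spectral, cnn_feats])))
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    clf.fit(feats.values, labels)
    return clf, feats.columns
```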
Segmentation plays an essential role in the design of an automated karyotyping system (AKS). It is pivotal to segment out the interphase cells and other debris usually found in input G-banded metaphase images, because the performance of AKSs degrades considerably when they are present. In this article, two semantic segmentation models are proposed. For this experiment, an annotated dataset is generated from G-banded metaphase images prepared at the Regional Cancer Centre (RCC), Thiruvananthapuram, Kerala, India. Inspired by the results of UNet, a lighter version, L-UNet, is developed and experimented with. It achieves a validation IoU (Intersection over Union) of 0.9809 and an F1-score of 0.9903 on the RCC dataset, and a test IoU of 0.9720 and an F1-score of 0.9858 on the CRCN-NE dataset. Since backbone-based semantic segmentation models represent the state of the art, an efficient model, Eff-UNet, is also proposed. In this model, EfficientNet-B3 acts as the backbone that extracts powerful features and UNet acts as the decoder that predicts the segmentation map. It achieves a validation IoU of 0.9842 and an F1-score of 0.9920 on the RCC dataset, and a test IoU of 0.7545 and an F1-score of 0.7778 on the CRCN-NE dataset. To arrive at this model, 25 encoder-decoder architectures were evaluated, with various top-performing CNNs (convolutional neural networks) as encoders and segmentation networks as decoders. The results are further compared with various segmentation models, and the best results are obtained with the proposed model.
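One common way to assemble an EfficientNet-B3 encoder with a UNet decoder (not necessarily the authors' implementation) is with the segmentation_models_pytorch package, as sketched below; the input size, channel count, and loss choice are assumptions.

```python
# Sketch of an EfficientNet-B3 + UNet encoder-decoder for binary chromosome masks.
import torch
import segmentation_models_pytorch as smp

model = smp.Unet(
    encoder_name="efficientnet-b3",   # backbone that extracts the features
    encoder_weights="imagenet",       # ImageNet pre-training for the encoder
    in_channels=1,                    # grayscale G-banded metaphase images (assumed)
    classes=1,                        # binary mask: chromosome vs. background/debris
)

# Dice loss as an example training objective; IoU and F1 can be computed
# from the thresholded predictions to match the metrics reported above.
loss_fn = smp.losses.DiceLoss(mode="binary")

x = torch.randn(2, 1, 512, 512)       # dummy batch (side length divisible by 32)
logits = model(x)
```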
Colon cancer is one of the most frequently diagnosed cancers and a leading cause of cancer deaths. Early detection and removal of malignant polyps, which are precursors of colon cancer, can enormously lessen the fatality rate. The detection and segmentation of polyps in colonoscopy is a challenging task even for an experienced colonoscopist, owing to variations in the size, shape, and texture of polyps and their close resemblance to the colon lining. Machine-assisted detection, localization, and segmentation of polyps during the screening procedure can therefore greatly assist clinicians. Autoencoder-based architectures used in polyp segmentation lack the ability to efficiently incorporate both local and long-range pixel dependencies. To address the challenges in the automatic segmentation of colon polyps, we propose an autoencoder architecture augmented with a feature attention module in the decoder. Salient features are extracted from RGB colonoscopic images using a residual, skip-connected autoencoder. The decoder attention module joins the spatial subspace with the feature subspace extracted from the deep residual convolutional neural network and enhances the feature weights for precise segmentation of polyp regions. Extensive experiments on four publicly available polyp datasets demonstrate that the proposed architecture outperforms state-of-the-art polyp segmentation approaches in terms of segmentation metrics (Dice and Jaccard scores).
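The sketch below is an assumption-level stand-in for such a decoder attention module, not the paper's exact design: an attention mask is computed jointly from the decoder (spatial) path and the encoder skip (feature) path and used to re-weight the decoder features.

```python
# Minimal decoder-side feature attention block (illustrative design).
import torch
import torch.nn as nn

class DecoderAttention(nn.Module):
    def __init__(self, dec_ch, skip_ch, inter_ch):
        super().__init__()
        self.theta = nn.Conv2d(dec_ch, inter_ch, kernel_size=1)   # decoder (spatial) path
        self.phi = nn.Conv2d(skip_ch, inter_ch, kernel_size=1)    # encoder skip (feature) path
        self.psi = nn.Conv2d(inter_ch, 1, kernel_size=1)          # fuse to a single attention map

    def forward(self, dec_feat, skip_feat):
        attn = torch.sigmoid(self.psi(torch.relu(self.theta(dec_feat) + self.phi(skip_feat))))
        return dec_feat * attn   # emphasize polyp regions, suppress colon-lining responses

# Example: re-weight a 64-channel decoder map using a 64-channel skip connection.
block = DecoderAttention(dec_ch=64, skip_ch=64, inter_ch=32)
out = block(torch.randn(1, 64, 88, 88), torch.randn(1, 64, 88, 88))
```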