Temporal lobe injury (TLI), a form of brain damage, is a major neurological complication of radiation therapy (RT). Because the injury is irreversible, early detection of TLI is critical. This paper develops a predictive pipeline, called deep longitudinal feature representations (DLFR), to accurately detect TLI at the presymptomatic stage by learning effective deep longitudinal feature representations. DLFR characterizes high-level information and developmental changes within and across subjects. It consists of four components: (i) extraction of deep features from a pretrained ResNet50 model; (ii) compression of the learned highly representative features by global max pooling; (iii) fusion of deep longitudinal features to make full use of all follow-up data; and (iv) random forest-based prediction of diagnostic status. In total, 244 nasopharyngeal carcinoma patients before and after RT, with a follow-up period of 0–9 years, were included for analysis. All patients were divided into four latency groups, and the current latency group was used for training to predict the diagnostic status at the next latency. The AUCs for the three predicted latency groups using DLFR were 0.64 ± 0.11, 0.76 ± 0.10, and 0.88 ± 0.05, compared with 0.56 ± 0.06, 0.63 ± 0.03, and 0.53 ± 0.04 for radiomics features and 0.60 ± 0.09, 0.52 ± 0.03, and 0.58 ± 0.06 for histogram of oriented gradients features. Most importantly, the AUCs for the three predicted latency groups in white matter regions were 0.66 ± 0.10, 0.80 ± 0.09, and 0.78 ± 0.09. Our proposed method can dynamically detect TLI at the presymptomatic stage, enabling the administration of preventive neurological intervention.
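The four-component DLFR pipeline described above can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the ResNet50 feature extraction is replaced with random stand-in feature maps (assumed to be 7×7×2048, the shape of ResNet50's final convolutional output), the number of follow-up time points and subject counts are illustrative, and the classifier is scikit-learn's default random forest.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def global_max_pool(feature_maps):
    """Compress an (H, W, C) feature map to a (C,) vector by max pooling."""
    return feature_maps.max(axis=(0, 1))

def fuse_longitudinal(per_timepoint_vectors):
    """Fuse features across follow-up scans by concatenation (one assumption
    about the fusion step; the paper does not specify the operator here)."""
    return np.concatenate(per_timepoint_vectors)

rng = np.random.default_rng(0)

def subject_features(n_timepoints=3):
    # Stand-in for pretrained ResNet50 conv features at each follow-up scan.
    return fuse_longitudinal([global_max_pool(rng.random((7, 7, 2048)))
                              for _ in range(n_timepoints)])

# Illustrative cohort: 40 subjects, binary diagnostic status labels.
X = np.stack([subject_features() for _ in range(40)])
y = rng.integers(0, 2, size=40)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(X.shape)  # (40, 6144): 3 time points x 2048 pooled features
```

The concatenation-based fusion keeps one fixed-length vector per subject regardless of image size, which is what allows a standard random forest to consume the longitudinal features directly.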
Locating the lung fields is a critical and fundamental processing stage in the automated analysis of chest radiographs (CXRs) for pulmonary disorders. During routine examination, using both frontal and lateral CXRs can benefit the clinical diagnosis of cardiothoracic and lung diseases. However, accurate segmentation of the lung fields on both frontal and lateral CXRs remains challenging due to the blurry lung field boundary on lateral CXRs and the poor generalization ability of existing models. Existing deep learning-based methods have focused on lung field segmentation on frontal CXRs, and their generalization to different types of CXRs (e.g., pediatric CXRs) and new lung diseases (e.g., COVID-19) has not been tested. In this paper, a view identification assisted fully convolutional network (VI-FCN) is proposed to segment lung fields on frontal and lateral CXRs simultaneously. The VI-FCN consists of an FCN branch for lung field segmentation and a view identification branch that identifies frontal and lateral CXRs and enhances the segmentation. To improve the generalization ability of the VI-FCN, six public datasets and our own frontal and lateral CXRs (over 2000 CXRs) were collected for training. Segmentation of the lung fields on the Japanese Society of Radiological Technology (JSRT) dataset yields a mean Dice similarity coefficient (DSC) of 0.979 ± 0.008, a mean Jaccard index (Ω) of 0.959 ± 0.016, and a mean boundary distance (MBD) of 1.023 ± 0.487 mm. In addition, the VI-FCN achieves a mean DSC of 0.973 ± 0.010, a mean Ω of 0.947 ± 0.018, and a mean MBD of 1.923 ± 0.755 mm for lung field segmentation on our lateral CXRs. The experiments demonstrate the superior performance of the proposed VI-FCN over most existing state-of-the-art methods.
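The overlap metrics reported above, the Dice similarity coefficient (DSC) and the Jaccard index (Ω), can be computed directly from binary segmentation masks. A small illustration with toy 4×4 masks (not the paper's data) follows; note the two metrics are algebraically related by DSC = 2Ω / (1 + Ω).

```python
import numpy as np

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def jaccard(pred, gt):
    """Jaccard index (intersection over union) between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

# Toy masks: 8 ground-truth pixels, 8 predicted pixels, 6 overlapping.
gt = np.zeros((4, 4), dtype=bool)
gt[0:2, :] = True          # rows 0-1 are "lung"
pred = np.zeros((4, 4), dtype=bool)
pred[0:2, 0:3] = True      # 6 true positives
pred[2, 0:2] = True        # 2 false positives

print(dice(pred, gt))      # 2*6 / (8+8) = 0.75
print(jaccard(pred, gt))   # 6 / 10   = 0.6
```

The mean boundary distance (MBD) additionally requires contour extraction and nearest-neighbor distances between boundary pixels, so it is omitted from this sketch.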
Moreover, the proposed VI-FCN achieves promising results on untrained pediatric CXRs and COVID-19 datasets.

INDEX TERMS Chest radiographs, lung field segmentation, generalization ability, COVID-19.

YUHUA XI received the Bachelor's degree in Biomedical Engineering from Southern Medical University, Guangzhou, China, in 2018. She is currently pursuing the Master of Engineering degree with the Department of Biomedical Engineering, Southern Medical University. Her research focuses on the segmentation of medical images and bone suppression in CXRs.