The novel coronavirus pneumonia (COVID-19) outbreak began in 2019 and has had a profound impact on the world economy and people's lives. Deep learning networks, now a mainstream image-processing approach, have been constructed to extract medical features from chest CT images and have been adopted as a new detection method in clinical practice. However, COVID-19 lesions in CT images are widely distributed and exhibit many local features, which makes direct diagnosis with existing deep learning models difficult. Based on these medical characteristics of COVID-19 CT images, we propose a parallel bi-branch model (Trans-CNN Net) built on a Transformer module and a Convolutional Neural Network (CNN) module, making full use of the local feature extraction capability of the CNN and the global feature extraction advantage of the Transformer. Following the principle of cross-fusion, we design a bidirectional feature fusion structure in which the features extracted by the two branches are fused in both directions, and the parallel branches are combined by a feature fusion module, yielding a model that can extract features at different scales. On the COVIDx-CT dataset, the model achieves a classification accuracy of 96.7%, clearly higher than that of a typical CNN (ResNet-152, 95.2%) and a typical Transformer network (DeiT-B, 75.8%). This model provides a new approach to the diagnosis of COVID-19; by combining deep learning with medical imaging, it advances real-time diagnosis of lung disease caused by COVID-19 infection and supports reliable and rapid diagnosis, thereby helping to save lives.
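The bidirectional cross-fusion idea described above can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy, not the paper's actual module: the function name `cross_fuse`, the linear projection weights, and the additive residual form are all illustrative choices, since the abstract does not specify the fusion details.

```python
import numpy as np

rng = np.random.default_rng(0)

def cross_fuse(f_cnn, f_trans, w_t2c, w_c2t):
    """Toy bidirectional cross-fusion: each branch is augmented with a
    linear projection of the other branch's features (illustrative only)."""
    fused_cnn = f_cnn + f_trans @ w_t2c    # inject global context into the local (CNN) branch
    fused_trans = f_trans + f_cnn @ w_c2t  # inject local detail into the global (Transformer) branch
    return fused_cnn, fused_trans

# Hypothetical feature maps: 4 spatial tokens, 8-dim features per branch
f_cnn = rng.standard_normal((4, 8))
f_trans = rng.standard_normal((4, 8))
w_t2c = rng.standard_normal((8, 8)) * 0.1
w_c2t = rng.standard_normal((8, 8)) * 0.1

fc, ft = cross_fuse(f_cnn, f_trans, w_t2c, w_c2t)
```

With zero projection weights the sketch degenerates to two independent branches, which is one way to see what the cross-fusion terms add.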
Background: Predicting the mutation statuses of 2 essential pathogenic genes [epidermal growth factor receptor (EGFR) and Kirsten rat sarcoma (KRAS)] in non-small cell lung cancer (NSCLC) based on CT is valuable for targeted therapy because it is a non-invasive and less costly method. Although deep learning has achieved substantial success in computer vision, using CT imaging to predict gene mutations remains challenging because of the limited size of available datasets.
Methods: We propose a multi-channel and multi-task deep learning (MMDL) model for the simultaneous prediction of EGFR and KRAS mutation statuses based on CT images. First, we decomposed each 3D lung nodule into 9 views. Then, for each view, we used a pre-trained inception-attention-resnet model to learn the features of the nodule. The 9 inception-attention-resnet models were combined with adaptive weights to predict the gene mutation types of lung nodules, and the proposed MMDL model could be trained end-to-end. The MMDL model used multiple channels to characterize each nodule more comprehensively and integrated patient personal information into the learning process.
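The adaptive weighting of the 9 view-specific models can be sketched as a softmax-weighted ensemble. This is a minimal illustration under assumptions: the function names and the choice of softmax-normalized per-view weights are hypothetical, as the abstract does not state the exact weighting mechanism.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - x.max())
    return e / e.sum()

def adaptive_ensemble(view_probs, view_logits):
    """Combine per-view mutation probabilities with softmax-normalized
    weights (one learnable logit per view); returns (fused prob, weights)."""
    w = softmax(view_logits)
    return float(w @ view_probs), w

# Hypothetical P(EGFR mutation) from 9 view-specific models
view_probs = np.array([0.80, 0.75, 0.90, 0.60, 0.85, 0.70, 0.65, 0.80, 0.78])
view_logits = np.zeros(9)  # untrained logits -> uniform weights
p, w = adaptive_ensemble(view_probs, view_logits)
```

With uniform weights the fused prediction equals the mean of the per-view probabilities; training would adjust the logits so that more informative views receive larger weights.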
Results: We trained the proposed MMDL model using a dataset of 363 patients collected by our partner hospital and conducted a multi-center validation on 162 patients in The Cancer Imaging Archive (TCIA) public dataset. The accuracies for the prediction of EGFR and KRAS mutations were, respectively, 79.43% and 72.25% in the training dataset and 75.06% and 69.64% in the validation dataset.
Conclusions: The experimental results demonstrated that the proposed MMDL model outperformed the latest methods in predicting EGFR and KRAS mutations in NSCLC.