Accurate segmentation of the esophagus and esophageal tumors from Computed Tomography (CT) images can meaningfully assist doctors in the diagnosis and treatment of esophageal cancer patients. However, the small proportion of the CT image occupied by the esophageal region and the irregular shape of the esophagus make segmentation difficult. Moreover, in practical applications, not every esophagus or esophageal-cancer morphology can be included in the training set, so the generalization ability of the model is critical. Since some tissues and organs adjacent to the esophagus are visually similar to the esophagus and esophageal tumors, ensuring that the network extracts effective discriminative features is the focus of this work. In this paper, a novel U-Net variant, Channel-Attention U-Net, is proposed to segment the esophagus and esophageal cancer from CT slices. The network combines a Channel Attention Module (CAM), which distinguishes the esophagus from surrounding tissues by emphasizing or inhibiting channel features, with a Cross-level Feature Fusion Module (CFFM), which strengthens the generalization ability of the network by using high-level features to weight low-level features. Because high-level features encode organ-specific semantic information while low-level features encode details such as edges and contours, the network can learn the specific detailed features of a given organ. In addition, to locate the esophageal region more accurately, a 3D semi-automatic method for segmenting the esophagus and esophageal cancer is proposed. The network is trained on 46,400 CT images, with a further 11,600 images (20% of the dataset) held out as the validation set; finally, 7,250 CT images were used as the test set to evaluate performance.
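The two modules described above can be illustrated with a minimal NumPy sketch. This assumes a squeeze-and-excitation-style gate for the CAM and a simple multiplicative gate for the CFFM; the paper's exact layer configuration is not specified in the abstract, so the function and parameter names here are hypothetical.

```python
import numpy as np

def channel_attention(x, w1, w2):
    """CAM sketch (assumed squeeze-and-excitation style): emphasize or
    inhibit channels of a feature map via learned per-channel gates.
    x: feature map (C, H, W); w1: (C, C//r); w2: (C//r, C)."""
    squeeze = x.mean(axis=(1, 2))                  # global average pool -> (C,)
    hidden = np.maximum(0.0, squeeze @ w1)         # ReLU bottleneck
    gate = 1.0 / (1.0 + np.exp(-(hidden @ w2)))    # sigmoid gate in (0, 1)
    return x * gate[:, None, None]                 # reweight each channel

def cross_level_fusion(low, high_gate):
    """CFFM sketch: weight low-level edge/contour features with a
    per-channel gate derived from high-level semantic features.
    low: (C, H, W); high_gate: (C,) gate values in (0, 1)."""
    return low * high_gate[:, None, None]
```

Because the sigmoid gate lies strictly between 0 and 1, channels carrying distracting features (e.g., tissue visually similar to the esophagus) can be suppressed while informative channels are preserved.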
The experimental results show that our network reaches an IoU of 0.625, a Dice coefficient of 0.732, and a Hausdorff distance of 3.193.
INDEX TERMS Esophageal cancer, channel attention mechanism, deep learning, computed tomography (CT).
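The reported IoU and Dice values are the standard overlap metrics for binary segmentation masks; a minimal sketch of their computation (for a single predicted/ground-truth mask pair) is shown below. Note that for a single pair, Dice = 2·IoU / (1 + IoU), but values averaged over a whole test set, as reported above, need not satisfy this identity exactly.

```python
import numpy as np

def iou_and_dice(pred, target):
    """Overlap metrics for binary segmentation masks.
    pred, target: boolean arrays of the same shape.
    IoU = |A ∩ B| / |A ∪ B|; Dice = 2|A ∩ B| / (|A| + |B|)."""
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0   # both masks empty -> perfect score
    dice = 2 * inter / total if total else 1.0
    return iou, dice
```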
We propose a deep-learning computational ghost imaging (CGI) scheme that achieves sub-Nyquist, high-quality image reconstruction. Unlike second-order-correlation CGI and compressive-sensing CGI, which require a large number of illumination patterns together with a one-dimensional (1-D) light intensity sequence (LIS) for image reconstruction, the proposed deep neural network (DAttNet) restores the target image using the 1-D LIS alone. DAttNet is trained on simulation data and retrieves the target image from experimental data. The experimental results indicate that the proposed scheme provides high-quality images at sub-Nyquist sampling ratios and outperforms conventional and compressive-sensing CGI methods under such conditions (e.g., a 5.45% sampling ratio). The proposed scheme has potential practical applications in underwater, real-time, and dynamic CGI.
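For context, the conventional second-order-correlation baseline that the abstract contrasts against reconstructs the image as G(x, y) = ⟨I·P(x, y)⟩ − ⟨I⟩⟨P(x, y)⟩, correlating the bucket intensity sequence with the known illumination patterns. A minimal NumPy sketch follows; the function name is hypothetical, and DAttNet itself (the paper's contribution) is a learned replacement for this step that needs only the 1-D LIS.

```python
import numpy as np

def correlation_cgi(patterns, intensities):
    """Conventional second-order-correlation CGI reconstruction:
    G = <I * P> - <I><P>, i.e. the covariance between the 1-D bucket
    signal and each pixel of the illumination patterns.
    patterns: (M, H, W) illumination patterns; intensities: (M,) LIS."""
    I = np.asarray(intensities, dtype=float)
    mean_IP = np.tensordot(I, patterns, axes=1) / len(I)  # <I * P>, shape (H, W)
    return mean_IP - I.mean() * patterns.mean(axis=0)     # subtract <I><P>
```

With enough patterns, G approaches the covariance between the bucket signal and each pixel, which is proportional to the (mean-centered) target image; the number of patterns required is exactly what the deep-learning scheme aims to reduce.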