The automatic analysis of endoscopic images to help endoscopists accurately identify the types and locations of esophageal lesions remains challenging. In this paper, we propose a novel multi-task deep learning model for automatic diagnosis. The model is not intended to simply replace endoscopists in decision making; rather, it provides additional supporting information so that endoscopists can correct false predictions made by the diagnosis system. To help endoscopists improve diagnostic accuracy in identifying lesion types, an image retrieval module is added to the classification task to provide an additional confidence level for the predicted type of esophageal lesion. In addition, a mutual attention module is added to the segmentation task to improve its performance in localizing esophageal lesions. The proposed model is evaluated and compared with other deep learning models on a dataset of 1003 endoscopic images: 290 of esophageal cancer, 473 of esophagitis, and 240 normal. The experimental results show the promising performance of our model, with a high classification accuracy of 96.76% and a segmentation Dice coefficient of 82.47%. Consequently, the proposed multi-task deep learning model can be an effective tool to help endoscopists assess esophageal lesions.
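The abstract does not specify how the retrieval module turns retrieved images into a confidence level. A minimal sketch, under the assumption (not stated in the abstract) that the confidence is the agreement rate between the classifier's prediction and the labels of the top-k retrieved training images:

```python
from collections import Counter

def retrieval_confidence(predicted_label, neighbor_labels):
    """Hypothetical confidence heuristic: the fraction of top-k retrieved
    images whose label agrees with the classifier's predicted label."""
    counts = Counter(neighbor_labels)
    return counts[predicted_label] / len(neighbor_labels)

# Classifier predicts "esophagitis"; 4 of the 5 retrieved neighbors agree.
conf = retrieval_confidence(
    "esophagitis",
    ["esophagitis", "esophagitis", "cancer", "esophagitis", "esophagitis"],
)
print(conf)  # 0.8
```

A low agreement rate would flag the prediction for closer review by the endoscopist, which matches the paper's stated goal of supporting rather than replacing clinical judgment.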
It is challenging for endoscopists to accurately detect esophageal lesions during gastrointestinal endoscopic screening because different lesions can appear visually similar in shape, size, and texture across patients. Moreover, endoscopists face a heavy daily screening workload, which motivates the development of a computer-aided diagnostic tool that classifies and segments lesions in endoscopic images to reduce their burden. We therefore propose a multi-task classification and segmentation (MTCS) model comprising the Esophageal Lesions Classification Network (ELCNet) and the Esophageal Lesions Segmentation Network (ELSNet). The ELCNet classifies the types of esophageal lesions, and the ELSNet identifies lesion regions. We created a dataset of 805 esophageal images from 255 patients and 198 images from 64 patients to train and evaluate the MTCS model. Compared with other methods, the proposed model not only achieved a high classification accuracy (93.43%) but also a high Dice similarity coefficient (77.84%) in segmentation. In conclusion, the MTCS model can boost the performance of endoscopists in detecting esophageal lesions, as it accurately classifies and segments the lesions, and it is a potential assistant for reducing the risk of oversight.
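Both abstracts report segmentation quality as a Dice similarity coefficient. For readers unfamiliar with the metric, a minimal implementation over flattened binary masks (the standard definition, 2|A∩B| / (|A| + |B|)):

```python
def dice_coefficient(pred_mask, true_mask):
    """Dice similarity coefficient between two binary masks given as
    flattened sequences of 0/1 values: 2*|A intersect B| / (|A| + |B|).
    Returns 1.0 when both masks are empty (a common convention)."""
    intersection = sum(p * t for p, t in zip(pred_mask, true_mask))
    total = sum(pred_mask) + sum(true_mask)
    return 2.0 * intersection / total if total else 1.0

pred = [1, 1, 0, 1, 0, 0]  # predicted lesion pixels
true = [1, 0, 0, 1, 1, 0]  # ground-truth lesion pixels
print(dice_coefficient(pred, true))  # 0.666... (= 2/3)
```

A Dice coefficient of 77.84%, as reported for the ELSNet, means the predicted lesion masks overlap substantially with the ground-truth annotations.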
Image deraining is increasingly important in computer vision, yet fast deraining algorithms for multiple images without temporal and spatial features are lacking. To fill this gap, an efficient image-deraining algorithm based on dual adjacent indexing and multiple deraining layers is proposed to increase deraining efficiency. The method rests on two components: the dual adjacent indexing structure and a joint training scheme over multiple deraining layers. The dual adjacent structure indexes pixels from the adjacent features of the previous layer and merges them with the features produced by the deraining layers; the merged features are then reshaped in preparation for the loss computation. The joint training scheme uses the pixelshuffle operation in the deraining layers to prepare multiple deraining feature maps for the multi-loss functions, which jointly compute structural similarity from the reshaped and deraining features. The features produced by the four deraining layers are concatenated along the channel dimension to obtain the total structural similarity and mean squared error. In experiments, the proposed model runs at more than 200 fps on primary rain datasets and maintains competitive results in single- and cross-dataset evaluations, demonstrating that it ranks among the most advanced methods in image deraining.
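The abstract describes a joint loss that sums structural similarity and mean squared error over the outputs of the deraining layers. A minimal stdlib-only sketch of such a joint multi-output loss; the single-window SSIM, the loss weighting `alpha`, and the function names are illustrative assumptions, not the paper's actual formulation (which uses windowed SSIM over channel-concatenated features):

```python
def mse(x, y):
    """Mean squared error between two flattened images."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def global_ssim(x, y, c1=1e-4, c2=9e-4):
    """Toy single-window SSIM over whole flattened images -- a stand-in
    for the windowed structural similarity used in deraining losses."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / n
    vy = sum((b - my) ** 2 for b in y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

def joint_loss(derained_outputs, target, alpha=0.84):
    """Hypothetical joint loss: sum over the deraining-layer outputs of
    a weighted combination of (1 - SSIM) and MSE against the target."""
    return sum(alpha * (1.0 - global_ssim(o, target))
               + (1.0 - alpha) * mse(o, target)
               for o in derained_outputs)

target = [0.2, 0.4, 0.6, 0.8]
# Two deraining layers that both perfectly reconstruct the target:
print(joint_loss([list(target), list(target)], target))  # 0.0
```

Summing the per-layer terms trains all deraining layers jointly, so intermediate layers receive a direct supervision signal rather than gradients only through the final output.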