We propose a novel deep-learning-based system for vessel segmentation. Existing CNN-based methods have mostly relied on local appearance learned on the regular image grid, without considering the graphical structure of vessel shape. To address this, we incorporate a graph convolutional network into a unified CNN architecture, where the final segmentation is inferred by combining the two types of features. The proposed method can be applied to any CNN-based vessel segmentation method to enhance its performance. Experiments show that the proposed method outperforms the current state-of-the-art methods on two retinal image datasets as well as a coronary artery X-ray angiography dataset.
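The abstract does not give implementation details, but the core idea of combining grid features with graph-structured features can be sketched as follows. This is a hypothetical toy illustration, not the paper's architecture: per-node CNN features at vessel keypoints are refined by one symmetric-normalized graph-convolution layer and concatenated with the original features. All names (`gcn_layer`, the chain adjacency) are illustrative assumptions.

```python
import numpy as np

def gcn_layer(X, A, W):
    """One graph-convolution layer: ReLU(D^-1/2 (A+I) D^-1/2 X W)."""
    A_hat = A + np.eye(A.shape[0])          # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))  # symmetric degree normalization
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ X @ W, 0.0)

# Toy example: 4 vessel keypoints on a chain graph, 8-dim CNN features each.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))                 # per-node features from the CNN grid
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # chain connectivity of the vessel
W = rng.normal(size=(8, 8))
H = gcn_layer(X, A, W)                      # graph-refined features
fused = np.concatenate([X, H], axis=1)      # combine grid and graph features
```

In a real system the fused features would feed a segmentation head; here they simply demonstrate the feature-combination step.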
We propose a framework for localization and classification of masses in breast ultrasound (BUS) images. We found experimentally that training convolutional-neural-network-based mass detectors with large, weakly annotated datasets is a non-trivial problem, while detectors trained on small, strongly annotated datasets may overfit. To overcome these problems, we use a weakly annotated dataset together with a smaller, strongly annotated dataset in a hybrid manner. We propose a systematic weakly and semi-supervised training scenario with appropriate training-loss selection. Experimental results show that the proposed method can successfully localize and classify masses with less annotation effort. Results obtained with only 10 strongly annotated images plus weakly annotated images were comparable to those obtained with 800 strongly annotated images (95% confidence interval of the difference: -3.00% to 5.00%) in terms of the correct localization (CorLoc) measure, the fraction of images whose prediction has intersection over union with the ground truth higher than 0.5. With the same number of strongly annotated images, incorporating additional weakly annotated images gives a 4.5-percentage-point increase in CorLoc, from 80.00% to 84.50% (with 95% confidence intervals 76.00%-83.75% and 81.00%-88.00%). The effects of different algorithmic details and varying amounts of data are presented through ablation analysis.

Index Terms: Breast ultrasound, convolutional neural networks, mass classification, mass localization, semi-supervised learning, weakly supervised learning.
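The CorLoc measure used above is defined precisely in the abstract: the percentage of images whose predicted box overlaps the ground truth with intersection over union above 0.5. A minimal sketch of that computation, assuming axis-aligned boxes given as `(x1, y1, x2, y2)` (the function names are illustrative, not from the paper):

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def corloc(predictions, ground_truths, thresh=0.5):
    """Percentage of images whose predicted box has IoU > thresh with GT."""
    hits = sum(iou(p, g) > thresh for p, g in zip(predictions, ground_truths))
    return 100.0 * hits / len(ground_truths)

# Toy example: first prediction is exact, second misses entirely -> 50.0.
preds = [(0, 0, 10, 10), (0, 0, 10, 10)]
gts   = [(0, 0, 10, 10), (20, 20, 30, 30)]
score = corloc(preds, gts)
```

One prediction per image is assumed here, matching the per-image nature of the CorLoc measure.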
A scene text spotter is composed of text detection and recognition modules. Many studies have been conducted to unify these modules into an end-to-end trainable model to achieve better performance. A typical architecture places the detection and recognition modules in separate branches, and RoI pooling is commonly used to let the branches share a visual feature. However, there is still a chance of establishing a more complementary connection between the modules when adopting a recognizer that uses an attention-based decoder and a detector that represents spatial information of the character regions. This is possible because the two modules share a common sub-task: finding the location of the character regions. Based on this insight, we construct a tightly coupled single-pipeline model. This architecture is formed by utilizing detection outputs in the recognizer and propagating the recognition loss through the detection stage. The character score map helps the recognizer attend better to the character center points, and propagating the recognition loss to the detector module enhances the localization of the character regions. Also, a strengthened sharing stage allows feature rectification and boundary localization of arbitrary-shaped text regions. Extensive experiments demonstrate state-of-the-art performance on publicly available straight and curved benchmark datasets.
Objectives To develop a convolutional neural network system to jointly segment and classify a hepatic lesion selected by user clicks in ultrasound images. Methods In total, 4309 anonymized ultrasound images of 3873 patients with hepatic cyst (n = 1214), hemangioma (n = 1220), metastasis (n = 1001), or hepatocellular carcinoma (HCC) (n = 874) were collected and annotated. The images were divided into 3909 training and 400 test images. Our network is composed of one shared encoder and two inference branches for segmentation and classification, and it takes as input the concatenation of an input image and two Euclidean distance maps of the foreground and background clicks provided by a user. The performance of hepatic lesion segmentation was evaluated with the Jaccard index (JI), and the performance of classification with accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC). Results We achieved performance improvements by jointly conducting segmentation and classification. In the segmentation-only system, the mean JI was 68.5%. In the classification-only system, the accuracy of classifying four types of hepatic lesions was 79.8%. The mean JI and classification accuracy were 68.5% and 82.2%, respectively, for the proposed joint system. The optimal sensitivity and specificity and the AUROC of classifying benign and malignant hepatic lesions with the joint system were 95.0%, 86.0%, and 0.970, respectively. The respective sensitivity, specificity, and AUROC for classifying four hepatic lesions with the joint system were 86.7%, 89.7%, and 0.947. Conclusions The proposed joint system exhibited favorable performance compared to the segmentation-only and classification-only systems.

Key Points
• The joint segmentation and classification system using deep learning accurately segmented and classified hepatic lesions selected by user clicks in US examinations.
• The joint segmentation and classification system for hepatic lesions in US images exhibited higher performance than the segmentation-only and classification-only systems.
• The joint segmentation and classification system could assist radiologists with minimal experience in US imaging by characterizing hepatic lesions.
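The click-guided input described in the Methods, an image concatenated with Euclidean distance maps of foreground and background clicks, can be sketched in a few lines. This is a minimal illustration under assumed conventions (single-channel image, `(row, col)` click coordinates, channel-first stacking); the paper's exact preprocessing is not specified in the abstract.

```python
import numpy as np

def click_distance_map(clicks, shape):
    """Per-pixel Euclidean distance to the nearest click, clicks as (row, col)."""
    rows, cols = np.indices(shape)
    pts = np.asarray(clicks, dtype=float)              # shape (K, 2)
    d = np.sqrt((rows[..., None] - pts[:, 0]) ** 2 +
                (cols[..., None] - pts[:, 1]) ** 2)    # (H, W, K) distances
    return d.min(axis=-1)                              # nearest-click distance

# Toy single-channel 'image' plus foreground/background click maps.
image = np.zeros((64, 64), dtype=np.float32)
fg = click_distance_map([(32, 32)], image.shape)           # one foreground click
bg = click_distance_map([(5, 5), (60, 60)], image.shape)   # two background clicks
net_input = np.stack([image, fg, bg], axis=0)              # (3, 64, 64) tensor
```

In practice the distance maps are often truncated or normalized before concatenation; that step is omitted here since the abstract does not describe it.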