Abstract. Automatic localization and labeling of vertebrae in 3D medical images play an important role in many clinical tasks, including pathological diagnosis, surgical planning, and postoperative assessment. However, pathological cases present unusual conditions, such as abnormal spine curvature, bright imaging artifacts caused by metal implants, and a limited field of view, which increase the difficulty of accurate localization. In this paper, we propose an automatic and fast algorithm to localize and label vertebra centroids in 3D CT volumes. First, we deploy a deep image-to-image network (DI2IN) to initialize the vertebra locations, employing a convolutional encoder-decoder architecture together with multi-level feature concatenation and deep supervision. Next, the centroid probability maps from the DI2IN are iteratively evolved with a message-passing scheme based on the mutual relations of vertebra centroids. Finally, the localization results are refined with sparsity regularization. The proposed method is evaluated on a public dataset of 302 spine CT volumes with various pathologies, and it outperforms other state-of-the-art methods in localization accuracy, with an average run time of about 3 seconds per case. To further boost performance, we retrain the DI2IN on an additional 1,000+ 3D CT volumes from different patients. To the best of our knowledge, this is the first time that more than 1,000 expert-annotated 3D CT volumes have been used for an anatomical landmark detection task. Our experimental results show that training with such a large dataset significantly improves performance, and the overall identification rate reaches 90%, for the first time to our knowledge.
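The message-passing refinement described above can be sketched in one dimension: each vertebra's centroid probability map is multiplied by messages from its neighbours, built from a Gaussian model of inter-vertebra offsets. The 1-D simplification, the fixed offsets, and the Gaussian width `sigma` are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

def message_passing(prob_maps, offsets, sigma=2.0, n_iters=3):
    """Iteratively refine per-vertebra centroid probability maps
    (1-D along the spine axis for illustration).

    prob_maps: (n_vertebrae, length) array of initial probabilities.
    offsets:   offsets[i] is the assumed distance (in voxels) between
               vertebra i and vertebra i+1.
    """
    n, length = prob_maps.shape
    x = np.arange(length)
    maps = prob_maps.copy()
    for _ in range(n_iters):
        new = maps.copy()
        for i in range(n):
            msgs = []
            if i > 0:  # message from the vertebra above (expected i-1 + offset)
                k = np.exp(-0.5 * ((x[:, None] - x[None, :] - offsets[i - 1]) / sigma) ** 2)
                msgs.append(k @ maps[i - 1])
            if i < n - 1:  # message from the vertebra below (expected i+1 - offset)
                k = np.exp(-0.5 * ((x[:, None] - x[None, :] + offsets[i]) / sigma) ** 2)
                msgs.append(k @ maps[i + 1])
            if msgs:
                new[i] = maps[i] * np.mean(msgs, axis=0)
                s = new[i].sum()
                if s > 0:
                    new[i] /= s  # renormalize to a probability map
        maps = new
    return maps
```

In this toy setting, a spurious response that is inconsistent with the neighbouring vertebra's position is suppressed after a few iterations, which mirrors the role the evolution step plays in the pipeline.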
Accurate detection and segmentation of anatomical structures in ultrasound images are crucial for clinical diagnosis and biometric measurements. Although ultrasound imaging is widely used thanks to advantages such as low cost and portability, fuzzy border definition and abundant artifacts pose great challenges for automatically detecting and segmenting complex anatomical structures. In this paper, we propose a multi-domain regularized deep learning method to address this problem. By leveraging transfer learning across domains, the feature representations are effectively enhanced, and the results are further improved by iterative refinement. Moreover, our method is efficient: it takes advantage of a fully convolutional network and is formulated as an end-to-end learning framework for detection and segmentation. Extensive experiments on a large-scale database show that our method achieves superior detection and segmentation accuracy, outperforming other methods by a significant margin and demonstrating competitive capability even compared to human performance.
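The core idea of multi-domain regularization can be illustrated with a toy objective: shared weights are fit to a primary domain while a loss on an auxiliary domain acts as a regularizer. The linear model, the random data, and the weight `lam` are illustrative assumptions, not the paper's actual network or training setup.

```python
import numpy as np

# Toy multi-domain regularized objective: one shared linear layer,
# primary-domain loss plus a weighted auxiliary-domain loss.
rng = np.random.default_rng(0)
X_main, Y_main = rng.normal(size=(32, 4)), rng.normal(size=(32, 2))
X_aux,  Y_aux  = rng.normal(size=(32, 4)), rng.normal(size=(32, 2))
W = rng.normal(scale=0.1, size=(4, 2))  # shared weights across domains
lam = 0.3                               # cross-domain regularization strength

def mse(W, X, Y):
    return float(np.mean((X @ W - Y) ** 2))

def joint(W):
    return mse(W, X_main, Y_main) + lam * mse(W, X_aux, Y_aux)

loss0 = joint(W)
for _ in range(200):  # plain gradient descent on the joint objective
    # Gradients of the two per-domain losses (up to a constant factor)
    g_main = 2 * X_main.T @ (X_main @ W - Y_main) / len(X_main)
    g_aux  = 2 * X_aux.T  @ (X_aux  @ W - Y_aux)  / len(X_aux)
    W -= 0.05 * (g_main + lam * g_aux)
```

The auxiliary term pulls the shared weights toward representations that work on both domains, which is the intuition behind enhancing features via cross-domain transfer.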
arXiv:1607.01855v1 [cs.CV] 7 Jul 2016

Recently, fully convolutional networks (FCNs), a variant of CNNs, have achieved state-of-the-art performance on image segmentation tasks [6].
This paper proposes a fully automatic approach to computing the Nuchal Translucency (NT) measurement in ultrasound scans of the mid-sagittal plane of the fetal head. This improves upon current NT measurement methods, which require manual placement of NT measurement points or user guidance in semi-automatic segmentation of the NT region. The algorithm starts by finding the pose of the fetal head using discriminative learning-based detectors. The fetal head serves as a robust anchoring structure, and the NT region is estimated from the statistical relationship between the fetal head and the NT region. Next, the pose of the NT region is locally refined, and its inner and outer edges are approximately determined via Dijkstra's shortest path applied to the edge-enhanced image. Finally, these two region edges are used to define foreground and background seeds for accurate graph-cut segmentation. The NT measurement is computed from the segmented region. Experiments show that the algorithm efficiently and effectively detects the NT region and provides accurate NT measurements, suggesting suitability for clinical use.
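The edge-tracing step can be illustrated with Dijkstra's algorithm on a small 2-D cost grid, e.g. an inverted edge-enhanced image in which pixels on a strong edge have low cost, so the shortest path follows the edge. The 4-connectivity and the grid values here are illustrative choices, not the paper's exact formulation.

```python
import heapq

def dijkstra_path(cost, start, goal):
    """Minimum-cost path over a 2-D cost grid (4-connected).

    cost:  list of lists of per-pixel costs (low cost = strong edge).
    start, goal: (row, col) tuples.
    Returns the path as a list of (row, col) tuples, start to goal.
    """
    h, w = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev = {}
    pq = [(dist[start], start)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == goal:
            break
        if d > dist.get((r, c), float("inf")):
            continue  # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = (r, c)
                    heapq.heappush(pq, (nd, (nr, nc)))
    # Reconstruct the path by walking predecessors back from the goal.
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    path.append(start)
    return path[::-1]
```

On a grid whose middle row has low cost (a horizontal "edge"), the returned path runs along that row; in the pipeline, the two traced paths would then seed the foreground/background labels for the graph-cut stage.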