We introduce a method for training neural networks to perform image or volume segmentation in which prior knowledge about the topology of the segmented object can be explicitly provided and then incorporated into the training process. By using the differentiable properties of persistent homology, a concept used in topological data analysis, we can specify the desired topology of segmented objects in terms of their Betti numbers and then drive the proposed segmentations to contain the specified topological features. Importantly, this process does not require any ground-truth labels, just prior knowledge of the topology of the structure being segmented. We demonstrate our approach in three experiments. Firstly, we create a synthetic task in which handwritten MNIST digits are de-noised, and show that using this kind of topological prior knowledge in the training of the network significantly improves the quality of the de-noised digits. Secondly, we perform an experiment in which the task is segmenting the myocardium of the left ventricle from cardiac magnetic resonance images. We show that incorporating prior knowledge of the topology of this anatomy improves the resulting segmentations in terms of both topological accuracy and the Dice coefficient. Thirdly, we extend the method to 3D volumes and demonstrate its performance on the task of segmenting the placenta from ultrasound data, again showing that incorporating topological priors improves performance on this challenging task. We find that embedding explicit prior knowledge in neural network segmentation tasks is most beneficial when the segmentation task is especially challenging, and that it can be used in either a semi-supervised or post-processing context to extract a useful training gradient from images without pixelwise labels.
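The abstract above specifies the desired topology through Betti numbers (b0 counts connected components, b1 counts holes). As a rough illustration of what such a prior measures, and assuming a plain 2D binary mask rather than the paper's differentiable persistent-homology loss, Betti numbers can be counted with SciPy connected-component labelling and compared against a prior such as (1, 1) for a ring-shaped structure like the myocardium:

```python
import numpy as np
from scipy import ndimage

def betti_numbers_2d(mask):
    """Betti numbers of a 2D binary mask.
    b0: connected components of the foreground.
    b1: holes, i.e. background components enclosed by foreground
        (background components in a padded image, minus the one
        touching the border)."""
    fg = np.asarray(mask).astype(bool)
    _, b0 = ndimage.label(fg)
    padded_bg = np.pad(~fg, 1, constant_values=True)
    _, n_bg = ndimage.label(padded_bg)
    return b0, n_bg - 1

def topology_penalty(mask, prior=(1, 1)):
    """Count of topological defects relative to a prior,
    e.g. (1, 1) for a single ring."""
    b0, b1 = betti_numbers_2d(mask)
    return abs(b0 - prior[0]) + abs(b1 - prior[1])

# A 5x5 square with one interior hole: one component, one hole.
ring = np.zeros((9, 9), dtype=int)
ring[2:7, 2:7] = 1
ring[4, 4] = 0
```

Unlike this hard integer count, the persistent-homology formulation described in the abstract yields gradients with respect to the network's soft output, which is what makes it usable as a training loss.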
Insights into potential differences among the bony labyrinths of Plio-Pleistocene hominins may inform their evolutionary histories and sensory ecologies. We use four recently discovered bony labyrinths from the site of Kromdraai to significantly expand the sample for Paranthropus robustus. Diffeomorphometry, which provides detailed information about cochlear shape, reveals size-independent differences in cochlear shape between P. robustus and Australopithecus africanus that exceed those among modern humans and the African apes. The cochlea of P. robustus is distinctive and relatively invariant, whereas cochlear shape in A. africanus is more variable, resembles that of early Homo, and shows a degree of morphological polymorphism comparable to that evinced by modern species. The curvature of the P. robustus cochlea is uniquely derived and is consistent with enhanced sensitivity to low-frequency sounds. Combined with evidence for selection, our findings suggest that sound perception shaped distinct ecological adaptations among southern African early hominins.
Automatically detecting acoustic shadows is of great importance for automatic 2D ultrasound analysis, ranging from anatomy segmentation to landmark detection. However, variation in shape and similarity in intensity to other structures in the image make shadow detection a very challenging task. In this paper, we propose an automatic shadow detection method to generate a pixel-wise shadow confidence map from weakly labelled annotations. Our method jointly uses (1) a feature attribution map from a Wasserstein GAN and (2) an intensity saliency map from a graph cut model. The proposed method accurately highlights the shadow areas in two 2D ultrasound datasets comprising standard view planes as acquired during fetal screening. Moreover, the proposed method outperforms the state-of-the-art quantitatively and improves failure cases for automatic biometric measurement.
Investigating the human brain in utero is important for researchers and clinicians seeking to understand early neurodevelopmental processes. With the advent of fast magnetic resonance imaging (MRI) techniques and the development of motion correction algorithms to obtain high-quality 3D images of the fetal brain, it is now possible to gain more insight into the ongoing maturational processes in the brain. In this article, we present a review of the major building blocks of the pipeline toward performing quantitative analysis of in vivo MRI of the developing brain and its potential applications in clinical settings. The review focuses on T1- and T2-weighted modalities, and covers state-of-the-art methodologies involved in each step of the pipeline, in particular, 3D volume reconstruction, spatio-temporal modeling of the developing brain, segmentation, quantification techniques, and clinical applications. Hum Brain Mapp 38:2772-2787, 2017. © 2017 Wiley Periodicals, Inc.
Morphometric assessments of the dentition have played significant roles in hypotheses relating to taxonomic diversity among extinct hominins. In this regard, emphasis has been placed on the statistical appraisal of intraspecific variation to identify morphological criteria that convey maximum discriminatory power. Three-dimensional geometric morphometric (3D GM) approaches that utilize landmarks and semi-landmarks to quantify shape variation have enjoyed increasingly popular use over the past twenty-five years in assessments of the outer enamel surface (OES) and enamel-dentine junction (EDJ) of fossil molars. Recently developed diffeomorphic surface matching (DSM) methods that model the deformation between shapes have drastically reduced, if not altogether eliminated, potential methodological inconsistencies associated with the a priori identification of landmarks and delineation of semi-landmarks. As such, DSM has the potential to better capture the geometric details that describe tooth shape by accounting for both homologous and non-homologous (i.e., discrete) features, and permitting the statistical determination of geometric correspondence. We compare the discriminatory power of 3D GM and DSM in the evaluation of the OES and EDJ of mandibular permanent molars attributed to Australopithecus africanus, Paranthropus robustus and early Homo sp. from the sites of Sterkfontein and Swartkrans. For all three molars, classification and clustering scores demonstrate that DSM performs better at separating the A. africanus and P. robustus samples than does 3D GM. The EDJ provided the best results. Paranthropus robustus evinces greater morphological variability than A. africanus. The DSM assessment of the early Homo molar from Swartkrans reveals its distinctiveness from either australopith sample, and the "unknown" specimen from Sterkfontein (Stw 151) is notably more similar to Homo than to A. africanus.
Detecting acoustic shadows in ultrasound images is important in many clinical and engineering applications. Real-time feedback of acoustic shadows can guide sonographers to a standardized diagnostic viewing plane with minimal artifacts and can provide additional information for other automatic image analysis algorithms. However, automatically detecting shadow regions using learning-based algorithms is challenging because pixel-wise ground truth annotation of acoustic shadows is subjective and time consuming. In this paper, we propose a weakly supervised method for automatic confidence estimation of acoustic shadow regions. Our method is able to generate a dense shadow-focused confidence map. In our method, a shadow-seg module is built to learn general shadow features for shadow segmentation, based on global image-level annotations as well as a small number of coarse pixel-wise shadow annotations. A transfer function is introduced to extend the obtained binary shadow segmentation to a reference confidence map. Additionally, a confidence estimation network is proposed to learn the mapping between input images and the reference confidence maps. This network is able to predict shadow confidence maps directly from input images during inference. We use evaluation metrics such as the Dice coefficient and inter-class correlation to verify the effectiveness of our method. Our method is more consistent than human annotation, and outperforms the state-of-the-art quantitatively in shadow segmentation and qualitatively in confidence estimation of shadow regions. We further demonstrate the applicability of our method by integrating shadow confidence maps into tasks such as ultrasound image classification, multi-view image fusion and automated biometric measurements.
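The abstract mentions a transfer function that extends a binary shadow segmentation to a reference confidence map. The paper defines its own transfer function; purely as a hedged illustration of the general idea, a simple distance-based soft map (with a hypothetical decay length) could look like:

```python
import numpy as np
from scipy import ndimage

def binary_to_confidence(seg, decay=3.0):
    """Extend a binary segmentation to a soft confidence map:
    confidence 1 inside the segmented region, decaying
    exponentially with Euclidean distance outside it.
    The exponential form and `decay` length are illustrative
    assumptions, not the paper's transfer function."""
    outside = ~np.asarray(seg).astype(bool)
    dist = ndimage.distance_transform_edt(outside)
    return np.exp(-dist / decay)

seg = np.zeros((5, 5))
seg[2, 2] = 1
conf = binary_to_confidence(seg)
```

Such a reference map gives the confidence estimation network a dense regression target even though the underlying annotations are binary and coarse.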
The fusion and combination of images from multiple modalities is important in many applications. Typically, this process consists of the alignment of the images and the combination of the complementary information. In this work, we focus on the former part and propose a multimodal image distance measure based on the commutativity of graph Laplacians. The eigenvectors of the image graph Laplacian, and thus the graph Laplacian itself, capture the intrinsic structure of the image's modality. Using Laplacian commutativity as a criterion of image structure preservation, we adapt the problem of finding the closest commuting operators to multimodal image registration. Hence, by using the relation between simultaneous diagonalization and commutativity of matrices, we compare multimodal image structures by means of the commutativity of their graph Laplacians. In this way, we avoid spectrum reordering schemes or additional manifold alignment steps which are necessary to ensure the comparability of eigenspaces across modalities. We show on synthetic and real datasets that this approach is applicable to dense rigid and non-rigid image registration. Results demonstrate that the proposed measure is able to deal with very challenging multimodal datasets and compares favorably to normalized mutual information, a de facto similarity measure for multimodal image registration.
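The abstract compares multimodal image structures through the commutativity of their graph Laplacians. A minimal sketch of that idea, assuming small images, a 4-neighbour pixel graph, and a Gaussian intensity-similarity weight (the graph construction and the bandwidth sigma are illustrative assumptions, not the paper's exact formulation):

```python
import numpy as np

def image_graph_laplacian(img, sigma=0.1):
    """Dense graph Laplacian of a small image: nodes are pixels,
    4-neighbour edges weighted by intensity similarity."""
    h, w = img.shape
    n = h * w
    W = np.zeros((n, n))
    for i in range(h):
        for j in range(w):
            a = i * w + j
            for di, dj in ((0, 1), (1, 0)):
                ii, jj = i + di, j + dj
                if ii < h and jj < w:
                    b = ii * w + jj
                    wt = np.exp(-(img[i, j] - img[ii, jj]) ** 2
                                / (2 * sigma ** 2))
                    W[a, b] = W[b, a] = wt
    return np.diag(W.sum(axis=1)) - W

def commutator_distance(img1, img2):
    """Frobenius norm of the commutator [L1, L2]; zero when the
    Laplacians commute, i.e. share a common eigenbasis."""
    L1 = image_graph_laplacian(img1)
    L2 = image_graph_laplacian(img2)
    return np.linalg.norm(L1 @ L2 - L2 @ L1, ord='fro')
```

An image compared with itself yields zero, since any matrix commutes with itself; structurally dissimilar images yield larger commutator norms. Crucially, this criterion compares operators directly, so no spectrum reordering or eigenspace alignment across modalities is needed; the paper embeds the criterion in a registration objective rather than using it as a standalone distance.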