In this paper, we propose a novel deep learning framework for anatomy segmentation and automatic landmarking. Specifically, we focus on the challenging problem of mandible segmentation from cone-beam computed tomography (CBCT) scans and identification of 9 anatomical landmarks of the mandible in the geodesic space. The overall approach employs three inter-related steps. In step 1, we propose a deep neural network architecture with carefully designed regularization and network hyper-parameters to perform image segmentation without the need for data augmentation or complex post-processing refinement. In step 2, we formulate the landmark localization problem directly on the geodesic space for sparsely-spaced anatomical landmarks. In step 3, we propose to use a long short-term memory (LSTM) network to identify closely-spaced landmarks, which are difficult to localize with other standard detection networks. The proposed fully automated method showed superior efficacy compared to state-of-the-art mandible segmentation and landmarking approaches on craniofacial anomalies and diseased states. We used a very challenging CBCT dataset of 50 patients with a high degree of craniomaxillofacial (CMF) variability that is realistic in clinical practice. Complementary to the quantitative analysis, qualitative visual inspection was conducted on distinct CBCT scans from 250 patients with high anatomical variability. We have also shown the feasibility of the proposed work on an independent dataset from the MICCAI Head-Neck Challenge (2015), achieving state-of-the-art performance. Lastly, we present an in-depth analysis of the proposed deep networks with respect to the choice of hyper-parameters such as pooling and activation functions.
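The abstract does not specify the LSTM variant used in step 3, but the standard LSTM recurrence it refers to can be sketched in NumPy. The dimensions, weights, and the sequential-landmark usage below are illustrative assumptions, not the authors' actual architecture:

```python
import numpy as np

def lstm_step(x, h_prev, c_prev, W, U, b):
    """One standard LSTM cell step.
    W: (4H, D) input weights, U: (4H, H) recurrent weights, b: (4H,) bias.
    Gate order in the stacked weights: input, forget, output, candidate."""
    H = h_prev.shape[0]
    z = W @ x + U @ h_prev + b
    i = 1 / (1 + np.exp(-z[:H]))        # input gate
    f = 1 / (1 + np.exp(-z[H:2*H]))     # forget gate
    o = 1 / (1 + np.exp(-z[2*H:3*H]))   # output gate
    g = np.tanh(z[3*H:])                # candidate cell state
    c = f * c_prev + i * g              # new cell state
    h = o * np.tanh(c)                  # new hidden state
    return h, c

# Hypothetical usage: process a sequence of per-landmark feature vectors,
# as one might for closely-spaced landmarks (dimensions are arbitrary).
rng = np.random.default_rng(0)
D, H = 16, 32
W = rng.normal(scale=0.1, size=(4 * H, D))
U = rng.normal(scale=0.1, size=(4 * H, H))
b = np.zeros(4 * H)
h, c = np.zeros(H), np.zeros(H)
for _ in range(5):                      # five closely-spaced landmarks in sequence
    h, c = lstm_step(rng.normal(size=D), h, c, W, U, b)
```

The hidden state `h` carries context from earlier landmarks in the sequence, which is what makes a recurrent model suited to landmarks that are individually ambiguous but ordered.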
We present an experimental demonstration of a subwavelength diffraction grating performing first-order differentiation of the transverse profile of an incident optical beam with respect to a spatial variable. The experimental results are in good agreement with the presented analytical model, which suggests that the differentiation is performed in transmission at oblique incidence and is associated with the guided-mode resonance of the grating. According to this model, the transfer function of the grating in the vicinity of the resonance is close to the transfer function of an exact differentiator. We confirm this by estimating the transfer function of the fabricated structure on the basis of the measured profiles of the incident and transmitted beams. The considered structure may find application in the design of new photonic devices for beam shaping, optical information processing, and analog optical computing.
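An exact first-order differentiator has transfer function H(k_x) ∝ i·k_x, so applying it to a beam's spatial spectrum yields the derivative of the transverse profile. The following sketch illustrates this principle numerically on a Gaussian profile (the grid size and beam shape are arbitrary choices, not the experimental parameters):

```python
import numpy as np

# Spectral differentiation: multiply the spatial spectrum by H(kx) = i*kx.
N, L = 1024, 40.0                          # samples, window size (arbitrary units)
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
kx = 2 * np.pi * np.fft.fftfreq(N, d=L / N)

profile = np.exp(-x**2)                    # incident Gaussian beam profile
spectrum = np.fft.fft(profile)
out = np.fft.ifft(1j * kx * spectrum).real # transmitted profile under H(kx) = i*kx

expected = -2 * x * np.exp(-x**2)          # analytic derivative of the Gaussian
```

The output profile matches the analytic derivative, which is the behavior the fabricated grating approximates near its guided-mode resonance.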
Mandible bone segmentation from computed tomography (CT) scans is challenging due to the mandible's structural irregularities, complex shape patterns, and lack of contrast in the joints. Furthermore, the connections of the teeth to the mandible and of the mandible to the rest of the skull make it extremely difficult to identify the mandible boundary automatically. This study addresses these challenges by proposing a novel framework in which we define segmentation as two complementary tasks: recognition and delineation. For recognition, we use random forest regression to localize the mandible in 3D. For delineation, we propose to use a 3D gradient-based fuzzy connectedness (FC) image segmentation algorithm, operating on the recognized mandible sub-volume. Despite heavy CT artifacts and dental fillings, present in half of the CT scans in our experiments, we achieved highly accurate detection and delineation results. Specifically, we found a detection accuracy above 96% (measured by intersection over union (IoU)), a delineation accuracy of 91% (measured by Dice similarity coefficient), and a shape mismatch of less than 1 mm (measured by Hausdorff distance).
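The two overlap metrics reported above are standard and easy to state concretely. A minimal sketch, using a toy 2D example (the paper evaluates on 3D CT volumes):

```python
import numpy as np

def iou(a, b):
    """Intersection over union of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

def dice(a, b):
    """Dice similarity coefficient of two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2 * (a & b).sum() / (a.sum() + b.sum())

# Toy example: two 6x6 squares offset by one voxel in each axis.
gt = np.zeros((10, 10), bool); gt[2:8, 2:8] = True      # 36 voxels
pred = np.zeros((10, 10), bool); pred[3:9, 3:9] = True  # 36 voxels, shifted
# intersection = 5*5 = 25, union = 36 + 36 - 25 = 47
```

Here `iou(gt, pred)` is 25/47 ≈ 0.532 while `dice(gt, pred)` is 50/72 ≈ 0.694; Dice is always at least as large as IoU for the same pair of masks, which is why the two accuracy figures in the abstract are not directly comparable.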
As cone-beam computed tomography (CBCT) scans become increasingly common, it is vital to have reliable 3-dimensional (3D) landmarks for quantitative analysis of craniofacial skeletal morphology. While some studies have developed and used 3D landmarks, these landmark sets are generally small and derived primarily from previous 2-dimensional (2D) cephalometric landmarks. These derived landmarks lack information in parts of the skull such as the cranial base, which is an important feature for cranial growth and development. The authors see a real need for development and validation of 3D landmarks, particularly bilateral landmarks, across the skull for improved cephalometric analysis. The primary objective of this study is to develop and validate a set of 61 3D anatomical landmarks on the face, cranial base, mandible, and teeth for use in clinical and research studies involving CBCT imaging. Each landmark was placed 3 times by 3 separate trained observers on a set of 10 anonymized CBCT patient scans. Intra-rater and inter-rater estimates of consistency and agreement were calculated using the intraclass correlation coefficient (ICC). Measurement error was calculated per landmark and per X, Y, and Z landmark coordinate. ICC estimates were high within raters, indicating high consistency, and high among raters, indicating good agreement across raters. Overall measurement error for each landmark and each X, Y, and Z coordinate was low. Our results confirm the accuracy of novel 3D landmarks, including several on the cranial base, that will serve researchers and clinicians in future studies involving 3D CBCT imaging and craniofacial development.
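The abstract does not state which ICC form was used; one common choice for inter-rater agreement is ICC(2,1) (two-way random effects, absolute agreement, single rater), sketched below under that assumption. The input is a subjects-by-raters matrix of a single coordinate's measurements:

```python
import numpy as np

def icc2_1(Y):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    Y is an (n subjects x k raters) matrix of measurements."""
    n, k = Y.shape
    grand = Y.mean()
    row_means = Y.mean(axis=1)                     # per-subject means
    col_means = Y.mean(axis=0)                     # per-rater means
    ssr = k * ((row_means - grand) ** 2).sum()     # between-subject SS
    ssc = n * ((col_means - grand) ** 2).sum()     # between-rater SS
    sst = ((Y - grand) ** 2).sum()
    sse = sst - ssr - ssc                          # residual SS
    msr = ssr / (n - 1)
    msc = ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Perfect agreement between two raters yields ICC = 1.
Y_perfect = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
# Partial disagreement lowers the estimate.
Y_noisy = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0]])
```

`icc2_1(Y_perfect)` equals 1, and `icc2_1(Y_noisy)` works out to 0.6; in practice one would compute this per landmark and per coordinate, as the study's per-X/Y/Z reporting suggests.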