Diffuse correlation spectroscopy (DCS), combined with time-resolved reflectance spectroscopy (TRS) or frequency-domain spectroscopy, aims at path-length-resolved (i.e., depth-resolved), non-invasive, and simultaneous assessment of tissue composition and blood flow. However, while TRS provides path-length-resolved data, standard DCS does not. Recently, a time-domain DCS experiment demonstrated path-length-resolved measurements with improved quantification over classical DCS, but was limited to phantom and small-animal studies. Here, we demonstrate time-domain DCS for studies on the adult forehead and arm. We achieve path-length-resolved DCS by means of an actively mode-locked Ti:Sapphire laser that delivers high-coherence pulses, thus enabling an adequate signal-to-noise ratio at relatively fast (~1 s) temporal resolution. This work paves the way for the translation of this approach to practical use.
Background: The most tedious and time-consuming task in medical additive manufacturing (AM) is image segmentation. The aim of the present study was to develop and train a convolutional neural network (CNN) for bone segmentation in computed tomography (CT) scans. Method: The CNN was trained with CT scans acquired using six different scanners. Standard tessellation language (STL) models of 20 patients who had previously undergone craniotomy and cranioplasty using additively manufactured skull implants served as "gold standard" models during CNN training. The CNN segmented all patient CT scans using a leave-2-out scheme. All segmented CT scans were converted into STL models and geometrically compared with the gold standard STL models. Results: The CT scans segmented using the CNN demonstrated a large overlap with the gold standard segmentation and resulted in a mean Dice similarity coefficient of 0.92 ± 0.04. The CNN-based STL models demonstrated mean surface deviations ranging between −0.19 ± 0.86 mm and 1.22 ± 1.75 mm when compared with the gold standard STL models. No major differences were observed between the mean deviations of the CNN-based STL models acquired using the six different CT scanners. Conclusions: The fully automated CNN was able to accurately segment the skull. CNNs thus offer the opportunity to remove the current prohibitive barriers of time and effort during CT image segmentation, making patient-specific AM constructs more accessible.
Accurate segmentation of the jaw (i.e., mandible and maxilla) and the teeth in cone beam computed tomography (CBCT) scans is essential for orthodontic diagnosis and treatment planning. Although various (semi)automated methods have been proposed to segment the jaw or the teeth, there is still a lack of fully automated segmentation methods that can simultaneously segment both anatomic structures in CBCT scans (i.e., multiclass segmentation). In this study, we aimed to train and validate a mixed-scale dense (MS-D) convolutional neural network for multiclass segmentation of the jaw, the teeth, and the background in CBCT scans. Thirty CBCT scans were obtained from patients who had undergone orthodontic treatment. Gold standard segmentation labels were manually created by 4 dentists. As a benchmark, we also evaluated MS-D networks that segmented the jaw or the teeth (i.e., binary segmentation). All segmented CBCT scans were converted to virtual 3-dimensional (3D) models. The segmentation performance of all trained MS-D networks was assessed by the Dice similarity coefficient and surface deviation. The CBCT scans segmented by the MS-D network demonstrated a large overlap with the gold standard segmentations (Dice similarity coefficient: 0.934 ± 0.019, jaw; 0.945 ± 0.021, teeth). The MS-D network–based 3D models of the jaw and the teeth showed minor surface deviations when compared with the corresponding gold standard 3D models (0.390 ± 0.093 mm, jaw; 0.204 ± 0.061 mm, teeth). The MS-D network took approximately 25 s to segment 1 CBCT scan, whereas manual segmentation took about 5 h. This study showed that multiclass segmentation of jaw and teeth was accurate and its performance was comparable to binary segmentation. The MS-D network trained for multiclass segmentation would therefore make patient-specific orthodontic treatment more feasible by strongly reducing the time required to segment multiple anatomic structures in CBCT scans.
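The Dice similarity coefficient used as the overlap metric in the segmentation studies above can be computed directly from two binary masks. A minimal sketch in Python (the function name and toy masks are illustrative, not taken from the studies):

```python
import numpy as np

def dice_coefficient(pred, gold):
    """Dice similarity coefficient between two binary segmentation masks.

    1.0 means perfect overlap; 0.0 means no overlap at all.
    """
    pred = np.asarray(pred, dtype=bool)
    gold = np.asarray(gold, dtype=bool)
    intersection = np.logical_and(pred, gold).sum()
    denom = pred.sum() + gold.sum()
    if denom == 0:
        return 1.0  # two empty masks: treat as perfect agreement
    return 2.0 * intersection / denom

# Toy 2D example: two partially overlapping square "segmentations"
a = np.zeros((10, 10), dtype=bool); a[2:8, 2:8] = True    # 36 voxels
b = np.zeros((10, 10), dtype=bool); b[4:10, 4:10] = True  # 36 voxels
print(round(dice_coefficient(a, b), 3))  # overlap is 4x4 = 16 -> 32/72 = 0.444
```

The same formula extends unchanged to 3D volumes; for the multiclass case reported above, the coefficient is simply computed per class (jaw, teeth) against the corresponding gold standard mask.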
Purpose: In order to attain anatomical models, surgical guides and implants for computer-assisted surgery, accurate segmentation of bony structures in cone-beam computed tomography (CBCT) scans is required. However, this image segmentation step is often impeded by metal artifacts. Therefore, this study aimed to develop a mixed-scale dense convolutional neural network (MS-D network) for bone segmentation in CBCT scans affected by metal artifacts. Method: Training data were acquired from 20 dental CBCT scans affected by metal artifacts. An experienced medical engineer segmented the bony structures in all CBCT scans using global thresholding and manually removed all remaining noise and metal artifacts. The resulting gold standard segmentations were used to train an MS-D network comprising 100 convolutional layers using far fewer trainable parameters than alternative convolutional neural network (CNN) architectures. The bone segmentation performance of the MS-D network was evaluated using a leave-2-out scheme and compared with a clinical snake evolution algorithm and two state-of-the-art CNN architectures (U-Net and ResNet). All segmented CBCT scans were subsequently converted into standard tessellation language (STL) models and geometrically compared with the gold standard. Results: CBCT scans segmented using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean Dice similarity coefficients of 0.87 ± 0.06, 0.87 ± 0.07, 0.86 ± 0.05, and 0.78 ± 0.07, respectively. The STL models acquired using the MS-D network, U-Net, ResNet and the snake evolution algorithm demonstrated mean absolute deviations of 0.44 ± 0.13 mm, 0.43 ± 0.16 mm, 0.40 ± 0.12 mm and 0.57 ± 0.22 mm, respectively. In contrast to the MS-D network, the ResNet introduced wave-like artifacts in the STL models, whereas the U-Net incorrectly labeled background voxels as bone around the vertebrae in 4 of the 9 CBCT scans containing vertebrae.
Conclusion: The MS-D network was able to accurately segment bony structures in CBCT scans affected by metal artifacts.
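The MS-D architecture referenced in the abstracts above combines dense connectivity (each layer sees all previous feature maps) with convolution dilations that cycle through a fixed range of scales, which is why it needs far fewer trainable parameters than encoder-decoder CNNs such as U-Net. A minimal NumPy sketch of one forward pass on a single 2D slice (random weights, a dilation cycle of 1–10, and all function names are illustrative assumptions, not the published implementation):

```python
import numpy as np

def dilated_conv3x3(img, kernel, dilation):
    """'Same'-padded 3x3 convolution of a 2D image with integer dilation."""
    h, w = img.shape
    p = dilation
    padded = np.pad(img, p)
    out = np.zeros((h, w))
    for i in range(3):
        for j in range(3):
            di, dj = (i - 1) * dilation, (j - 1) * dilation
            out += kernel[i, j] * padded[p + di:p + di + h, p + dj:p + dj + w]
    return out

def msd_forward(img, depth, rng):
    """Forward pass of a tiny mixed-scale dense network on one 2D slice.

    Each layer convolves *all* previously computed feature maps (dense
    connectivity) with a dilation that cycles through 1..10 (mixed scales);
    the final output is a 1x1-style weighted sum over every feature map.
    """
    features = [img.astype(float)]
    for layer in range(depth):
        dilation = layer % 10 + 1
        z = np.zeros_like(features[0])
        for f in features:  # dense connectivity: reuse every earlier map
            z += dilated_conv3x3(f, rng.normal(size=(3, 3)) * 0.1, dilation)
        features.append(np.maximum(z, 0.0))  # ReLU
    w = rng.normal(size=len(features))
    return sum(wi * f for wi, f in zip(w, features)), len(features)

rng = np.random.default_rng(0)
out, n_maps = msd_forward(np.ones((16, 16)), depth=5, rng=rng)
print(out.shape, n_maps)  # (16, 16) 6
```

Because every layer here adds only one feature map and one small kernel per input map, parameter count grows far more slowly with depth than in architectures that widen and downsample, which matches the parameter-efficiency claim made for the 100-layer MS-D network above.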
Computer-assisted surgery (CAS) allows clinicians to personalize treatments and surgical interventions and has therefore become an increasingly popular treatment modality in maxillofacial surgery. The current maxillofacial CAS workflow consists of three main steps: 1) computed tomography (CT) image reconstruction, 2) bone segmentation, and 3) surgical planning. However, each of these three steps can introduce errors that heavily affect the treatment outcome. As a consequence, tedious and time-consuming manual post-processing is often necessary to ensure that each step is performed adequately. One way to overcome this issue is to develop and implement neural networks (NNs) within the maxillofacial CAS workflow. These learning algorithms can be trained to perform specific tasks without the need for explicitly defined rules. In recent years, a vast number of novel NN approaches have been proposed for a wide variety of applications, making it difficult to keep up with all relevant developments. This study therefore aimed to summarize and review all relevant NN approaches applied to CT image reconstruction, bone segmentation, and surgical planning. After full-text screening, 76 publications were identified: 32 focusing on CT image reconstruction, 33 on bone segmentation, and 11 on surgical planning. Overall, convolutional neural networks were most widely used in the identified studies, although the multilayer perceptron was most commonly applied in surgical planning tasks. Moreover, the drawbacks of current approaches and promising research avenues are discussed.
High cone-angle artifacts (HCAAs) appear frequently in circular cone-beam computed tomography (CBCT) images and can heavily affect diagnosis and treatment planning. To reduce HCAAs in CBCT scans, we propose a novel deep learning approach that reduces the three-dimensional (3D) nature of HCAAs to two-dimensional (2D) problems in an efficient way. Specifically, we exploit the relationship between HCAAs and the rotational scanning geometry by training a convolutional neural network (CNN) using image slices that were radially sampled from CBCT scans. We evaluated this novel approach using a dataset of input CBCT scans affected by HCAAs and high-quality artifact-free target CBCT scans. Two different CNN architectures were employed, namely U-Net and a mixed-scale dense CNN (MS-D Net). The artifact reduction performance of the proposed approach was compared to that of a Cartesian slice-based artifact reduction deep learning approach in which a CNN was trained to remove the HCAAs from Cartesian slices. In addition, all processed CBCT scans were segmented to investigate the impact of HCAAs reduction on the quality of CBCT image segmentation. We demonstrate that the proposed deep learning approach with geometry-aware dimension reduction greatly reduces HCAAs in CBCT scans and outperforms the Cartesian slice-based deep learning approach. Moreover, the proposed artifact reduction approach markedly improves the accuracy of the subsequent segmentation task compared to the Cartesian slice-based workflow.
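The geometry-aware dimension reduction described above trains the CNN on 2D slices sampled radially through the rotation axis of the CBCT scan, rather than on standard Cartesian slices, so that each training image shares the same orientation relative to the scanning geometry that produces the artifacts. A rough sketch of such radial sampling (nearest-neighbour interpolation, the axis convention, and the function name are simplifying assumptions, not the paper's implementation):

```python
import numpy as np

def radial_slices(volume, n_angles):
    """Sample 2D slices through the rotation (z) axis of a CBCT volume.

    volume: (z, y, x) array; the rotation axis is assumed to run along z
    through the in-plane centre. Each returned slice spans the full z
    extent and a diameter of the in-plane circle at angle theta.
    Nearest-neighbour interpolation is used for brevity; a real pipeline
    would interpolate (e.g., linearly) and invert the mapping afterwards.
    """
    nz, ny, nx = volume.shape
    cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
    radius = int(min(cy, cx))
    r = np.arange(-radius, radius + 1)  # signed in-plane coordinate
    slices = []
    for theta in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        ys = np.clip(np.rint(cy + r * np.sin(theta)).astype(int), 0, ny - 1)
        xs = np.clip(np.rint(cx + r * np.cos(theta)).astype(int), 0, nx - 1)
        slices.append(volume[:, ys, xs])  # shape (nz, 2 * radius + 1)
    return np.stack(slices)  # (n_angles, nz, 2 * radius + 1)

vol = np.arange(4 * 5 * 5).reshape(4, 5, 5)
print(radial_slices(vol, 4).shape)  # (4, 4, 5)
```

The key property is that in every radially sampled slice the cone-angle artifact has a consistent, roughly 2D appearance, which lets a 2D CNN (U-Net or MS-D Net, as compared above) learn to suppress it; the processed slices are then resampled back onto the Cartesian grid.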