This paper proposes a convolutional neural network (CNN)-based deep learning model for predicting the difficulty of extracting a mandibular third molar from a panoramic radiographic image. The dataset comprised 1053 mandibular third molars from 600 preoperative panoramic radiographs. Extraction difficulty was rated by the consensus of three human observers using the Pederson difficulty score (PDS). The classification model was a ResNet-34 pretrained on the ImageNet dataset. The correlation between the PDS values predicted by the proposed model and those assigned by the experts was calculated. The prediction accuracies for C1 (depth), C2 (ramal relationship), and C3 (angulation) were 78.91%, 82.03%, and 90.23%, respectively. These results confirm that the proposed CNN-based deep learning model can predict the difficulty of extracting a mandibular third molar from a panoramic radiographic image.
Background: Posteroanterior and lateral cephalograms have been widely used to evaluate the need for orthognathic surgery. The purpose of this study was to develop a deep learning network that automatically predicts the need for orthognathic surgery from cephalograms. Methods: The cephalograms of 840 patients (Class II: 244, Class III: 447, facial asymmetry: 149) presenting with dentofacial dysmorphosis and/or malocclusion were included. Patients who did not require orthognathic surgery were classified as Group I (622 patients—Class II: 221, Class III: 312, facial asymmetry: 89); Group II (218 patients—Class II: 23, Class III: 135, facial asymmetry: 60) comprised the cases requiring surgery. The dataset was divided by random sampling into training, validation, and test sets at a ratio of 4:1:5. PyTorch was used as the framework for the experiment. Results: 394 of the 413 test samples were correctly classified. The accuracy, sensitivity, and specificity were 0.954, 0.844, and 0.993, respectively. Conclusion: A convolutional neural network can determine the need for orthognathic surgery with relatively high accuracy from cephalograms.
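The reported figures follow directly from the test-set confusion matrix. A minimal sketch of how accuracy, sensitivity, and specificity are derived from true/false positive and negative counts (generic definitions, not the study's code; the example counts are round numbers, not the study's actual confusion matrix):

```python
def classification_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity (recall), and specificity from confusion-matrix counts."""
    total = tp + fp + tn + fn
    accuracy = (tp + tn) / total
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

# Hypothetical example: 84 true positives, 2 false positives,
# 300 true negatives, 14 false negatives.
acc, sens, spec = classification_metrics(tp=84, fp=2, tn=300, fn=14)
print(round(acc, 3), round(sens, 3), round(spec, 3))  # 0.96 0.857 0.993
```

Note the asymmetry in the study's results (sensitivity 0.844 vs. specificity 0.993) is the typical signature of an imbalanced test set: the surgical group is much smaller than the non-surgical group.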
Objective: The aim of this study was to evaluate a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened, and 1020 patients were selected. The dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and used as the gold standard. A CNN-based deep-learning model for predicting C-shaped canals was built using Xception. The data were split into training and test sets at a ratio of 80% to 20%. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1%, 92.7%, 97.0%, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging toward the apex to predict C-shaped canals, while the root furcation was predominantly used to predict non-C-shaped canals. Conclusions: The deep-learning system achieved high accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.
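The AUC reported from the ROC analysis can also be computed directly, without plotting the curve, via the rank-based (Mann–Whitney) formulation: AUC equals the probability that a randomly chosen positive case (here, a C-shaped canal) receives a higher model score than a randomly chosen negative case. A minimal sketch with toy scores (generic, not the study's code):

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked correctly;
    ties count as half a correct pair."""
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy example: three of the four (positive, negative) pairs are ordered correctly.
print(auc_from_scores([0.9, 0.3], [0.5, 0.1]))  # 0.75

# Perfect separation gives AUC = 1.0.
print(auc_from_scores([0.9, 0.8], [0.4, 0.2]))  # 1.0
```

The O(n·m) double loop is fine for a 20% test split of this dataset; for large score sets a rank-sum implementation is preferred.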
Facial photographs are often used in the diagnostic process for orthognathic surgery. The aim of this study was to determine whether convolutional neural networks (CNNs) can judge soft tissue profiles requiring orthognathic surgery from facial photographs alone. A total of 822 subjects with dentofacial dysmorphosis and/or malocclusion were included. Frontal and right lateral facial photographs were taken of all subjects. Subjects who did not need orthognathic surgery were classified as Group I (411 subjects); Group II (411 subjects) comprised the cases requiring surgery. A VGG19 CNN was used for machine learning. 366 of the 410 test data were correctly classified, yielding 89.3% accuracy. The accuracy, precision, recall, and F1 score were 0.893, 0.912, 0.867, and 0.889, respectively. These results show that CNNs can judge soft tissue profiles requiring orthognathic surgery relatively accurately from photographs alone.
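The four reported figures are internally consistent: F1 is the harmonic mean of precision and recall, so it can be recomputed from the other two as a quick sanity check (generic formula, not the study's code):

```python
def f1_score(precision, recall):
    """F1 is the harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

# With the study's reported precision (0.912) and recall (0.867),
# F1 comes out at the reported 0.889.
print(round(f1_score(0.912, 0.867), 3))  # 0.889
```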
Purpose: This study proposes a new ball-type phantom for evaluating the image layer of panoramic radiography. Materials and Methods: The arch shape of an acrylic resin phantom was derived from average data on the lower dental arch of Korean adult males. Metal balls 2 mm in diameter were placed along the center line of the phantom at 4-mm mesiodistal intervals. Additional metal balls were placed along 22 arch-shaped lines running parallel to the center line at 2-mm buccolingual intervals. The heights of the balls in the horizontal plane were spaced by 2.5 mm, so the balls appeared oblique when viewed from the side. The resulting phantom was named the Panorama phantom. The distortion rate of each ball in the acquired image was measured by automatically calculating the difference between its vertical and horizontal lengths using MATLAB®. Image layer boundaries were obtained by applying various distortion-rate thresholds. Results: Most areas containing metal balls (91.5%) were included in the image layer at a 50% distortion-rate threshold. When a 5% threshold was applied, the image layer formed a narrow buccolingual band along the arch-shaped center line, but it was located medially in the temporomandibular joint region. Conclusion: The Panorama phantom can be used to evaluate the image layer of panoramic radiography across all mesiodistal areas with a large buccolingual width.