Although visual examination (VE) is the preferred method for caries detection, the analysis of intraoral digital photographs in machine-readable form can be considered equivalent to VE. While photographic images are rarely used in clinical practice for diagnostic purposes, they are the fundamental requirement for automated image analysis with artificial intelligence (AI) methods. Considering that AI has not yet been used for automatic caries detection on intraoral images, this diagnostic study aimed to develop a deep learning approach with convolutional neural networks (CNNs) for caries detection and categorization (test method) and to compare its diagnostic performance with the expert standard. The study material consisted of 2,417 anonymized photographs of permanent teeth, comprising 1,317 occlusal and 1,100 smooth surfaces. All images were assigned to one of the following categories: caries-free, noncavitated caries lesion, or caries-related cavitation. The expert diagnoses served as the reference standard for cyclic training and repeated evaluation of the AI methods. The CNN was trained using image augmentation and transfer learning. Before training, the entire image set was divided into a training set and a test set. Validation was conducted by selecting 25%, 50%, 75%, and 100% of the available images from the training set. The statistical analysis included calculation of the sensitivity (SE), specificity (SP), and area under the receiver operating characteristic (ROC) curve (AUC). The CNN correctly detected caries in 92.5% of cases when all test images were considered (SE 89.6%, SP 94.3%, AUC 0.964). When the threshold of caries-related cavitation was chosen, 93.3% of all tooth surfaces were correctly classified (SE 95.7%, SP 81.5%, AUC 0.955). It can be concluded that more than 90% agreement in caries detection was achievable with the AI method on standardized, single-tooth photographs.
Nevertheless, the current approach needs further improvement.
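The evaluation metrics named above can be sketched for the binary caries/no-caries decision. This is a minimal illustration with hypothetical counts and per-image scores (not the study's data) of how sensitivity, specificity, accuracy, and a rank-based AUC are computed.

```python
# Sensitivity (SE), specificity (SP), and accuracy from a binary confusion
# matrix, plus AUC estimated as the Mann-Whitney rank statistic over
# per-image caries scores. All counts and scores below are hypothetical.

def diagnostic_metrics(tp, fn, tn, fp):
    """Return (sensitivity, specificity, accuracy) from confusion-matrix counts."""
    se = tp / (tp + fn)                   # detected fraction of carious surfaces
    sp = tn / (tn + fp)                   # detected fraction of caries-free surfaces
    acc = (tp + tn) / (tp + fn + tn + fp)
    return se, sp, acc

def auc_rank(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg); ties count as 0.5 (Mann-Whitney U)."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

se, sp, acc = diagnostic_metrics(tp=43, fn=5, tn=50, fp=3)
auc = auc_rank([0.9, 0.6, 0.7], [0.6, 0.4, 0.2])
```

The rank formulation avoids building an explicit ROC curve: for a binary reference standard, the area under the curve equals the probability that a randomly chosen carious surface receives a higher model score than a randomly chosen caries-free one.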
Current concepts, development, and testing applications in production concerning Cyber-Physical Systems (CPS), Industry 4.0 (I40), and the Internet of Things (IoT) mainly address fully autonomous systems, fostered by an increase in available technologies for distributed decision-making, sensors, and actuators in robotics systems. This also applies to production logistics settings with a multitude of transport tasks, e.g., between warehousing or material supply stations and production locations within larger production sites, as for example in the automotive industry. In most cases, mixed environments in which automated systems and humans collaborate (e.g., cobots) are not at the center of analysis and development endeavors, although workers' adoption and acceptance of new technologies are of crucial relevance. From an interdisciplinary research perspective, this constitutes an important research gap, as the future challenges for successful automated systems will rely mainly on human-computer interaction (HCI) in connection with efficient collaboration between motivated workers, automated robotics, and transportation systems. We develop an HCI efficiency description in production logistics based on an interdisciplinary analysis consisting of three interdependent parts, including (i) a production logistics literature review and process study and (ii) a computer science literature review and simulation study of an existing autonomous traffic control algorithm applicable to production logistics.
Purpose: This paper addresses three approaches intended to analyze the crucial role of human interaction in automated environments in production logistics and Industry 4.0 settings. The methods stem from different disciplines, as successful automation concepts also have to consider computer science, economics, and work science perspectives.
Contribution: We answer the question of human intuition and its development within a digitalized production logistics setting, as well as of automated algorithmic reaction to human actions, from an interdisciplinary perspective. So far, existing research contributions have mainly focused on technical aspects and automation concepts, treating them solely as computer science optimization problems. However, feasible and sustainable concepts for automated production, e.g., within production transport, will only succeed if the human factor is included, since for a long time to come production environments will be mixed settings of robotics and human workers. Thus, we develop an HCI efficiency description in production logistics for future research and business applications.
The aim of the present study was to investigate the diagnostic performance of a trained convolutional neural network (CNN) for detecting and categorizing fissure sealants on intraoral photographs, using the expert standard as the reference. An image set consisting of 2352 digital photographs of permanent posterior teeth (461 unsealed tooth surfaces/1891 sealed surfaces) was divided into a training set (n = 1881; 364 unsealed/1517 sealed) and a test set (n = 471; 97 unsealed/374 sealed). All images were scored according to the following categories: unsealed molar, intact sealant, sufficient sealant, or insufficient sealant. Expert diagnoses served as the reference standard for cyclic training and repeated evaluation of the CNN (ResNeXt-101-32x8d), which was trained using image augmentation and transfer learning. A statistical analysis was performed, including the calculation of contingency tables and areas under the receiver operating characteristic curve (AUC). The results showed that the CNN accurately detected sealants in 98.7% of all test images, corresponding to an AUC of 0.996. The diagnostic accuracy and AUC were 89.6% and 0.951, respectively, for intact sealant; 83.2% and 0.888, respectively, for sufficient sealant; and 92.4% and 0.942, respectively, for insufficient sealant. On the basis of the documented results, it was concluded that good agreement with the reference standard could be achieved for automated sealant detection using artificial intelligence methods. Nevertheless, further research is necessary to improve the model performance.
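The training/test division described above can be sketched as a stratified split that preserves each category's proportion in both subsets. The image identifiers, labels, and the 20% test fraction below are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal stratified train/test split: shuffle within each label group,
# then move a fixed fraction of each group into the test set so that
# class proportions match between subsets.
import random
from collections import defaultdict

def stratified_split(samples, test_fraction=0.2, seed=42):
    """samples: list of (image_id, label) pairs. Returns (train, test) lists."""
    by_label = defaultdict(list)
    for item in samples:
        by_label[item[1]].append(item)
    rng = random.Random(seed)           # fixed seed for a reproducible split
    train, test = [], []
    for label, items in by_label.items():
        rng.shuffle(items)
        n_test = round(len(items) * test_fraction)
        test.extend(items[:n_test])
        train.extend(items[n_test:])
    return train, test

# Hypothetical example: 100 "sealed" and 25 "unsealed" images
samples = [(f"img_{i}", "sealed") for i in range(100)] + \
          [(f"img_{i}", "unsealed") for i in range(100, 125)]
train, test = stratified_split(samples)
```

Stratification matters for imbalanced sets like the one above (1891 sealed vs. 461 unsealed surfaces): a purely random split could leave the minority class underrepresented in the test set and bias the evaluation.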