Recently, dental X-ray images have been used in various applications, particularly in the forensic field. Researchers focus on separating the individual teeth in an image to obtain their features, which can then serve as a key for identification. In this paper, edge detection of the teeth of interest is proposed using a three-stage MATLAB algorithm based on several methods, namely CLAHE, Canny, Otsu's thresholding, and 8-connectivity. In addition, the proposed algorithm extracts the features of the investigated teeth and exports them to a file. These features, standard deviation (STD), Euler number, and area, are extracted from bite-wing images. The stages of the proposed algorithm are image segmentation, classification, and feature extraction. Notably, missing teeth are accounted for when they occur: a missing tooth is treated as a separate object. This overcomes the problem of a tooth going missing after the original teeth have been registered in the database used for identification. The obtained results show that the proposed algorithm clearly outperforms existing approaches in terms of edge detection and feature extraction. Images with missing teeth were also tested, and the results demonstrate successful detection and feature extraction for such teeth. The proposed system was implemented and tested in the MATLAB environment on a personal computer with a Core(TM) i7 processor and 6 GB of RAM running 64-bit Windows 10.
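The segmentation stage described above can be illustrated with a minimal Python sketch. This is not the authors' MATLAB code: CLAHE, Canny, and the Euler-number feature are omitted, and only Otsu's thresholding, 8-connectivity labeling, and two of the exported features (area and intensity standard deviation) are shown, on an assumed 8-bit grayscale image stored as nested lists.

```python
from collections import deque

def otsu_threshold(pixels):
    """Return the gray level maximizing between-class variance (Otsu's method)."""
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var, w_bg, sum_bg = 0, -1.0, 0, 0.0
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def label_8(binary):
    """Group foreground pixels into objects using 8-connectivity (BFS flood fill)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] and labels[y][x] == 0:
                n += 1
                labels[y][x] = n
                q = deque([(y, x)])
                while q:
                    cy, cx = q.popleft()
                    for dy in (-1, 0, 1):      # all 8 neighbors
                        for dx in (-1, 0, 1):
                            ny, nx = cy + dy, cx + dx
                            if (0 <= ny < h and 0 <= nx < w
                                    and binary[ny][nx] and labels[ny][nx] == 0):
                                labels[ny][nx] = n
                                q.append((ny, nx))
    return labels, n

def region_features(img, labels, n):
    """Per-object area and intensity standard deviation, two of the exported features."""
    feats = []
    for k in range(1, n + 1):
        vals = [img[y][x] for y in range(len(img))
                for x in range(len(img[0])) if labels[y][x] == k]
        area = len(vals)
        mean = sum(vals) / area
        std = (sum((v - mean) ** 2 for v in vals) / area) ** 0.5
        feats.append({"area": area, "std": std})
    return feats
```

A missing tooth would simply appear as its own labeled object in `labels`, matching the paper's treatment of missing teeth as separate objects.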
Many devices, users, and applications stream irregular amounts of varied data every second. This rapid generation continues at an enormous rate, producing the big data that increase the need for solutions, despite resource constraints, to analyze and manipulate the data. Current methods allocate cloud resources according to the characteristics of the data, and resource allocation requires a comprehensive view of the workload requirements. However, the characteristics of big data streams are uncertain due to the random nature of data generation, so choosing and allocating the right resources for such a stream is challenging. Given the variety of big data streams, their stochastic nature leads to unpredictable requirements and specifications. The critical issue is forecasting the workload to avoid both over-provisioning and under-provisioning of resources. Such forecasting needs an adequate dataset describing the history logs of the incoming workload, and releasing such a dataset quickly raises the chance of deploying the forecast at the right time. This paper addresses this issue with a novel strategy, named LSDStrategy, that analyzes the received multimedia stream based on its binary content using machine learning techniques with artificial and real datasets. LSDStrategy uses a voting-based evaluation to select the optimal classifier, trading off accuracy against prediction time. The classifiers built and tested include Decision Tree (DT), K-Nearest Neighbor (K-NN), and Random Forest (RF) over multiple content-based features. Experiments evaluated the performance of the adopted models and the selected features. According to the experimental analysis, the DT approach provides consistent performance on both the artificial and real-world datasets, with accuracies of 85% and 81.3%, respectively. We deploy and evaluate the efficiency of LSDStrategy on a server of ordinary specifications through a set of experiments using a synthetic stream. The experiments demonstrate the agility and adaptivity of LSDStrategy in identifying the multimedia-based workload type using small chunks of load.
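The classifier-selection step can be sketched as a simple scoring rule. This is an assumed formulation, not the paper's actual voting technique: accuracy and prediction time (normalized to the slowest model) are combined with a hypothetical `time_weight` knob, and the candidate with the best combined score wins.

```python
def select_classifier(results, time_weight=0.5):
    """Pick the classifier with the best accuracy/prediction-time trade-off.

    `results` maps classifier name -> (accuracy in [0, 1], prediction time in s).
    Times are divided by the slowest model's time so both metrics share a scale.
    `time_weight` is a hypothetical tuning knob, not a parameter from the paper.
    """
    slowest = max(t for _, t in results.values())

    def score(item):
        acc, t = item[1]
        return acc - time_weight * (t / slowest)

    return max(results.items(), key=score)[0]
```

With illustrative (made-up) measurements such as `{"DT": (0.85, 0.02), "K-NN": (0.83, 0.30), "RF": (0.86, 0.25)}`, the fast Decision Tree wins despite RF's slightly higher accuracy, mirroring the trade-off the abstract describes.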
Over the last two years, most scientists have been researching solutions to the coronavirus disease 2019 (COVID-19) pandemic. Effective inspection and rapid diagnosis of COVID-19 help mitigate the burden on healthcare systems. These research efforts focus on detecting infection and establishing its history in terms of time and developed symptoms. In infection detection, artificial intelligence (AI) technologies increase the accuracy and efficiency of the adopted detection methods, which aid medical staff in triaging patients, especially when healthcare resources are in short supply. This paper proposes machine learning-based models for estimating the time of COVID-19 infection, in weeks, using laboratory measurements of the detected antibodies immunoglobulin G and immunoglobulin M (IgG-IgM). This test is common and helpful in diagnosing suspected patients who received a negative result on the reverse transcription-polymerase chain reaction (RT-PCR) test. The proposed approach considers two machine learning models evaluated with the root mean square error (RMSE) and mean absolute error (MAE) metrics. The results show acceptable performance, ranging from 80% to 100%, in placing a patient in a particular week of infection, thereby reducing the likelihood of transmission from patients who have developed symptoms but received a false-negative RT-PCR result.
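The two evaluation metrics named in the abstract are standard and can be sketched directly; the toy week values below are illustrative, not data from the paper.

```python
import math

def rmse(actual, predicted):
    """Root mean square error between actual and predicted infection weeks."""
    n = len(actual)
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)

def mae(actual, predicted):
    """Mean absolute error between actual and predicted infection weeks."""
    n = len(actual)
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / n
```

For example, if a model predicts weeks `[1, 2, 4, 4]` against true weeks `[1, 2, 3, 4]`, MAE is 0.25 and RMSE is 0.5; RMSE penalizes large week-level misses more heavily, which is why the paper reports both.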