Bone fractures are a common injury, occurring when excessive pressure is applied to a bone, in minor accidents, or as a consequence of conditions such as osteoporosis and bone cancer. Accurate diagnosis of bone fractures is therefore an important task in medicine. In this work, X-ray and CT images are used for bone fracture analysis. The main aim of this project is to develop an efficient image-processing system for quick and accurate classification of bone fractures based on information extracted from X-ray and CT images of the skull. X-ray and CT scan images of fractured bones were collected from a hospital, and techniques including pre-processing, segmentation, edge detection, and feature extraction were applied. The images were evaluated both on individual slices and on grouped slices for each patient. A patient's CT scan or X-ray was classified as fractured if two consecutive slices were assigned a fracture probability higher than 0.99. The results on patient X-ray images show that the model achieves an accuracy of 80% for maxillofacial fractures. Although the MFDS model does not replace the radiologist's work, it provides valuable assistive support: it reduces human error in medical practice, prevents harm to patients by minimizing diagnostic delays, and reduces the burden of unnecessary hospitalization.
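The consecutive-slice decision rule described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name, the use of NumPy, and the input format (a list of per-slice fracture probabilities) are assumptions; only the "two consecutive slices above 0.99" rule comes from the text.

```python
import numpy as np

def classify_patient(slice_probs, threshold=0.99):
    """Label a patient's scan as fractured if two consecutive
    slices both exceed the fracture-probability threshold.

    Illustrative sketch: the 0.99 threshold is from the paper;
    everything else (name, signature) is an assumption.
    """
    p = np.asarray(slice_probs, dtype=float)
    # Compare every adjacent pair of slice probabilities.
    return bool(np.any((p[:-1] > threshold) & (p[1:] > threshold)))
```

For example, a scan with per-slice probabilities `[0.2, 0.995, 0.999, 0.4]` would be labeled fractured (two consecutive slices exceed 0.99), while `[0.995, 0.5, 0.999]` would not.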
As a key research subject in the fields of health and human-machine interaction, human activity recognition (HAR) has become a major research focus over the past few decades. Many artificial-intelligence-based models have been developed for activity recognition; however, these algorithms often fail to extract both spatial and temporal properties, resulting in poor performance on real-world, long-term HAR. A further limitation in the literature is the scarcity of publicly available datasets for physical activity recognition, and those that exist cover only a small number of activities. In this paper, a hybrid model for activity recognition that combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network is developed. The CNN is used to extract spatial characteristics, while the LSTM learns time-related information. An extensive ablation study across a variety of traditional and deep machine-learning models is carried out to identify the best-performing HAR solution. The CNN approach achieves a precision of 90.89%, indicating that the model is suitable for HAR applications.
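The division of labor in the hybrid model (CNN for spatial features, LSTM for temporal context) can be sketched with a toy NumPy example: a 1-D convolution extracts local features from windows of a sensor signal, and a simple gated recurrence carries information forward across time steps. All layer sizes, weights, and function names here are illustrative assumptions standing in for the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d_features(x, kernels):
    """Slide each kernel over the signal and apply ReLU:
    a toy stand-in for the CNN's spatial feature extractor."""
    k = kernels.shape[1]
    windows = np.stack([x[i:i + k] for i in range(len(x) - k + 1)])
    return np.maximum(windows @ kernels.T, 0.0)  # shape: (steps, n_kernels)

def recurrent_summary(features, w_h, w_x):
    """A toy recurrence that accumulates temporal context across
    time steps: a stand-in for the LSTM layer."""
    h = np.zeros(w_h.shape[0])
    for f in features:
        h = np.tanh(w_h @ h + w_x @ f)
    return h  # final hidden state summarizing the sequence

signal = rng.standard_normal(64)         # one channel of sensor readings
kernels = rng.standard_normal((4, 5))    # 4 convolutional filters of width 5
feats = conv1d_features(signal, kernels)        # spatial features per step
w_h = rng.standard_normal((8, 8)) * 0.1
w_x = rng.standard_normal((8, 4)) * 0.1
embedding = recurrent_summary(feats, w_h, w_x)  # temporal summary vector
```

In the full model, the final hidden state would feed a classifier head that predicts the activity label; here it is left as an embedding to keep the sketch minimal.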