Background: Ambiguities and anomalies in activity of daily living (ADL) patterns indicate deviations from wellness. Lifestyle monitoring can help remote physicians or caregivers gain insight into disease symptoms and provide health-improvement advice to residents. Objective: This work applies lifestyle monitoring in an ambient assisted living (AAL) system by recognizing behavior and detecting deviations from the norm with the fewest possible false alarms. In pursuing this aim, the main objective is to fill the knowledge gap of two contextual observations (i.e., day and time) in frequent-behavior modeling for an individual in AAL. Each sensing category has its advantages and limitations; a single type of sensing unit may not handle composite states in practice and can miss activities of daily living. To boost system efficiency, we present a sensor data fusion technique that combines different sensing modalities. Methods: As behaviors may also change with other contextual observations, including season, weather (or temperature), and social interaction, we propose a novel activity learning model that incorporates these behavioral observations, which we name the wellness indices analysis model. Results: Ground-truth data, including daily activities, were collected from four elderly residents' houses, with a sample size of three hundred days of sensor activations. The experimental results validate the effectiveness of our method: the new feature set from sensor data fusion improves system accuracy from 80.81% ± 0.68 to 98.17% ± 0.95. The performance of the proposed model for ADL recognition is reported for the 14 selected activities: sensitivity 0.9852, specificity 0.9988, accuracy 0.9974, F1 score 0.9851, and false negative rate 0.0130.
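As an illustration of feature-level sensor fusion with the two contextual observations (day and time) described above, here is a minimal sketch; the sensor streams, feature choices, and function names are hypothetical, not the authors' implementation:

```python
import numpy as np

def fuse_features(motion_counts, door_events, hour_of_day, day_of_week):
    """Feature-level fusion: concatenate per-window summaries from two
    sensing modalities with the two contextual observations (day, time).
    All inputs are hypothetical stand-ins for the paper's sensor streams."""
    motion = np.asarray(motion_counts, dtype=float)
    door = np.asarray(door_events, dtype=float)
    # Simple per-modality summaries (mean activation and total count).
    modality_feats = [motion.mean(), motion.sum(), door.mean(), door.sum()]
    # Encode the hour cyclically so 23:00 and 00:00 end up close in feature space.
    context_feats = [np.sin(2 * np.pi * hour_of_day / 24),
                     np.cos(2 * np.pi * hour_of_day / 24),
                     day_of_week / 6.0]
    return np.array(modality_feats + context_feats)

# One fused feature vector for a morning window on a Wednesday.
vec = fuse_features([1, 0, 1, 1], [0, 1], hour_of_day=7, day_of_week=2)
```

A vector like this would feed whatever classifier the AAL system uses downstream; the cyclic time encoding is one common way to avoid a discontinuity at midnight.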
In our study, Medline and Google Scholar were the key search engines used to find literature, with keywords such as epidemiology, pathogenesis, clinical features, management, and complications of ocular rosacea.
Oral mucosal lesions (OMLs) and oral potentially malignant disorders (OPMDs) have been identified as having the potential to transform into oral squamous cell carcinoma (OSCC). This research focuses on a human-in-the-loop system named Healthcare Professionals in the Loop (HPIL) to support diagnosis through an advanced machine learning procedure. HPIL is a novel system based on the textural pattern of OMLs and OPMDs (anomalous regions), differentiating them from standard regions of the oral cavity using autofluorescence imaging. This paper proposes a method comprising pre-processing (the Deriche–Canny edge detector and circular Hough transform (CHT)); post-processing textural analysis using the gray-level co-occurrence matrix (GLCM); and feature selection with linear discriminant analysis (LDA), followed by a k-nearest neighbor (KNN) classifier to separate OPMDs from standard regions. The accuracy, sensitivity, and specificity in differentiating between standard and anomalous regions of the oral cavity are 83%, 85%, and 84%, respectively. Performance was evaluated with receiver operating characteristic curves of periodontist diagnosis with and without the HPIL system. This method of classifying OML and OPMD areas may help dental specialists identify anomalous regions for biopsy more efficiently and predict the histological diagnosis of epithelial dysplasia.
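A minimal sketch of the texture-analysis and classification stages (GLCM features followed by a 1-nearest-neighbor rule), assuming a quantized grayscale patch; this is an illustrative reconstruction, not the HPIL code:

```python
import numpy as np

def glcm_features(patch, levels=8):
    """Gray-level co-occurrence matrix (horizontal neighbor, distance 1)
    reduced to two classic Haralick features: contrast and homogeneity."""
    q = (patch * levels / (patch.max() + 1)).astype(int)  # quantize to `levels` bins
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                                    # count co-occurring pairs
    glcm /= glcm.sum()                                     # normalize to probabilities
    idx_i, idx_j = np.indices(glcm.shape)
    contrast = (glcm * (idx_i - idx_j) ** 2).sum()
    homogeneity = (glcm / (1 + np.abs(idx_i - idx_j))).sum()
    return np.array([contrast, homogeneity])

def knn_predict(train_feats, train_labels, feat):
    """1-nearest-neighbor classification in feature space (k = 1 for brevity)."""
    d = np.linalg.norm(train_feats - feat, axis=1)
    return train_labels[np.argmin(d)]
```

A perfectly uniform patch yields contrast 0 and homogeneity 1; anomalous regions with irregular texture would produce higher contrast, which is the separation the KNN stage exploits.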
Objective: Classification of sleep-wake states using multichannel electroencephalography (EEG) data that works reliably for neonates. Methods: A deep multilayer perceptron (MLP) neural network is developed to classify sleep-wake states from multichannel bipolar EEG signals; it takes an input vector of size 108 containing the joint features of 9 channels. The network avoids any post-processing step so that it can run as a full-fledged real-time application. For training and testing the model, 3525 thirty-second EEG segments from 19 neonates (postmenstrual age of 37 ± 5 weeks) are used. Results: For sleep-wake classification, the mean Cohen's kappa between the network estimate and the ground-truth annotation by human experts is 0.62. The maximum mean accuracy reaches 83%, which, to date, is the highest reported accuracy for sleep-wake classification.
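A hedged sketch of how such a classifier might be assembled with scikit-learn; the hidden-layer sizes, the synthetic data, and the per-channel feature count (9 channels × 12 features = 108) are assumptions for illustration, not the paper's architecture:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
# 9 bipolar EEG channels x 12 features per channel = input vector of size 108
# (the per-channel feature count is an assumption for illustration).
X = rng.normal(size=(200, 108))
y = rng.integers(0, 2, size=200)          # 0 = sleep, 1 = wake (synthetic labels)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
clf.fit(X, y)
pred = clf.predict(X[:5])                 # one label per 30-second segment
```

Because the network outputs a label directly per segment, no post-processing (e.g., smoothing across segments) is needed, which matches the real-time design goal stated in the abstract.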
Human action recognition (HAR) is the classification of an action performed by a human. The goal of this study was to recognize human actions in video sequences. We present a novel feature descriptor for HAR that computes multiple features and combines them using a fusion technique. The major focus of the feature descriptor is to exploit dissimilarities between actions. The key contribution of the proposed approach is a robust feature descriptor that works across the underlying video sequences and various classification models. To achieve this objective, HAR is performed as follows. First, the moving object is detected and segmented from the background. Features are then calculated using the histogram of oriented gradients (HOG) of the segmented moving object. To reduce the descriptor size, the HOG features are averaged across non-overlapping video frames. For frequency-domain information, regional features are calculated using Fourier HOG. The velocity and displacement of the moving object are also included. Finally, a fusion technique combines these features into the proposed descriptor. Once the feature descriptor is prepared, it is provided to a classifier. We use well-known classifiers such as artificial neural networks (ANNs), support vector machines (SVMs), multiple kernel learning (MKL), the meta-cognitive neural network (McNN), and late fusion methods. The main objective is to prepare a robust feature descriptor and demonstrate its generality: although we use five different classifiers, our feature descriptor performs relatively well across all of them.
The proposed approach is evaluated and compared with state-of-the-art action recognition methods on two publicly available benchmark datasets (KTH and Weizmann), and with cross-validation on the UCF11, HMDB51, and UCF101 datasets. Results of control experiments, such as a change of SVM classifier and the effect of a second hidden layer in the ANN, are also reported. The results demonstrate that the proposed method performs comparably to the majority of existing state-of-the-art methods, including convolutional neural network-based feature extractors.
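The descriptor-building steps above (frame-averaged HOG fused with the velocity and displacement of the moving object) can be sketched roughly as follows; the coarse whole-frame HOG and all parameter choices are simplifications for illustration, not the authors' implementation:

```python
import numpy as np

def hog_hist(frame, bins=9):
    """Coarse whole-frame histogram of oriented gradients (illustrative;
    a real HOG descriptor uses cells, blocks, and block normalization)."""
    gy, gx = np.gradient(frame.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)        # unsigned orientation in [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-8)

def descriptor(frames, centroids):
    """Average the HOG across frames, then fuse with motion features:
    mean per-step velocity and total displacement of the object centroid."""
    hog_avg = np.mean([hog_hist(f) for f in frames], axis=0)
    c = np.asarray(centroids, dtype=float)
    step = np.diff(c, axis=0)                      # centroid motion between frames
    velocity = np.linalg.norm(step, axis=1).mean()
    displacement = np.linalg.norm(c[-1] - c[0])
    return np.concatenate([hog_avg, [velocity, displacement]])
```

Averaging across frames keeps the descriptor size fixed regardless of clip length, which is what makes the same vector usable by all five classifiers in the study.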
In recent times, with the advancement of digital imaging, automatic facial recognition has been studied intensively for adults, but much less for neonates. Because of their miniature facial structure and attributes, newborn facial recognition remains a challenging area. In this paper, an automatic video-based neonatal face attributes recognition (NFAR) approach in a hierarchical framework is proposed by combining an intensity-based method, pose estimation, and a novel dedicated neonatal face feature selection (FFS) algorithm. The intensity-based method is used for face detection, after which the facial pose estimation algorithm and FFS handle neonatal pose and face feature recognition, respectively. In this study, video data of 19 neonates were collected from the Children's Hospital affiliated with Fudan University, Shanghai, to evaluate the proposed NFAR approach. The results show promising performance for neonatal face detection, pose estimation (−45°, 45°), and facial feature (nose, mouth, and eyes) recognition. The NFAR approach exhibits a sensitivity, accuracy, and specificity of 98.7%, 98.5%, and 95.7%, respectively, for newborn babies in the frontal (0°) facial region. Neonatal face and attribute recognition can be expected to detect neonates' medical abnormalities unobtrusively by examining variations in newborn facial texture patterns. INDEX TERMS Neonatal face detection, facial feature selection (FFS), neonatal pose estimation, neonatal face attributes recognition (NFAR), video electroencephalogram (VEEG).
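A minimal sketch of the hierarchical flow described (face detection, then pose estimation, then feature recognition gated on a near-frontal pose); all stage implementations here are hypothetical stubs, passed in as callables:

```python
def recognize(frame, detect, estimate_pose, select_features, frontal=45.0):
    """Hierarchical pipeline: each stage runs only if the previous one
    succeeds, and feature recognition is attempted only within the
    (-frontal, +frontal) yaw range reported in the abstract."""
    face = detect(frame)
    if face is None:                       # no face found: stop early
        return {"face": False}
    yaw = estimate_pose(face)
    result = {"face": True, "yaw": yaw}
    if -frontal <= yaw <= frontal:         # near-frontal: run FFS stage
        result["features"] = select_features(face)
    return result

# Stub stages stand in for the intensity-based detector, pose estimator, and FFS.
out = recognize("frame", lambda f: f, lambda f: 10.0,
                lambda f: ["eyes", "nose", "mouth"])
```

Gating the feature stage on pose mirrors why the reported accuracy is highest at the frontal (0°) region: the feature detector only sees faces it can plausibly handle.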