Ubiquitous Life Care (u-Life care) is receiving attention because it provides high-quality care services at low cost. To provide spontaneous and robust healthcare services, knowledge of a patient's real-time daily-life activities is required. Context information about these activities can help provide better services and improve healthcare delivery. The performance and accuracy of existing life care systems are not reliable, even with a limited number of services. This paper presents a Human Activity Recognition Engine (HARE) that monitors human health as well as activities using heterogeneous sensor technology and processes these activities intelligently on a Cloud platform to provide improved care at low cost. We focus on activity recognition using video-based, wearable-sensor-based, and location-based activity recognition engines, and then use intelligent processing to analyze the context of the activities performed. The experimental results for all components showed good accuracy compared with existing techniques. The system is deployed on the Cloud for Alzheimer's disease patients (as a case study) with four activity recognition engines that identify low-level activities from the raw data captured by sensors. These activities are then processed using an ontology to infer higher-level activities and to make decisions about a patient's activity using patient profile information and customized rules.
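The abstract describes inferring higher-level activities from low-level labels via customized rules and patient profile information. A minimal sketch of that rule-based inference step is shown below; the activity names, rule conditions, and alert labels are illustrative assumptions, not the paper's actual ontology or rules.

```python
# Hedged sketch: mapping low-level activity labels plus patient profile
# information to a higher-level decision with simple customized rules.
# All labels and rules here are hypothetical examples.

def infer_high_level(events, profile):
    """Return a high-level situation label for a sequence of
    low-level activity labels and a patient profile dict."""
    # Example rule: repeated wandering is flagged for Alzheimer's patients.
    if events[-3:] == ["wandering"] * 3 and profile.get("condition") == "alzheimers":
        return "wandering_alert"
    # Example rule: lying down outside the bedroom suggests a fall.
    if "lying_down" in events and profile.get("location") != "bedroom":
        return "possible_fall_alert"
    return "normal"

events = ["walking", "wandering", "wandering", "wandering"]
print(infer_high_level(events, {"condition": "alzheimers"}))  # wandering_alert
```

In the paper this reasoning is driven by an ontology rather than hard-coded conditionals, but the input/output shape (low-level labels in, high-level decision out) is the same.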
Over the last decade, facial expression recognition (FER) has emerged as an important research area. Several factors make FER a challenging problem: varying lighting conditions in training and test images; the need for automatic and accurate face detection before feature extraction; and high similarity among different expressions, which makes them difficult to distinguish with high accuracy. This work implements a hierarchical linear discriminant analysis-based facial expression recognition (HL-FER) system to tackle these problems. Unlike previous systems, the HL-FER uses a pre-processing step to eliminate lighting effects, incorporates a new automatic face detection scheme, employs methods to extract both global and local features, and uses a hierarchical classification scheme to overcome the high similarity among different expressions. Unlike most previous works, which were evaluated on a single dataset, the performance of the HL-FER is assessed on three publicly available datasets under three experimental settings: n-fold cross-validation based on subjects for each dataset separately; n-fold cross-validation across datasets; and, finally, a set of experiments to assess the effectiveness of each module of the HL-FER separately. A weighted average recognition accuracy of 98.7% across the three datasets, using three classifiers, indicates the success of the HL-FER for facial expression recognition.
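The core classifier named in this abstract is linear discriminant analysis. A minimal two-class Fisher LDA sketch on synthetic feature vectors follows; the data, dimensions, and threshold rule are illustrative assumptions and do not reproduce the paper's hierarchical pipeline or its face features.

```python
# Hedged sketch: two-class Fisher LDA on synthetic feature vectors.
import numpy as np

def fisher_lda_direction(X0, X1):
    """Direction w maximizing between-class separation relative to
    within-class scatter for two classes: w ∝ Sw^{-1}(m1 - m0)."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(42)
X0 = rng.normal(0.0, 1.0, size=(100, 5))  # class 0 features
X1 = rng.normal(2.0, 1.0, size=(100, 5))  # class 1 features
w = fisher_lda_direction(X0, X1)

# Classify by projecting onto w and thresholding at the midpoint
# between the two projected class means.
thresh = (X0.mean(axis=0) + X1.mean(axis=0)) @ w / 2
acc = (np.sum(X0 @ w <= thresh) + np.sum(X1 @ w > thresh)) / 200
```

A hierarchical scheme like the HL-FER would stack such discriminants: first separating expression groups, then individual expressions within each group.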
Automatic bone segmentation of computed tomography (CT) images is an important step in image-guided surgery that requires both high accuracy and minimal user interaction. Previous attempts include global thresholding, region growing, region competition, watershed segmentation, and parametric active contour (AC) approaches, but none achieves fully satisfactory performance. Recently, geometric or level-set-based AC models have been developed that appear to have characteristics suitable for automatic bone segmentation, such as insensitivity to initialization and topological adaptability. In this study, we tested the feasibility of five level-set-based AC approaches for automatic CT bone segmentation on both synthetic and real CT images: the geometric AC, geodesic AC, gradient vector flow fast geometric AC, Chan-Vese (CV) AC, and our proposed density-distance-augmented CV AC (Aug. CV AC). Qualitative and quantitative evaluations were made against segmentation results from standard commercial software and a medical expert. The first three models were robust to various image contrasts, but their performance degraded considerably as the noise level increased. In contrast, the CV AC was more robust to noise yet dependent on image contrast. The Aug. CV AC was robust to both noise and contrast levels and yielded improved performance on a set of real CT data compared with the commercial software, demonstrating its suitability for automatic bone segmentation of CT images.
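The Chan-Vese model central to this abstract segments an image into two regions of roughly constant intensity. The sketch below isolates only the model's data term (region means c1, c2 and pixel reassignment), omitting the level-set evolution and the curvature/length regularization that the full CV AC uses; the synthetic "bright structure on darker background" image is an illustrative stand-in for CT data.

```python
# Hedged sketch: the Chan-Vese piecewise-constant data term only,
# iterated as alternating mean updates and pixel reassignment.
import numpy as np

def cv_data_term(img, n_iter=20):
    """Alternately update the two region means c1, c2 and assign each
    pixel to the region whose mean it is closer to, i.e. where
    (I - c1)^2 < (I - c2)^2. No length penalty is applied."""
    mask = img > img.mean()  # crude initialization
    for _ in range(n_iter):
        c1, c2 = img[mask].mean(), img[~mask].mean()
        new_mask = (img - c1) ** 2 < (img - c2) ** 2
        if np.array_equal(new_mask, mask):
            break  # converged
        mask = new_mask
    return mask

# Synthetic image: bright disc (radius 15) on a darker background.
yy, xx = np.mgrid[0:64, 0:64]
img = np.where((yy - 32) ** 2 + (xx - 32) ** 2 < 15 ** 2, 1.0, 0.2)
img += np.random.default_rng(0).normal(0, 0.05, img.shape)  # mild noise
seg = cv_data_term(img)
```

Because the data term depends only on the two region means, this fragment already hints at the abstract's finding that the CV model tolerates noise well but depends on sufficient contrast between the regions.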