The need to care for elderly people is increasing, and great efforts are being made to enable the elderly population to remain independent for as long as possible. Technologies are being developed to monitor a person's daily activities and infer their state, and approaches that recognize activities from simple environmental sensors have been shown to perform well. It is also important to know a resident's habits in order to distinguish between common and uncommon behavior. In this paper, we propose a novel approach to discovering a person's common daily routines. The approach combines sequence comparison with a clustering method to obtain partitions of daily routines; such partitions are the basis for detecting unusual sequences of activities in a person's day. Two types of partitions are examined: the first is based on daily activity vectors, and the second on sensor data. We show that daily activity vectors are needed to obtain reasonable results, and that partitions obtained with the generalized Hamming distance for sequence comparison are better than those obtained with the Levenshtein distance. Experiments are performed on two publicly available datasets.
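The distinction between the two distance measures can be illustrated with a small sketch. The activity labels and hourly time slots below are illustrative only, not taken from the datasets used in the paper: the generalized Hamming distance compares time-aligned daily activity vectors position by position, while the Levenshtein distance allows insertions and deletions and so ignores temporal alignment.

```python
def hamming(a, b):
    """Generalized Hamming distance: count positional mismatches
    between two equal-length activity sequences (one label per slot)."""
    assert len(a) == len(b)
    return sum(x != y for x, y in zip(a, b))

def levenshtein(a, b):
    """Standard edit distance (insertion/deletion/substitution, unit cost)."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

# Two hypothetical days, one activity label per time slot
day1 = ["sleep", "sleep", "cook", "eat", "tv", "sleep"]
day2 = ["sleep", "cook", "eat", "tv", "tv", "sleep"]
print(hamming(day1, day2))      # → 3 (positional, alignment-sensitive)
print(levenshtein(day1, day2))  # → 2 (alignment-free edit distance)
```

Because a shifted routine (e.g. cooking an hour later) changes many aligned positions but needs few edit operations, the two measures induce different day-to-day similarities and hence different clusterings.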
Ambient assisted living in smart home environments is becoming an important goal in an aging society facing challenges in elderly care. A key component of such environments is the accurate recognition of activities of daily living from various sensor data. Recent research has explored several classification methods, including hidden Markov models. This research presents a hidden Markov model-based system for activity recognition and extends it with a second-order Markov chain model of activity sequences to capture long-term dependencies. We also introduce an activity transition cost to counteract the tendency of hidden Markov models to make a large number of transitions. The proposed models are used for activity recognition, with their scores combined using heuristically determined weights for optimal performance. We also present a modified Viterbi algorithm that incorporates both models and the activity transition cost. We used a dataset from the CASAS project to test and evaluate the proposed models. A comparison of the results shows the potential of introducing long-term dependencies and of managing the number of activity transitions. We report results on the models' ability to predict activity sequences, a comparison of predicted and actual activity transitions, and final recognition accuracy. The results show an increase in total activity recognition accuracy from 93.9% to 94.52% on individual activities, and from 68.89% to 70.95% over the combination of all concurrent activities. They also show a reduction in predicted activity transitions from 741 to 236, whereas the number of actual activity transitions in the evaluation set is 141.

INDEX TERMS: Activities of daily living, hidden Markov models, Markov chain, pattern recognition, Viterbi algorithm.
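The effect of an activity transition cost on Viterbi decoding can be sketched as follows. This is a minimal illustration on a toy two-state HMM, not the paper's exact model, second-order extension, or weights: an additive log-space penalty is charged whenever the decoded activity changes, which suppresses spurious transitions caused by noisy observations.

```python
import numpy as np

def viterbi_with_switch_cost(log_pi, log_A, log_B, obs, switch_cost):
    """Viterbi decoding with an extra additive log-space penalty applied
    whenever the decoded hidden activity changes between time steps."""
    n_states = log_A.shape[0]
    T = len(obs)
    delta = np.zeros((T, n_states))          # best log score ending in state j
    psi = np.zeros((T, n_states), dtype=int)  # backpointers
    delta[0] = log_pi + log_B[:, obs[0]]
    for t in range(1, T):
        for j in range(n_states):
            # switching into state j from any other state costs extra
            scores = (delta[t - 1] + log_A[:, j]
                      - switch_cost * (np.arange(n_states) != j))
            psi[t, j] = np.argmax(scores)
            delta[t, j] = scores[psi[t, j]] + log_B[j, obs[t]]
    path = [int(np.argmax(delta[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t, path[-1]]))
    return path[::-1]

# Toy example: a single noisy observation at t=2 tempts the decoder
# to make two transitions; a large cost keeps the decoded path stable.
log_pi = np.log([0.5, 0.5])
log_A = np.log([[0.5, 0.5], [0.5, 0.5]])
log_B = np.log([[0.9, 0.1], [0.1, 0.9]])
obs = [0, 0, 1, 0, 0]
print(viterbi_with_switch_cost(log_pi, log_A, log_B, obs, 0.0))  # [0, 0, 1, 0, 0]
print(viterbi_with_switch_cost(log_pi, log_A, log_B, obs, 5.0))  # [0, 0, 0, 0, 0]
```

Raising the cost trades responsiveness for stability, mirroring the reported drop in predicted transitions (741 to 236) toward the 141 actual transitions.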
Machine translation has already become part of our everyday life. This chapter gives an overview of machine translation approaches. Statistical machine translation, the dominant approach over the past 20 years, has seen many cases of practical use and is described here in more detail. It is not equally successful for all language pairs: highly inflectional languages are hard to process, especially as target languages. As statistical machine translation has almost reached the limits of its capacity, neural machine translation is becoming the technology of the future. This chapter also describes the evaluation of machine translation quality, covering both manual and automatic evaluation, and presents traditional as well as recently proposed metrics for automatic evaluation. Human translation still provides the best quality, but it is, in general, time-consuming and expensive. Integration of human and machine translation is therefore a promising workflow for the future: machine translation will not replace human translation, but it can serve as a tool to increase productivity in the translation process.
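As an illustration of automatic evaluation, the most traditional metric, BLEU, combines clipped n-gram precisions with a brevity penalty. The sketch below is a minimal single-reference, sentence-level version; real toolkits such as sacreBLEU add smoothing, tokenization rules, and corpus-level aggregation.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate, reference, max_n=4):
    """Sentence-level BLEU sketch: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = ngrams(candidate, n), ngrams(reference, n)
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:          # unsmoothed: any zero precision -> 0
        return 0.0
    bp = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return bp * math.exp(sum(math.log(p) for p in precisions) / max_n)

ref = "the cat sat on the mat".split()
print(bleu(ref, ref))                            # perfect match → 1.0
print(bleu("the cat sat on a mat".split(), ref)) # one word off → between 0 and 1
```

Such surface n-gram matching is precisely where inflectional target languages are penalized: a correct but differently inflected word form counts as a miss.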
This paper presents a method of binocular visual stimulation for brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs) using phase-coded symbols. The emphasis of the proposed method is on a binocular phase-coded visual stimulus, based on the phase differences between the left- and right-eye stimuli, and on a symbol detection and recognition procedure based on the SSVEP responses recorded over the left and right occipital regions of the user's scalp as electroencephalography (EEG) signals. The symbols are coded as phase differences while maintaining the same frequency of the sine wave-modulated light delivered to the user's left and right eyes as binocular visual stimulation. Based on this method, a basic system setup is presented to explore the possibilities of binocular phase-coded visual stimuli for virtual or augmented reality applications, where binocular visual stimulation was achieved with specially designed head-mounted displays. Multiple visually coded targets are realized as eight different phase-coded binocular symbols and evaluated as a random sequence of single targets, representing situations in virtual or augmented reality where multiple visually coded targets are present but not shown to the user simultaneously within the same field of view. The offline results obtained from ten healthy subjects show that an average symbol recognition accuracy of 90.63% and an information transfer rate (ITR) of 70.55 bits/min were achieved for a symbol stimulation time of 2 s. These results demonstrate the feasibility of using binocular visual stimuli for SSVEP-based BCIs, where a reasonable ITR is achieved using single-frequency binocular phase-coded symbols.
The proposed method could also be combined with 3D wearable visualization technologies, such as binocular head-mounted displays (HMDs), to make BCI-based interaction more intuitive and the user experience more immersive.
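The reported ITR can be approximated with the standard Wolpaw formula. This is a sketch only: plugging in the reported operating point (8 symbols, 90.63% accuracy, 2 s per selection) gives roughly 68.6 bits/min, slightly below the reported 70.55 bits/min, since the exact figure depends on how the paper accounts for selection timing and accuracy.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits/min for an N-target BCI
    with per-symbol accuracy P and one selection every trial_seconds."""
    n, p = n_targets, accuracy
    bits = math.log2(n)                      # bits per error-free selection
    if 0 < p < 1:                            # penalty for misclassifications
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

print(round(wolpaw_itr(8, 0.9063, 2.0), 2))  # ≈ 68.65 bits/min
```

The formula makes the design trade-off explicit: shorter stimulation time raises ITR only as long as accuracy does not drop too far.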