In this paper, we perform a systematic study of on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed consisting of eight body-worn Inertial Measurement Unit (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data acquired in both real-world and controlled environments. From the experimental results, we find that activity data sampled at a rate as low as 10 Hz from four sensors placed at both wrists, the right ankle, and the waist is sufficient to recognize Activities of Daily Living (ADLs), including eating and driving. We adopt a two-level ensemble model to combine the class probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve classification performance. By analyzing the accuracy of each sensor on different types of activity, we derive custom weights for multimodal sensor fusion that reflect the characteristics of individual activities.
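The classifier-level fusion described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the probabilities and the per-(sensor, activity) weight matrix below are invented for the example, standing in for the softmax outputs of per-sensor classifiers and the accuracy-derived custom weights.

```python
import numpy as np

def fuse_probabilities(sensor_probs, weights):
    """Classifier-level fusion: weight each sensor's class probabilities,
    sum across sensors, and renormalize.

    sensor_probs: (n_sensors, n_classes) softmax outputs for one window.
    weights: (n_sensors, n_classes) fusion weights reflecting each
    sensor's accuracy on each activity class.
    """
    fused = (sensor_probs * weights).sum(axis=0)
    return fused / fused.sum()

# Hypothetical example: 3 sensors, 2 activities (say, eating vs. driving).
probs = np.array([[0.6, 0.4],   # wrist-sensor classifier
                  [0.3, 0.7],   # ankle-sensor classifier
                  [0.5, 0.5]])  # waist-sensor classifier
w = np.array([[2.0, 1.0],       # wrist trusted more for activity 0
              [1.0, 2.0],       # ankle trusted more for activity 1
              [1.0, 1.0]])
fused = fuse_probabilities(probs, w)  # -> normalized fused probabilities
```

With these illustrative weights the ankle sensor's confident vote for activity 1 dominates, even though the wrist sensor preferred activity 0.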
To understand the multilateral characteristics of human behavior and the physiological markers related to physical, emotional, and environmental states, extensive lifelog data collection in a real-world environment is essential. Here, we propose a data collection method using multimodal mobile sensing and present a long-term dataset from 22 subjects and 616 days of experimental sessions. The dataset contains over 10,000 hours of data, including physiological data such as photoplethysmography, electrodermal activity, and skin temperature, in addition to multivariate behavioral data. Furthermore, it includes 10,372 user-annotated emotional-state labels and 590 days of sleep quality data. To demonstrate feasibility, human activity recognition was applied to the sensor data using a convolutional neural network-based deep learning model, achieving 92.78% recognition accuracy. From the activity recognition results, we extracted daily behavior patterns and discovered five representative models by applying spectral clustering. This demonstrates that the dataset contributes toward understanding human behavior through multimodal data accumulated in daily life under natural conditions.
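The spectral-clustering step above can be illustrated in miniature. The sketch below is a hand-rolled two-cluster spectral clustering (RBF affinity, normalized graph Laplacian, sign of the Fiedler vector) over made-up daily activity-duration vectors; the real work clusters full daily behavior patterns into five groups, so everything here, including the data and sigma, is an illustrative assumption.

```python
import numpy as np

def spectral_cluster_2(X, sigma=3.0):
    """Partition rows of X into 2 clusters via the sign of the Fiedler
    vector (second eigenvector) of the normalized graph Laplacian."""
    # Pairwise squared distances between daily behavior vectors
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    A = np.exp(-d2 / (2 * sigma ** 2))     # RBF affinity matrix
    np.fill_diagonal(A, 0.0)
    D = A.sum(1)                            # node degrees
    L = np.eye(len(X)) - A / np.sqrt(np.outer(D, D))  # normalized Laplacian
    vals, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    fiedler = vecs[:, 1]
    return (fiedler > 0).astype(int)

# Hypothetical "days" as (sedentary hours, active hours) vectors:
# two mostly-sedentary days and two mostly-active days.
days = np.array([[8.0, 1.0], [7.5, 1.2], [2.0, 6.0], [1.8, 6.5]])
labels = spectral_cluster_2(days)  # separates the two day profiles
```

In practice a library routine such as scikit-learn's SpectralClustering would be used for k > 2 clusters; the point here is only the Laplacian-eigenvector mechanism.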
Speech emotion recognition (SER) is a natural method of recognizing individual emotions in everyday life. To deploy SER models in real-world applications, some key challenges must be overcome, such as the lack of datasets tagged with emotion labels and the weak generalization of SER models to unseen target domains. This study proposes a multi-path and group-loss-based network (MPGLN) for SER to support multi-domain adaptation. The proposed model includes a bidirectional long short-term memory-based temporal feature generator and a feature extractor transferred from the pre-trained VGG-like audio classification model (VGGish), and it learns simultaneously from multiple losses according to the association of emotion labels in the discrete and dimensional models. To evaluate MPGLN SER on multi-cultural domain datasets, the Korean Emotional Speech Database (KESD), including KESDy18 and KESDy19, is constructed, and the English-language Interactive Emotional Dyadic Motion Capture database (IEMOCAP) is used. The evaluations of multi-domain adaptation and domain generalization showed F1-score improvements of 3.7% and 3.5%, respectively, when comparing MPGLN SER against a baseline SER model that uses only a temporal feature generator. We show that MPGLN SER efficiently supports multi-domain adaptation and reinforces model generalization.
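The multiple-loss idea can be sketched as a weighted sum of a discrete-label loss and a dimensional-label loss over a shared representation. This is only a schematic of the general technique, not the MPGLN group loss itself: the logits, arousal/valence targets, and weighting coefficients below are illustrative assumptions.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())  # shift for numerical stability
    return e / e.sum()

def multi_loss(logits, class_idx, dim_pred, dim_target, alpha=1.0, beta=0.5):
    """Combine a discrete-emotion cross-entropy term with a dimensional
    (e.g. arousal/valence) regression term into one training objective."""
    ce = -np.log(softmax(logits)[class_idx])     # discrete-label loss
    mse = ((dim_pred - dim_target) ** 2).mean()  # dimensional-label loss
    return alpha * ce + beta * mse

# Hypothetical single utterance: 3 discrete emotion classes, 2 dimensions.
loss = multi_loss(np.array([2.0, 0.5, 0.1]),       # logits over emotions
                  class_idx=0,                      # true discrete label
                  dim_pred=np.array([0.6, 0.2]),    # predicted arousal/valence
                  dim_target=np.array([0.5, 0.3]))  # annotated values
```

Training against both terms simultaneously is what lets one embedding serve datasets labeled under either scheme.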
Sleep is one of the most important factors in maintaining both physical and mental health. There are many causes of sleep problems, and maintaining a healthy lifestyle is generally necessary to avoid them. In the medical field, information related to sleep problems, including lifestyle information, is obtained through interviews, but this approach is limited because it depends on the patient's memory. Thus, many studies have adopted ecological momentary assessments (EMAs) to collect patients' lifestyle data, and some also use smart devices to collect the data effectively. However, these studies focus on specific factors, such as smoking or exercising, and therefore cannot reflect the complex narrative of lifestyle patterns. We therefore propose indicators built from EMA data for assessing everyday sleep quality; these indicators capture complex lifestyle contexts in a quantitative manner. First, we collected real-life data with a smartphone in a 4-week data collection experiment. Second, we developed a method for generating daily indexes from the self-report data that reflect geospatial and social habits, social condition, activity level, and emotional condition. Third, we evaluated whether these daily indexes could supplement the EMA-based features drawn from conventional sleep questionnaires. The analysis targets five metrics of sleep quality that explain perceived sleep quality. The results indicate that using both the daily indexes and the sleep questionnaire features leads to better prediction of sleep quality, and they show the potential of mobile devices and EMAs for generating indicators that capture complex human behaviors. Further research on user-friendly data acquisition methods and more diverse lifestyle information would be useful for supporting behavioral decisions toward better sleep, both in wellbeing services and in specialized medical fields.
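The daily-index step can be illustrated with a minimal sketch: aggregate each day's EMA self-reports into per-day index values that later feed the sleep-quality predictor. The record fields (`activity`, `emotion`) and rating values below are hypothetical placeholders, not the study's actual schema.

```python
from collections import defaultdict
from statistics import mean

# Hypothetical EMA self-reports: several prompts per day, each carrying
# a self-rated activity level and emotional condition (1-5 scale assumed).
reports = [
    {"day": "2023-05-01", "activity": 3, "emotion": 4},
    {"day": "2023-05-01", "activity": 5, "emotion": 2},
    {"day": "2023-05-02", "activity": 2, "emotion": 5},
]

def daily_indexes(reports):
    """Collapse momentary self-reports into one index value per day
    and per dimension, here by taking the within-day mean."""
    by_day = defaultdict(list)
    for r in reports:
        by_day[r["day"]].append(r)
    return {
        day: {
            "activity_index": mean(r["activity"] for r in rs),
            "emotion_index": mean(r["emotion"] for r in rs),
        }
        for day, rs in by_day.items()
    }

indexes = daily_indexes(reports)
```

Each day's row of indexes can then be joined with that night's sleep-quality metrics to form the prediction dataset.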