“…Previously, the researchers in [29,32,37–45] used various feature extraction and/or selection methods before feeding the obtained data to a classification algorithm for recognizing diverse human activities. In the HAR domain, ML models rely on handcrafted features, and such features require expert domain knowledge.…”
The self-regulated recognition of human activities from time-series smartphone sensor data is a growing research area in smart and intelligent health care. Deep learning (DL) approaches have exhibited improvements over traditional machine learning (ML) models in various domains, including human activity recognition (HAR). Traditional ML approaches involve several issues, including handcrafted feature extraction, a tedious and complex task requiring expert domain knowledge, and the use of a separate dimensionality reduction module to overcome overfitting and provide model generalization. In this article, we propose a DL-based approach for activity recognition with smartphone sensor data, i.e., accelerometer and gyroscope data. Convolutional neural networks (CNNs), autoencoders (AEs), and long short-term memory (LSTM) networks possess complementary modeling capabilities: CNNs are good at automatic feature extraction, AEs at dimensionality reduction, and LSTMs at temporal modeling. In this study, we take advantage of this complementarity by combining the three into a unified architecture. We evaluate the proposed architecture, "ConvAE-LSTM", on four standard public datasets (WISDM, UCI, PAMAP2, and OPPORTUNITY). The experimental results indicate that our approach is practical and improves smartphone-based HAR performance over existing state-of-the-art methods in terms of computational time, accuracy, F1-score, precision, and recall.
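The abstract gives no implementation details, but the standard first step for feeding smartphone accelerometer or gyroscope streams to a CNN is fixed-length sliding-window segmentation. A minimal NumPy sketch follows; the `sliding_windows` helper, window size, and step are illustrative choices, not parameters from the paper:

```python
import numpy as np

def sliding_windows(signal, window_size, step):
    """Segment a (time, channels) sensor stream into fixed-length windows.

    Typical preprocessing for CNN-based HAR: each window becomes one
    example for the automatic feature-extraction stage.
    """
    n_windows = 1 + (len(signal) - window_size) // step
    return np.stack([signal[i * step : i * step + window_size]
                     for i in range(n_windows)])

# Example: 10 s of 3-axis accelerometer data at 50 Hz, segmented into
# 128-sample windows with 50% overlap.
stream = np.random.randn(500, 3)
windows = sliding_windows(stream, window_size=128, step=64)
print(windows.shape)  # (6, 128, 3)
```

Each resulting `(128, 3)` window would then pass through the convolutional, autoencoder, and recurrent stages of an architecture like the one described.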
“…Compressive sensing has been investigated specifically for mobile activity monitoring by researchers such as Akimura et al [11], who reduce power consumption by 16% while maintaining a recognition accuracy of over 70% for the scripted motion-based activities of staying still, walking, jogging, skipping, climbing stairs, and descending stairs. Similarly, Jansi and Amutha maintain F-score, specificity, precision, and accuracy for the recognition of eight scripted movement-based activities using compressive sensing with a sparse-based classifier [12]. Hui et al found that they could use the compressed information directly to recognize six activities with an accuracy of 89.86% when combining compressive sensing with strategic placement of the mobile device on the body, and Braojos et al [19] quantify the precise relationship between wearable transmission volume and activity recognition sensitivity.…”
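The compressive sensing step common to these works amounts to projecting a sensor window onto far fewer random measurements, which can then be transmitted or classified directly. A minimal sketch, with an illustrative signal and matrix shape rather than parameters from the cited papers:

```python
import numpy as np

rng = np.random.default_rng(0)

# One axis of a 256-sample accelerometer window (synthetic 2 Hz motion
# at a 50 Hz sampling rate, purely for illustration).
x = np.sin(2 * np.pi * 2 * np.arange(256) / 50)

# Compressive sensing: y = Phi @ x with m << n random measurements.
# Downstream recognition can operate on y directly, avoiding
# transmission of the full window.
n, m = 256, 64
Phi = rng.standard_normal((m, n)) / np.sqrt(m)
y = Phi @ x

print(len(y) / len(x))  # compression ratio: 0.25
```

Recovering `x` from `y` (or classifying `y` directly, as Hui et al do) requires that the signal be sparse in some basis; that reconstruction step is omitted here.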
Section: Related Work
“…Energy consumption is a known obstacle to wearable computing in general and to activity monitoring in particular [11–15]. For complex activities, however, recognition and monitoring may require an even greater energy footprint.…”
Continuous monitoring of complex activities is valuable for understanding human behavior and providing activity-aware services. At the same time, recognizing these activities requires both movement and location information that can quickly drain batteries on wearable devices. In this paper, we introduce Change Point-based Activity Monitoring (CPAM), an energy-efficient strategy for recognizing and monitoring a range of simple and complex activities in real time. CPAM employs unsupervised change point detection to detect likely activity transition times. By adapting the sampling rate at each change point, CPAM reduces energy consumption by 74.64% while retaining the activity recognition performance of continuous sampling. We validate our approach using smartwatch data collected and labeled by 66 subjects. Results indicate that change point detection techniques can be effective for reducing the energy footprint of sensor-based mobile applications and that automated activity labels can be used to estimate sensor values between sampling periods.
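CPAM's key idea is to sample sparsely until an unsupervised detector flags a likely activity transition. The sketch below uses a deliberately simple mean-shift detector as a stand-in (the paper's actual change point detector differs); the window size and threshold are illustrative:

```python
import numpy as np

def detect_change_points(x, window=10, threshold=1.0):
    """Flag indices where the mean of adjacent windows shifts sharply.

    A simple, unsupervised stand-in for a change point detector: compare
    the mean of the samples just before index i with the mean just after.
    """
    cps = []
    for i in range(window, len(x) - window):
        left = x[i - window:i].mean()
        right = x[i:i + window].mean()
        if abs(right - left) > threshold:
            cps.append(i)
    return cps

# Piecewise-constant sensor feature with one activity transition at t=100.
x = np.concatenate([np.zeros(100), np.full(100, 3.0)])
cps = detect_change_points(x)
print(cps[0])  # first flagged index, close to the true transition at 100
```

In a CPAM-style system, the sampling rate would be raised in the neighborhood of each flagged index and lowered elsewhere, which is where the reported 74.64% energy saving comes from.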
“…Energy consumption can be improved in the data collection and filtering stage of HAR by reducing the number of sensors [61], reducing the amount of data on the sensor node [8,32], reducing the sampling rate [14,30,61,82,111,124,125], dynamically adjusting the sampling rate [124], using devices that support Kinetic Energy Harvesting (KEH), and adaptively selecting sensors during real-time data acquisition [61]. The impact of some of these mechanisms has been verified in practice and is listed in Table 6.…”
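As a back-of-the-envelope illustration of the sampling-rate mechanism: if per-sample acquisition cost dominates, sensing energy scales roughly linearly with the sampling rate. The rates and the linear model below are illustrative assumptions, not measurements from the surveyed papers:

```python
def relative_energy(rate_hz, baseline_hz=50.0):
    """Sensing energy at a reduced rate, relative to the baseline rate,
    under a simple linear per-sample cost model."""
    return rate_hz / baseline_hz

# Dropping from 50 Hz to 10 Hz under this model saves 80% of sensing energy,
# at the cost of coarser temporal resolution for recognition.
saving = 1.0 - relative_energy(10.0)
print(f"{saving:.0%}")  # 80%
```

Real savings are smaller because of fixed costs (radio, CPU wake-ups), which is why dynamic rate adjustment and KEH-assisted designs are studied alongside plain rate reduction.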
Section: The Optimization of Energy Consumption and Latency in HAR
Human activity recognition (HAR) is a classification process used for recognizing human motions. This paper presents a comprehensive review of the approaches currently considered in each stage of HAR, as well as the influence of each HAR stage on energy consumption and latency. It highlights various methods from the literature for optimizing energy consumption and latency in each stage of HAR, analyzed in order to provide direction for the implementation of HAR in health and wellbeing applications. The paper analyzes whether and how each stage of the HAR process affects energy consumption and latency. It shows that data collection and filtering and data segmentation and classification stand out as key stages in achieving a balance between energy consumption and latency. Since latency is critical only for real-time HAR applications, the energy consumption of sensors and devices stands out as a key challenge for HAR implementation in health and wellbeing applications. Most of the approaches to overcoming challenges in HAR implementation take place in the data collection, filtering, and classification stages, while the data segmentation stage needs further exploration. Finally, this paper recommends a balance between energy consumption and latency for HAR in health and wellbeing applications that takes into account the context and health of the target population.