The aging population, the prevalence of chronic diseases, and outbreaks of infectious diseases are among the major challenges of present-day society. To address these unmet healthcare needs, especially the early prediction and treatment of major diseases, health informatics, which deals with the acquisition, transmission, processing, storage, retrieval, and use of health information, has emerged as an active area of interdisciplinary research. In particular, the acquisition of health-related information by unobtrusive sensing and wearable technologies is considered a cornerstone of health informatics. Sensors can be woven or integrated into clothing, accessories, and the living environment, so that health information is acquired seamlessly and pervasively in daily living. Sensors can even be designed as stick-on electronic tattoos or printed directly onto human skin to enable long-term health monitoring. This paper provides an overview of four emerging unobtrusive and wearable technologies essential to the realization of pervasive health information acquisition: (1) unobtrusive sensing methods, (2) smart textile technology, (3) flexible-stretchable-printable electronics, and (4) sensor fusion. It then identifies several future directions of research.
After decades of evolution, measuring instruments for quantitative gait analysis have become an important clinical tool for assessing pathologies manifested by gait abnormalities. However, such instruments tend to be expensive and require expert operation and maintenance, limiting them to a small number of specialized centers. Consequently, gait analysis in most clinics today still relies on observation-based assessment. Recent advances in wearable sensors, especially inertial body sensors, have opened up a promising future for gait analysis. Not only can these sensors be adopted in clinical diagnosis and treatment procedures more easily than their current counterparts, but they can also monitor gait continuously outside clinics, providing seamless patient analysis from the clinic to free-living environments. The purpose of this paper is to provide a systematic review of current techniques for quantitative gait analysis and to propose key metrics for evaluating both existing and emerging methods for quantifying the gait features extracted from wearable sensors. It aims to highlight key advances in this rapidly evolving research field and to outline potential future directions for both research and clinical applications.
The increasing popularity of wearable devices in recent years means that a diverse range of physiological and functional data can now be captured continuously for applications in sports, wellbeing, and healthcare. This wealth of information requires efficient methods of classification and analysis, for which deep learning is a promising technique for large-scale data analytics. While deep learning has been successful in implementations that utilize high-performance computing platforms, its use on low-power wearable devices is limited by resource constraints. In this paper, we propose a deep learning methodology that combines features learned from inertial sensor data with complementary information from a set of shallow features to enable accurate and real-time activity classification. This combined method is designed to overcome some of the limitations of a typical deep learning framework when on-node computation is required. To optimize the proposed method for real-time on-node computation, spectral-domain preprocessing is applied before the data are passed to the deep learning framework. The classification accuracy of the proposed approach is evaluated against state-of-the-art methods using both laboratory and real-world activity datasets. Our results show the validity of the approach on different human activity datasets, outperforming other methods, including the two methods used within our combined pipeline. We also demonstrate that the computation times of the proposed method are consistent with the constraints of real-time on-node processing on smartphones and a wearable sensor platform.
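The spectral-domain preprocessing step described above can be illustrated with a minimal sketch: a window of raw inertial samples is detrended and mapped to a per-axis magnitude spectrum before being fed to a classifier. The function name, window length, and log compression here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def spectral_preprocess(window):
    """Map a raw inertial window (n_samples, n_axes) to a per-axis
    log-magnitude spectrum, a common way to gain some robustness to
    phase shifts before classification (illustrative sketch only)."""
    window = window - window.mean(axis=0)           # remove DC/gravity offset
    spectrum = np.abs(np.fft.rfft(window, axis=0))  # one-sided magnitude spectrum
    return np.log1p(spectrum)                       # compress dynamic range

# Illustrative usage: a 128-sample, 3-axis accelerometer window
rng = np.random.default_rng(0)
feats = spectral_preprocess(rng.standard_normal((128, 3)))
print(feats.shape)  # (65, 3): 128 // 2 + 1 frequency bins per axis
```

Working in the magnitude spectrum also shrinks the input that the on-node deep learning stage must process, which is one plausible reason such preprocessing helps meet real-time constraints.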
Activities of daily living are important for assessing changes in physical and behavioral profiles of the general population over time, particularly for the elderly and patients with chronic diseases. Although accelerometers have been used widely in wearable devices for activity classification, the positioning of the sensors and the selection of relevant features for different activity groups still pose significant research challenges. This paper investigates wearable sensor placement at different body positions and aims to provide a systematic framework that can answer the following questions: 1) What is the ideal sensor location for a given group of activities? and 2) Of the different time-frequency features that can be extracted from wearable accelerometers, which ones are the most relevant for discriminating different activity types?
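To make the second question concrete, the kind of time-frequency features typically extracted from an accelerometer window can be sketched as follows. The specific feature set (magnitude mean and standard deviation, dominant frequency, spectral energy) and the 50 Hz sampling rate are illustrative assumptions, not the feature set evaluated in the paper.

```python
import numpy as np

def shallow_features(window, fs=50.0):
    """Extract a few common time- and frequency-domain features from one
    accelerometer window (n_samples, 3); feature choice is illustrative."""
    mag = np.linalg.norm(window, axis=1)                # acceleration magnitude
    spectrum = np.abs(np.fft.rfft(mag - mag.mean()))    # magnitude spectrum, DC removed
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)       # frequency of each bin
    dom_freq = freqs[np.argmax(spectrum)]               # dominant frequency
    energy = np.sum(spectrum ** 2) / len(mag)           # spectral energy
    return np.array([mag.mean(), mag.std(), dom_freq, energy])

# Example: a 2 s window at 50 Hz with a walking-like oscillation on one axis
t = np.arange(100) / 50.0
window = np.stack([np.sin(2 * np.pi * 2 * t),
                   np.zeros_like(t),
                   np.full_like(t, 9.8)], axis=1)
print(shallow_features(window))
```

Feature-relevance questions of the kind posed above are then a matter of ranking such features (e.g., by mutual information with the activity label) separately for each candidate sensor location.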
Human activity recognition provides valuable contextual information for wellbeing, healthcare, and sport applications. Over the past decades, many machine learning approaches have been proposed to identify activities from inertial sensor data for specific applications. Most methods, however, are designed for offline processing rather than processing on the sensor node. In this paper, a human activity recognition technique based on a deep learning methodology is designed to enable accurate and real-time classification on low-power wearable devices. To obtain invariance against changes in sensor orientation, sensor placement, and sensor acquisition rate, we design a feature generation process that is applied to the spectral domain of the inertial data. Specifically, the proposed method uses sums of temporal convolutions of the transformed input. The accuracy of the proposed approach is evaluated against current state-of-the-art methods using both laboratory and real-world activity datasets. A systematic analysis of the feature generation parameters and a comparison of activity recognition computation times on mobile devices and sensor nodes are also presented.
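The "sums of temporal convolutions of the transformed input" idea might be sketched roughly as below: small kernels are convolved along the spectral axis of each channel and each response is summed to a scalar feature. The fixed kernel values and sizes here are illustrative assumptions; in the method described above such kernels would be part of a learned deep network, not hand-chosen.

```python
import numpy as np

def conv_sum_features(spectrum, kernels):
    """Convolve each kernel along the spectral axis of each channel and
    sum the response, yielding one scalar feature per (kernel, channel).
    This only sketches the 'sums of convolutions' idea; real kernels
    would be learned rather than fixed."""
    feats = []
    for k in kernels:
        for ch in range(spectrum.shape[1]):
            feats.append(np.convolve(spectrum[:, ch], k, mode='valid').sum())
    return np.array(feats)

# Illustrative usage on a 3-channel magnitude spectrum
rng = np.random.default_rng(1)
spectrum = np.abs(np.fft.rfft(rng.standard_normal((128, 3)), axis=0))
kernels = [np.array([1.0, -1.0]),          # difference (edge-like) kernel
           np.array([0.25, 0.5, 0.25])]    # smoothing kernel
print(conv_sum_features(spectrum, kernels).shape)  # (6,): 2 kernels x 3 channels
```

Because features depend on the spectrum's shape rather than on raw sample ordering, constructions of this kind are one way to reduce sensitivity to sensor orientation and acquisition rate.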