Growing life expectancy and the increasing incidence of multiple chronic health conditions are significant societal challenges. Different technologies have been proposed to address these issues, to detect critical events such as strokes or falls, and to automatically monitor human activities for health condition inference and anomaly detection. This paper investigates two types of sensing technologies proposed for assisted living: wearable and radar sensors. First, different feature selection methods are validated and compared in terms of accuracy and computational load. Then, information fusion is applied to enhance activity classification accuracy by combining the two sensors. Improvements in classification accuracy of approximately 12% using feature-level fusion are achieved with both Support Vector Machine and K-Nearest Neighbour classifiers. Decision-level fusion schemes are also investigated, yielding classification accuracies in the order of 97-98%.
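Feature-level fusion of this kind can be illustrated with a minimal, self-contained sketch: two synthetic feature sets stand in for the radar and wearable features, and the fused vectors are fed to SVM and KNN classifiers. The data, dimensions, and classifier settings here are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative feature-level fusion: concatenate per-sample feature vectors
# from two sensors, then train SVM and KNN classifiers (synthetic data).
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 300
labels = rng.integers(0, 3, n)                # three hypothetical activity classes
radar_feats = labels[:, None] + rng.normal(0.0, 1.0, (n, 8))  # stand-in radar features
wear_feats = labels[:, None] + rng.normal(0.0, 1.0, (n, 6))   # stand-in wearable features

# Feature-level fusion: stack the two feature vectors side by side before training
fused = np.hstack([radar_feats, wear_feats])

X_tr, X_te, y_tr, y_te = train_test_split(fused, labels, random_state=0)
svm_acc = SVC().fit(X_tr, y_tr).score(X_te, y_te)
knn_acc = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr).score(X_te, y_te)
```

Because both sensors' features carry complementary class information, the fused vector typically separates the classes better than either sensor alone, which is the effect the paper quantifies.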
This article presents radar signal processing for sensing in the context of assisted living, covered through three example applications: human activity recognition for activities of daily living, respiratory disorder and sleep stage classification. The common challenge of classification is discussed within a framework of measurement/pre-processing, feature extraction, and classification algorithms for supervised learning. Then, the specific challenges of the three applications from a signal processing standpoint are detailed through their specific data processing and ad hoc classification strategies, focusing on recent trends in activity recognition (multi-domain, multi-modal, and fusion approaches) and healthcare applications based on vital signs (super-resolution techniques), and commenting on outstanding challenges. To conclude, the paper explores the challenge of real-time implementation of signal processing and classification algorithms.
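As a hedged illustration of the measurement/pre-processing stage common to such applications, the sketch below extracts a micro-Doppler signature from a simulated slow-time radar signal with a short-time Fourier transform; the simulated signal and STFT parameters are assumptions chosen for illustration, not values from the article.

```python
# Simulated slow-time radar return: a torso Doppler tone plus a sinusoidal
# limb micro-Doppler modulation, turned into a spectrogram with an STFT.
import numpy as np
from scipy.signal import stft

fs = 1000.0                                   # slow-time sampling rate (Hz), assumed
t = np.arange(0.0, 2.0, 1.0 / fs)
signal = np.exp(1j * 2 * np.pi * (60.0 * t + 20.0 * np.sin(2 * np.pi * 1.5 * t)))

# Two-sided STFT since the simulated return is complex-valued
f, seg_t, Z = stft(signal, fs=fs, nperseg=128, return_onesided=False)
spectrogram = np.abs(Z) ** 2                  # micro-Doppler signature (frequency x time)
```

Each spectrogram column is the short-term Doppler spectrum of one time segment; the oscillating limb term traces the characteristic micro-Doppler pattern that downstream feature extraction and classifiers operate on.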
This paper presents a framework based on a multi-layer bidirectional Long Short-Term Memory (bi-LSTM) network for multimodal sensor fusion to sense and classify patterns of daily activities and high-risk events such as falls. The data collected in this work are continuous activity streams from an FMCW radar and three wearable inertial sensors on the wrist, waist, and ankle. Each activity has a variable duration in the data stream, so that transitions between activities can happen at random times within the stream, without resorting to conventional fixed-duration snapshots. The proposed bi-LSTM implements soft feature fusion between the wearable sensor and radar data, as well as two robust hard-fusion methods using the confusion matrices of both sensors. A novel hybrid fusion scheme is then proposed to combine soft and hard fusion and push the classification performance to approximately 96% accuracy in identifying continuous activities and fall events. These fusion schemes implemented with the proposed bi-LSTM network are compared with the conventional sliding-window approach, and all are validated with a realistic leave-one-participant-out (L1PO) method (i.e. testing on subjects unknown to the classifier). The developed hybrid-fusion approach stabilizes the classification performance across participants, reducing accuracy variance by up to 18.1% and increasing the minimum, worst-case accuracy by up to 16.2%.
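The hard-fusion idea of weighting each sensor's decisions by its validation confusion matrix can be sketched as follows. The selection rule used here (per-class precision as a reliability weight) and the example matrices are illustrative assumptions, not the paper's exact scheme.

```python
# Sketch of confusion-matrix-based hard fusion between two sensors.
import numpy as np

def fuse_decisions(pred_a, pred_b, cm_a, cm_b):
    """Per sample, keep the prediction from the sensor whose validation
    confusion matrix shows the higher precision for its predicted class."""
    prec_a = np.diag(cm_a) / np.clip(cm_a.sum(axis=0), 1, None)
    prec_b = np.diag(cm_b) / np.clip(cm_b.sum(axis=0), 1, None)
    return np.where(prec_a[pred_a] >= prec_b[pred_b], pred_a, pred_b)

# Hypothetical validation confusion matrices (rows: true class, cols: predicted)
cm_radar = np.array([[40, 5], [10, 45]])
cm_wearable = np.array([[30, 15], [2, 53]])

fused_pred = fuse_decisions(np.array([0, 1, 0]), np.array([1, 1, 1]),
                            cm_radar, cm_wearable)
```

When the two sensors disagree, the sensor that was historically more reliable for its claimed class wins, which is the intuition behind using confusion matrices as fusion weights.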
Recognition of human movements with radar for ambient activity monitoring is a well-developed area of research that still presents outstanding challenges. In real environments, activities and movements are performed with seamless motion, with continuous transitions between activities of different durations and a large range of dynamic motions, in contrast with the discrete activities of fixed time length typically analysed in the literature. This paper proposes a novel approach based on recurrent LSTM and Bi-LSTM network architectures for continuous activity monitoring and classification. This approach uses radar data in the form of a continuous temporal sequence of micro-Doppler or range-time information, unlike other conventional approaches based on convolutional networks that interpret the radar data as images. Experimental radar data involving 15 participants and different sequences of 6 actions are used to validate the proposed approach. It is demonstrated that using Doppler-domain data together with the Bi-LSTM network and an optimal learning rate can achieve over 90% mean accuracy, whereas range-domain data only achieved approximately 76%. The details of the network architectures, insights into their behaviour as a function of key hyper-parameters such as the learning rate, and a discussion of their performance are provided in the paper.
Radar-based human motion recognition is crucial for many applications such as surveillance, search and rescue operations, smart homes, and assisted living. Continuous human motion recognition in a real living environment, i.e. classification of a sequence of activities transitioning one into another rather than of individual activities, is necessary for practical deployment. In this paper, a novel Dynamic Range-Doppler Trajectory (DRDT) method based on a frequency-modulated continuous-wave (FMCW) radar system is proposed to recognize continuous human motions under various conditions emulating a real living environment. This method can separate continuous motions and process them as single events. First, range-Doppler frames consisting of a series of range-Doppler maps are obtained from the backscattered signals. Next, the DRDT is extracted from these frames to monitor human motions in the time, range and Doppler domains in real time. Then, a peak search method is applied to locate and separate each human motion from the DRDT map. Finally, range, Doppler, radar cross-section (RCS) and dispersion features are extracted and combined in a multi-domain fusion approach as inputs to a machine learning classifier. This achieves accurate and robust recognition even under various conditions of distance, view angle, direction and individual diversity. Extensive experiments have been conducted to show its feasibility and superiority, obtaining an average accuracy of 91.9% on continuous classification. Index Terms: Continuous human motion recognition, DRDT method, fusion of multi-domain features, FMCW radar, machine learning.
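The peak-search step for separating continuous motions can be illustrated on a synthetic one-dimensional Doppler-energy trajectory. The paper derives its trajectory from range-Doppler frames; the signal shape, height threshold, and minimum spacing below are illustrative assumptions only.

```python
# Toy peak search: locate activity bursts in a Doppler-energy trajectory so
# each motion can then be processed as a single event.
import numpy as np
from scipy.signal import find_peaks

t = np.linspace(0.0, 30.0, 600)               # 30 s of frames
rng = np.random.default_rng(1)
# Three synthetic activity bursts plus low-level noise
energy = (np.exp(-(t - 5.0) ** 2) + np.exp(-(t - 15.0) ** 2 / 2.0)
          + np.exp(-(t - 25.0) ** 2) + 0.02 * rng.normal(size=t.size))

# Peaks must exceed a height threshold and be separated by a minimum distance
peaks, _ = find_peaks(energy, height=0.5, distance=100)
event_times = t[peaks]                        # candidate motion centres (seconds)
```

Segmenting the stream around these peaks turns the continuous recording into per-motion events, from which the range, Doppler, RCS and dispersion features can then be extracted.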
Significant research exists on the use of wearable sensors in the context of assisted living for activity recognition and fall detection, whereas radar sensors have only recently been studied in this domain. This paper addresses the performance limitations of individual sensors, especially for the classification of similar activities, by implementing information fusion of features extracted from experimental data collected by different sensors, namely a tri-axial accelerometer, a micro-Doppler radar, and a depth camera. Preliminary results confirm that combining information from heterogeneous sensors improves the overall performance of the system. The classification accuracy attained by means of this fusion approach improves by 11.2% compared to radar-only use, and by 16.9% compared to the accelerometer alone. Furthermore, adding features extracted from an RGB-D Kinect sensor increases the overall classification accuracy to 91.3%.
Radar for healthcare: recognising human activities and monitoring vital signs

Radar is typically associated with defence and military applications, such as the detection and monitoring of ship and aircraft traffic in certain areas. Many of us have seen, for example, the antennas near airport runways while travelling, rotating to scan the surrounding space and detect aircraft approaching or leaving. However, in recent years radar has started to gain significant interest in many fields beyond defence or air traffic control, indeed opening "new frontiers in radar", as the title of our special collection of articles mentions. Emerging applications of radar sensing include, but are not limited to, automotive radar (radar on vehicles to help them navigate around obstacles and other vehicles), human gesture identification (radar to identify the complex gestures performed by human users to interact with smart objects without tapping screens or pushing buttons), and the healthcare domain (radar to estimate vital signs such as respiration and heartbeat, and to monitor our level of activity at home). Radar is thus ceasing to be of interest only to a niche community of researchers and users in the defence sector, and is becoming a relevant subject for a wide audience of students in electronic engineering and computer science, researchers and academics, entrepreneurs and policy makers. Radar sensing intersects with many skills and disciplines, from the manufacturing of chips and components operating at the desired frequency to electromagnetic wave propagation, from manufacturing and integration on printed circuit boards (PCBs) to power management, and from radar-specific signal processing to machine learning algorithms applied to radar data.
For this reason, it is very likely that engineering professionals will have to deal with some aspects of radar sensing as part of the design and development of a larger system, be that a smart vehicle, a mobile phone, a tablet, or a suite of sensors for new smart homes. In this article, we focus on the healthcare applications of radar systems and radar sensing, which are perhaps among the most innovative and most different from the traditional, defence-oriented applications commonly associated with radar.

New healthcare needs and provision

The adoption of radar sensing and other technologies in the healthcare domain is related to the new needs in care and welfare provision arising from the rapidly aging population worldwide. Estimates from the World Health Organisation and United Nations analyses report that 30% of the world population will be over 65 by 2050, and in the UK the Office for National Statistics expects the proportion of people over 85 to double over the next 20 years. With aging, the incidence of multiple chronic health conditions (or "multimorbidity") and the likelihood of critical, life-threatening events such as strokes or falls increase. Statistics from the UK charity Age UK show, for example, that "falls and fractures in people ...