Hand gestures are a form of nonverbal communication used in several fields, such as communication between deaf-mute people, robot control, human–computer interaction (HCI), home automation and medical applications. Research on hand gestures has adopted many different techniques, including those based on instrumented sensor technology and computer vision. Moreover, hand gestures can be classified in several ways: as postures or gestures, as static or dynamic, or as a hybrid of the two. This paper reviews the literature on hand gesture techniques and introduces their merits and limitations under different circumstances. In addition, it tabulates the performance of these methods, focusing on computer vision techniques, and compares them in terms of their similarities and differences, the hand segmentation technique used, classification algorithms and their drawbacks, number and types of gestures, dataset used, detection range (distance) and type of camera used. This paper is a thorough general overview of hand gesture methods with a brief discussion of some possible applications.
Abstract: The objective of this study was to design a non-invasive system for the observation of respiratory rates and detection of apnoea using analysis of real-time image sequences captured in any given sleep position and under any light conditions (even in dark environments). A Microsoft Kinect sensor was used to visualize the variations in the thorax and abdomen caused by the respiratory rhythm. These variations were magnified, analyzed and detected at a distance of 2.5 m from the subject. A modified motion magnification system and a frame subtraction technique were used to identify breathing movements by detecting rapid-motion areas in the magnified frame sequences. The experimental results on a set of video data from five subjects (3 h for each subject) showed that our monitoring system can accurately measure respiratory rate and therefore detect apnoea in infants and young children. The proposed system is feasible, accurate, safe and of low computational complexity, making it an efficient alternative for non-contact home sleep monitoring systems and advancing health care applications.
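The frame-subtraction stage described above can be sketched in a few lines. This is a minimal illustration rather than the paper's implementation: the function names (`motion_mask`, `breathing_detected`) and the pixel and area thresholds are assumed values, and the frames are assumed to be already motion-magnified grayscale arrays.

```python
import numpy as np

def motion_mask(prev_frame, frame, thresh=8):
    """Mark pixels whose intensity changed by more than `thresh`
    between consecutive (already motion-magnified) grayscale frames."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > thresh

def breathing_detected(frames, min_motion_pixels=20):
    """Declare a breathing movement if any consecutive frame pair
    contains a sufficiently large rapid-motion area."""
    return any(
        np.count_nonzero(motion_mask(a, b)) >= min_motion_pixels
        for a, b in zip(frames, frames[1:])
    )
```

In practice the thresholds would be tuned to the camera, distance and magnification factor; a static scene yields no motion pixels, while a magnified chest movement produces a compact changed region in successive frames.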
In search and rescue operations, it is crucial to rapidly distinguish people who are alive from those who are not. With this information, emergency teams can prioritize their operations to save more lives. However, in some natural disasters people may be lying on the ground covered with dust, debris, or ashes, making them difficult to detect by video analysis that is tuned to human shapes. We present a novel method to estimate the locations of people from aerial video using image and signal processing designed to detect breathing movements. We have shown that this method can successfully detect clearly visible people as well as people fully occluded by debris. First, the aerial videos were stabilized using the key points of adjacent image frames. Next, the stabilized video was decomposed into tile videos, and the temporal frequency bands of interest were motion magnified while the other frequencies were suppressed. Image differencing and temporal filtering were performed on each tile video to detect potential breathing signals. Finally, the detected frequencies were remapped to the image frame, creating a life-signs map that indicates possible human locations. The proposed method was validated with both aerial and ground-recorded videos in a controlled environment. On this dataset, the results showed good reliability for aerial videos and no errors for ground-recorded videos, with average precision of 0.913 for aerial videos and 1.0 for ground-recorded videos.
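The tile-decomposition and frequency-detection steps above can be sketched as follows. This is a hedged illustration, not the authors' code: the tile size, breathing band (0.2–0.7 Hz) and SNR threshold are assumed values, and the input is assumed to be an already-stabilized grayscale frame stack.

```python
import numpy as np

def life_signs_map(frames, fps, tile=16, f_lo=0.2, f_hi=0.7, snr_thresh=5.0):
    """Split a stabilized frame stack of shape (T, H, W) into tiles,
    take each tile's mean-intensity time series, and mark tiles whose
    spectrum has a strong peak inside the breathing band."""
    stack = np.asarray(frames, dtype=np.float64)
    T, H, W = stack.shape
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    in_band = (freqs >= f_lo) & (freqs <= f_hi)
    hits = np.zeros((H // tile, W // tile), dtype=bool)
    for i in range(H // tile):
        for j in range(W // tile):
            sig = stack[:, i*tile:(i+1)*tile, j*tile:(j+1)*tile].mean(axis=(1, 2))
            spec = np.abs(np.fft.rfft(sig - sig.mean()))
            peak = spec[in_band].max()
            noise = spec[~in_band][1:].mean() + 1e-12  # skip the DC bin
            if peak / noise > snr_thresh:
                hits[i, j] = True  # candidate breathing location
    return hits
```

The boolean grid plays the role of the life-signs map: tiles flagged `True` are remapped to image coordinates as candidate human locations.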
Techniques for noncontact measurement of vital signs using camera imaging technologies have been attracting increasing attention. For noncontact physiological assessment, computer vision-based methods appear to be an advantageous approach that could be robust, hygienic, reliable, safe, cost effective and suitable for long-distance and long-term monitoring. In addition, video techniques allow measurements from multiple individuals opportunistically and simultaneously in groups. This paper aims to explore the progress of the technology from controlled clinical scenarios with fixed monitoring installations and controlled lighting towards uncontrolled environments, crowds and moving sensor platforms. We focus on the diversity of applications and scenarios being studied in this topic. From this review it emerges that automatic selection of multiple regions of interest (ROIs), removal of artefacts caused by illumination variations and motion, simultaneous multiple-person monitoring, long-distance detection, multi-camera fusion and accepted publicly available datasets are topics that still require research to enable the technology to mature into many real-world applications.
In the aftermath of a disaster, such as an earthquake, flood, or avalanche, ground search for survivors is usually hampered by unstable surfaces and difficult terrain. Drones now play an important role in these situations, allowing rescuers to locate survivors and allocate resources to saving those who can be helped. The aim of this study was to explore the utility of a drone equipped with a novel computer vision system for human life detection. The proposed system uses image sequences captured by a drone camera to remotely detect the cardiopulmonary motion caused by periodic chest movement of survivors. Results from eight human subjects and one mannequin in different poses show that motion detection on the body surface of survivors is likely to be useful for detecting life signs without any physical contact. The results presented in this study may lead to a new approach to life detection and remote life-sensing assessment of survivors.
Falls are the main source of injury for elderly patients with epilepsy and Parkinson's disease. Elderly people who carry battery-powered health monitoring systems can move unhindered from one place to another according to their activities, thus improving their quality of life. This paper aims to detect when an elderly individual falls and to provide an accurate location of the incident while the individual is moving in indoor environments such as houses, medical health care centers, and hospitals. Fall detection is determined based on a proposed sensor-based fall detection algorithm, whereas the localization of the elderly person is determined with an artificial neural network (ANN). In addition, the power consumption of the fall detection system (FDS) is minimized based on a data-driven algorithm. Results show that an elderly fall can be detected with accuracy levels of 100% and 92.5% for line-of-sight (LOS) and non-line-of-sight (NLOS) environments, respectively. In addition, elderly indoor localization error is improved, with a mean absolute error of 0.0094 and 0.0454 m for LOS and NLOS, respectively, after the application of the ANN optimization technique. Moreover, the battery life of the FDS is improved relative to conventional implementations due to reduced computational effort. The proposed FDS outperforms existing systems in terms of fall detection accuracy, localization errors, and power consumption.

… to detect emergency conditions and enable caregivers to respond efficiently. A fall is one of the key factors that can lead to injury and decreased quality of life, at times resulting in the death of elderly persons. People's rate of falling increases with age. Falls occur frequently in medical health care centers, hospitals, and houses, with approximately 30% of falls causing injury. Falls in hospitals occur in the rooms of patients (84%) and during transfer from one place to another (19%).
Furthermore, the majority of falls occur in areas adjacent to chairs and beds [2]. Most people who experience falls need special care in a nursing home or hospital, thereby restricting their life activities. The hazards of a fall, even a slight one, especially for the elderly, can be aggravated by chronic diseases such as osteoporosis, delirium, and dementia [3]. The degree of danger a fall poses to an aging person is frequently determined by the location of the fall, the time taken to detect it, and the duration of transfer and rescue services. Therefore, automatically detecting falls of elderly people, along with the location of the incident, is important so that medical rescue staff can be dispatched immediately and the family of the elderly person can be informed through a specific wireless network or mobile telephone. The development of microelectromechanical technologies allows the integration of different sensors, and wireless networks are commonly used. Wireless sensor networks (WSNs) comprise a number of tiny sensor nodes which are deployed ...
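As an illustration of how a sensor-based fall detector of this kind can work, a common baseline (not necessarily the algorithm proposed in the paper above) flags a fall when a large acceleration spike is followed by a period of near-stillness, corresponding to the person lying motionless; all function names and thresholds here are assumed values.

```python
import math

def detect_fall(samples, impact_g=2.5, still_g=0.3, still_window=10):
    """Flag a fall when an acceleration-magnitude spike (impact) is
    followed by `still_window` readings close to 1 g (near-stillness).
    `samples` is a sequence of (ax, ay, az) accelerometer readings in g."""
    mags = [math.sqrt(ax*ax + ay*ay + az*az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m >= impact_g:
            after = mags[i+1:i+1+still_window]
            if len(after) == still_window and all(abs(a - 1.0) <= still_g for a in after):
                return True
    return False
```

Ordinary walking produces magnitudes that stay well below the impact threshold, so only the spike-then-stillness pattern triggers an alert; a deployed system would tune the thresholds and window to the sensor's sampling rate.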
Background: Remote physiological measurement can be very useful for biomedical diagnostics and monitoring. This study presents an efficient method for remotely measuring heart rate and respiratory rate from video captured by a hovering unmanned aerial vehicle (UAV). The proposed method estimates heart rate and respiratory rate from video-photoplethysmography signals that are synchronous with cardiorespiratory activity. Methods: Since the PPG signal is highly affected by noise (illumination variations, the subject's motion and camera movement), we used advanced signal processing techniques, including complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) and canonical correlation analysis (CCA), to remove noise under these assumptions. Results: To evaluate the performance and effectiveness of the proposed method, a set of experiments was performed on 15 healthy volunteers in a front-facing position, involving motion from both the subject and the UAV, under different scenarios and lighting conditions. Conclusion: The experimental results demonstrated that the proposed system, with and without the magnification process, achieves robust and accurate readings that correlate significantly with a standard pulse oximeter and a Piezo respiratory belt. The squared correlation coefficient, root mean square error, and mean error rate yielded by the proposed method, with and without the magnification process, were also significantly better than those of state-of-the-art methods, including independent component analysis (ICA) and principal component analysis (PCA).
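The paper's full CEEMDAN-plus-CCA pipeline is involved; as a minimal sketch of the underlying idea only, the heart rate can be read off the cardiac-band spectral peak of a mean green-channel trace. The function name, the 0.7–3 Hz band, and the omission of any denoising are assumptions of this sketch, not the paper's method.

```python
import numpy as np

def estimate_heart_rate(green_trace, fps, f_lo=0.7, f_hi=3.0):
    """Estimate beats per minute from a mean green-channel PPG trace
    by locating the spectral peak inside the cardiac band."""
    sig = np.asarray(green_trace, dtype=np.float64)
    sig -= sig.mean()
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    spectrum[(freqs < f_lo) | (freqs > f_hi)] = 0.0  # suppress out-of-band noise
    return 60.0 * freqs[np.argmax(spectrum)]

# A clean synthetic 1.2 Hz pulse sampled at 30 fps for 30 s -> 72 bpm
fps = 30
t = np.arange(fps * 30) / fps
trace = 100 + 0.4 * np.sin(2 * np.pi * 1.2 * t)
print(round(estimate_heart_rate(trace, fps)))  # prints 72
```

On real drone video, illumination changes and platform motion put substantial energy inside the cardiac band, which is precisely why the paper resorts to decomposition and correlation-based denoising before spectral estimation.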