Physiological signals, including heart rate (HR), heart rate variability (HRV), and respiratory frequency (RF), are important indicators of health and are routinely measured in clinical examinations. Traditional physiological measurement relies on contact sensors, which can be inconvenient or cause discomfort during long-term monitoring. Recently, studies have explored remote HR measurement from facial videos, and several methods have been proposed. However, previous methods cannot be fairly compared, since most used private, self-collected small datasets and no public benchmark database has existed for evaluation. Moreover, no study has yet validated such methods for clinical applications, e.g., diagnosing cardiac arrhythmias or disease, which could be one major goal of this technology. In this paper, we introduce the Oulu Bio-Face (OBF) database as a benchmark to fill this gap. The OBF database includes a large number of facial videos with simultaneously recorded reference physiological signals. The data were recorded from both healthy subjects and patients with atrial fibrillation (AF), the most common sustained cardiac arrhythmia encountered in clinical practice. The accuracy of HR, HRV, and RF measured from OBF videos is provided as a baseline for future evaluation. We also demonstrate that video-extracted HRV features can achieve promising performance for AF detection, which has not been studied before. From a wider outlook, this remote technology may enable convenient self-examination in mobile conditions for earlier diagnosis of arrhythmia.
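The abstract above does not specify how HR and HRV are derived from the remotely measured pulse signal, but a standard pipeline detects pulse peaks and computes statistics over inter-beat intervals. The sketch below is illustrative only (the function name and the naive peak detector are assumptions, not the paper's method); it computes mean HR and RMSSD, a common time-domain HRV feature of the kind usable for AF detection:

```python
import numpy as np

def hrv_features(pulse, fs):
    """Toy HR/HRV extraction from a pulse signal (e.g. rPPG).

    pulse: 1-D pulse waveform; fs: sampling rate in Hz.
    Returns mean HR (bpm) and RMSSD (ms). Real systems would use a
    robust peak detector and artifact rejection; this is a sketch.
    """
    # Naive peak detection: strict local maxima above the signal mean.
    peaks = [i for i in range(1, len(pulse) - 1)
             if pulse[i] > pulse[i - 1] and pulse[i] > pulse[i + 1]
             and pulse[i] > np.mean(pulse)]
    ibi = np.diff(peaks) / fs                 # inter-beat intervals (s)
    hr = 60.0 / ibi.mean()                    # mean heart rate (bpm)
    rmssd = np.sqrt(np.mean(np.diff(ibi) ** 2)) * 1000.0  # RMSSD (ms)
    return hr, rmssd
```

AF manifests as irregular inter-beat intervals, so interval-dispersion features such as RMSSD are plausible discriminators between healthy and AF recordings.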
Micro-expression recognition (MER) has attracted much attention due to its various practical applications, particularly in clinical diagnosis and interrogation. In this paper, we propose a three-stream convolutional neural network (TSCNN) to recognize MEs by learning ME-discriminative features from three key frames of ME videos. We design dynamic-temporal, static-spatial, and local-spatial stream modules for the TSCNN that respectively learn and integrate temporal cues, whole-face cues, and local facial-region cues in ME videos for recognizing MEs. In addition, to allow the TSCNN to recognize MEs without using the index values of apex frames, we design a reliable apex frame detection algorithm. Extensive experiments are conducted on five public ME databases: CASME II, SMIC-HS, SAMM, CAS(ME)², and CASME. The proposed TSCNN achieves more promising recognition results than many other methods.
Index Terms: Micro-expression recognition, convolutional neural networks, apex frame location, spatiotemporal information.
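The abstract mentions an apex frame detection algorithm but does not describe it. A common baseline heuristic (an assumption here, not the paper's algorithm) is to take the apex as the frame whose appearance deviates most from the onset frame:

```python
import numpy as np

def detect_apex(frames):
    """Heuristic apex frame detection for a micro-expression clip.

    frames: array of shape (T, H, W), grayscale frames, with
    frames[0] as the onset. The apex index is taken as the frame
    with the largest mean absolute intensity difference from the
    onset frame. This is a simple sketch, not the TSCNN algorithm.
    """
    onset = frames[0].astype(np.float64)
    diffs = [np.abs(f.astype(np.float64) - onset).mean() for f in frames]
    return int(np.argmax(diffs))
```

In practice, optical-flow magnitude or feature-space distances are often used instead of raw pixel differences, since micro-expression motion is subtle.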
Recently, micro-expression recognition has attracted considerable attention from researchers due to its potential value in many practical applications, e.g., lie detection. In this paper, we investigate an interesting and challenging problem in micro-expression recognition, namely cross-database micro-expression recognition, in which the training and testing samples come from different micro-expression databases. Under this setting, the consistent feature distribution between training and testing samples assumed in conventional micro-expression recognition is seriously violated, and hence the performance of most current well-performing methods may drop sharply. To overcome this, we propose a simple yet effective framework called Domain Regeneration (DR). The DR framework learns a domain regenerator that regenerates the micro-expression samples from the source and target databases so that they follow the same or similar feature distributions. We can then use a classifier learned on the labeled source samples to predict the labels of the unlabeled target samples. To evaluate the proposed DR framework, we conduct extensive cross-database micro-expression recognition experiments based on the SMIC and CASME II databases. Experimental results show that, compared with recent state-of-the-art cross-database emotion recognition methods, the proposed DR framework achieves more promising performance.
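The DR framework aims to make source and target feature distributions match after regeneration. One standard way to quantify the residual mismatch that such a framework seeks to minimize is the Maximum Mean Discrepancy (MMD); the minimal linear-kernel sketch below is illustrative and not the DR objective itself:

```python
import numpy as np

def mmd_linear(Xs, Xt):
    """Linear-kernel Maximum Mean Discrepancy between two feature sets.

    Xs: (n, d) source features; Xt: (m, d) target features.
    A small value means the two sets have similar mean embeddings,
    i.e. the kind of distribution alignment cross-database methods
    such as DR pursue.
    """
    delta = Xs.mean(axis=0) - Xt.mean(axis=0)  # difference of mean embeddings
    return float(delta @ delta)                # squared norm = linear MMD^2
```

For example, features drawn from a mean-shifted target database yield a much larger MMD than features drawn from the same distribution as the source, which is why the metric is a natural diagnostic for cross-database settings.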