Despite recent interest and advances in facial micro-expression research, there is still plenty of room for improvement in micro-expression recognition. Conventional feature extraction approaches for micro-expression video consider either the whole video sequence or a part of it for representation. However, with the high-speed video capture of micro-expressions (100-200 fps), are all frames necessary to provide a sufficiently meaningful representation? Is the luxury of data a bane to accurate recognition? A novel proposition is presented in this paper, whereby we utilize only two images per video, namely, the apex frame and the onset frame. The apex frame of a video contains the highest intensity of expression changes among all frames, while the onset frame is the natural choice of reference frame, bearing a neutral expression. A new feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF), is proposed to encode the essential expressiveness of the apex frame. We evaluated the proposed method on five micro-expression databases: CAS(ME)², CASME II, SMIC-HS, SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with the proposed technique achieving state-of-the-art F1-score recognition performance of 0.61 and 0.62 on the high-frame-rate CASME II and SMIC-HS databases, respectively.
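The weighting idea behind an oriented optical-flow descriptor can be illustrated with a minimal sketch: bin flow vectors by orientation and weight each vote by the vector's magnitude. This is a hypothetical, simplified illustration of the magnitude weighting in a Bi-WOOF-style histogram, not the authors' implementation (which also applies block-level weighting and operates on real flow fields).

```python
import math

def weighted_orientation_histogram(flow, bins=8):
    """Bin optical-flow vectors (dx, dy) by orientation, weighting each
    vote by the vector's magnitude (a simplified, hypothetical sketch of
    the magnitude weighting in a Bi-WOOF-style descriptor)."""
    hist = [0.0] * bins
    for dx, dy in flow:
        mag = math.hypot(dx, dy)
        if mag == 0:
            continue                                  # no motion, no vote
        angle = math.atan2(dy, dx) % (2 * math.pi)    # orientation in [0, 2*pi)
        b = min(int(angle / (2 * math.pi) * bins), bins - 1)
        hist[b] += mag                                # magnitude-weighted vote
    total = sum(hist)
    return [h / total for h in hist] if total else hist
```

Stronger motions thus dominate the histogram, which suits subtle micro-expressions where most of the flow field is near-zero noise.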
Facial micro-expression (ME) recognition has posed a huge challenge to researchers due to its subtlety of motion and the limited size of available databases. Recently, handcrafted techniques have achieved superior performance in micro-expression recognition, but at the cost of domain specificity and cumbersome parameter tuning. In this paper, we propose an Enriched Long-term Recurrent Convolutional Network (ELRCN) that first encodes each micro-expression frame into a feature vector through CNN module(s), then predicts the micro-expression by passing the feature vectors through a Long Short-term Memory (LSTM) module. The framework contains two different network variants: (1) channel-wise stacking of input data for spatial enrichment, and (2) feature-wise stacking of features for temporal enrichment. We demonstrate that the proposed approach is able to achieve reasonably good performance without data augmentation. In addition, we also present ablation studies conducted on the framework and visualizations of what the CNN "sees" when predicting the micro-expression classes.
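The two enrichment variants differ only in where the stacking happens. The toy sketch below, using plain nested lists as stand-ins for image maps and feature vectors, illustrates the two stacking directions; the names and data are hypothetical and not taken from the ELRCN code.

```python
def stack_channels(*maps):
    """Channel-wise (spatial) stacking: combine per-pixel input maps
    (e.g. a grey frame plus optical-flow components) into one
    multi-channel image fed to the CNN. Illustrative only."""
    h, w = len(maps[0]), len(maps[0][0])
    return [[[m[i][j] for m in maps] for j in range(w)] for i in range(h)]

def stack_features(*feature_vectors):
    """Feature-wise (temporal) stacking: concatenate feature vectors
    from several CNN streams into one long vector that becomes the
    LSTM input at each time step. Illustrative only."""
    out = []
    for v in feature_vectors:
        out.extend(v)
    return out
```

Channel-wise stacking enriches what the CNN sees at each pixel; feature-wise stacking enriches what the LSTM sees at each time step.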
The Internet of Things (IoT) has penetrated deeply into our lives, and the number of IoT devices per person is expected to increase substantially over the next few years. Due to the characteristics of IoT devices (i.e., low power and limited battery capacity), the usage of these devices in critical applications requires sophisticated security measures. Researchers from academia and industry now increasingly exploit the concept of blockchains to achieve security in IoT applications. The basic idea of the blockchain is that data generated by users or devices in the past are verified for correctness and cannot be tampered with once recorded on the blockchain. Even though the blockchain supports integrity and non-repudiation to some extent, the confidentiality and privacy of the data or the devices are not preserved: the content of the data can be seen by anyone in the network for verification and mining purposes. In order to address these privacy issues, we propose a new privacy-preserving blockchain architecture for IoT applications based on attribute-based encryption (ABE) techniques. Security, privacy, and numerical analyses are presented to validate the proposed model.
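The access-control idea behind ABE is that a ciphertext carries a policy over attributes, and only keys whose attributes satisfy the policy can decrypt. The toy check below illustrates the policy-satisfaction logic only; in real ABE this test is enforced cryptographically during decryption, not by an if-statement, and the policy encoding here is entirely hypothetical.

```python
def satisfies(policy, attrs):
    """Evaluate a nested AND/OR policy over a set of attribute strings.
    A stand-in for the attribute test that governs decryption in ABE
    (illustrative only; real ABE enforces this inside the cryptography)."""
    op, *terms = policy
    if op == "ATTR":
        return terms[0] in attrs           # leaf: single attribute present?
    if op == "AND":
        return all(satisfies(t, attrs) for t in terms)
    if op == "OR":
        return any(satisfies(t, attrs) for t in terms)
    raise ValueError(f"unknown policy node: {op}")
```

For example, a record encrypted under "doctor AND (cardiology OR icu)" would be readable by an ICU doctor's key but not by a key holding only the "doctor" attribute.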
Over the last few years, automatic facial micro-expression analysis has garnered increasing attention from experts across different disciplines because of its potential applications in various fields such as clinical diagnosis, forensic investigation and security systems. Advances in computer algorithms and video acquisition technology have rendered machine analysis of facial micro-expressions possible today, in contrast to decades ago when it was primarily the domain of psychiatrists and analysis was largely manual. Indeed, although the study of facial micro-expressions is a well-established field in psychology, it is still relatively new from the computational perspective, with many interesting problems. In this survey, we present a comprehensive review of state-of-the-art databases and methods for micro-expression spotting and recognition. Individual stages involved in the automation of these tasks are also described and reviewed at length. In addition, we deliberate on the challenges and future directions in this growing field of automatic facial micro-expression analysis.
Micro-expressions usually occur in high-stakes situations and may provide useful information in the field of behavioral psychology for better interpretation and analysis. Unfortunately, it is technically challenging to detect and recognize micro-expressions due to their brief duration and subtle facial distortions. The apex frame, the instant indicating the most expressive emotional state in a video, is effective for classifying the emotion in that particular frame. In this work, we present a novel method to spot the apex frame of a spontaneous micro-expression video sequence. A binary search approach is employed to locate the index of the frame in which the peak facial changes occur. Features from specific facial regions are extracted to better represent and describe the expression details. The facial regions are selected based on the action units and landmark coordinates of the subject, so these processes are fully automated. We consider three distinct feature descriptors to evaluate the reliability of the proposed approach. Improvements of at least 20% are achieved when compared to the baselines.
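The binary-search idea can be sketched as peak-finding over per-frame feature-difference scores. This is a simplified illustration assuming the score sequence is unimodal (rises to the apex, then falls); it is not the authors' exact divide-and-conquer rule, and the input scores here are hypothetical.

```python
def spot_apex(scores):
    """Binary search for the peak index of an assumed-unimodal sequence
    of per-frame feature-difference scores (e.g. distance of each frame's
    descriptor from the onset frame's). Simplified sketch."""
    lo, hi = 0, len(scores) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if scores[mid] < scores[mid + 1]:
            lo = mid + 1       # still rising: peak lies to the right
        else:
            hi = mid           # falling (or flat): peak is mid or left of it
    return lo
```

Each comparison halves the search range, so the apex of an n-frame clip is located with O(log n) feature evaluations instead of scoring every frame.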
Emerging cloud computing infrastructure replaces traditional outsourcing techniques and provides flexible services to clients at different locations via the Internet. This leads to the requirement for data classification to be performed by potentially untrusted servers in the cloud. Within this context, a classifier built by the server can be utilized by clients in order to classify their own data samples over the cloud. In this paper, we study a privacy-preserving (PP) data classification technique where the server is unable to learn any knowledge about clients' input data samples, while the server-side classifier is also kept secret from the clients during the classification process. More specifically, to the best of our knowledge, we propose the first known client-server data classification protocol using support vector machines. The proposed protocol performs PP classification for both two-class and multi-class problems. The protocol exploits properties of Paillier homomorphic encryption and secure two-party computation. At the core of our protocol lies an efficient, novel protocol for securely obtaining the sign of Paillier-encrypted numbers.
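The property the protocol exploits is Paillier's additive homomorphism: multiplying two ciphertexts modulo n² yields an encryption of the sum of the plaintexts, so the server can combine encrypted values without decrypting them. A minimal, deliberately insecure textbook sketch with tiny fixed primes (real deployments use primes of hundreds of digits, and this is not the paper's protocol, only the underlying primitive):

```python
import math
import random

def paillier_keygen(p=101, q=113):
    """Toy Paillier key generation with tiny fixed primes (insecure,
    purely to illustrate the additive homomorphism)."""
    n = p * q
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    g = n + 1                    # standard simple choice of generator
    mu = pow(lam, -1, n)         # valid because g = n + 1
    return (n, g), (lam, mu)

def encrypt(pk, m):
    """c = g^m * r^n mod n^2, with random r coprime to n."""
    n, g = pk
    n2 = n * n
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pk, sk, c):
    """m = L(c^lam mod n^2) * mu mod n, where L(x) = (x - 1) // n."""
    n, _ = pk
    lam, mu = sk
    n2 = n * n
    return ((pow(c, lam, n2) - 1) // n * mu) % n
```

Multiplying ciphertexts adds plaintexts: decrypting `encrypt(pk, 7) * encrypt(pk, 5) mod n²` recovers 12, which is the homomorphic step a secure sign-extraction protocol builds on.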