Recently, the recognition of spontaneous facial micro-expressions has attracted much attention owing to its various real-world applications. Plenty of handcrafted or learned features have been fed to a variety of classifiers and have achieved promising performance for recognizing micro-expressions. However, micro-expression recognition remains challenging due to the subtle spatiotemporal changes involved. To exploit the merits of deep learning, we propose a novel micro-expression recognition approach based on deep recurrent convolutional networks, which captures the spatiotemporal deformations of a micro-expression sequence. Specifically, the proposed deep model consists of several recurrent convolutional layers for extracting visual features and a classification layer for recognition. It is optimized in an end-to-end manner and obviates manual feature design. To handle sequential data, we explore two ways of extending the connectivity of convolutional networks across the temporal domain, in which the spatiotemporal deformations are modeled in terms of facial appearance and facial geometry separately. Besides, to overcome the shortcomings of limited and imbalanced training samples, temporal data augmentation strategies and a balanced loss are jointly applied to our deep network. Experiments on three spontaneous micro-expression datasets verify the effectiveness of the proposed approach compared to state-of-the-art methods.
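The abstract mentions a balanced loss for imbalanced training samples without specifying its form. A minimal sketch, assuming a common choice: cross-entropy weighted by inverse class frequency (the function name and weighting scheme are illustrative, not taken from the paper).

```python
import numpy as np

def balanced_cross_entropy(probs, labels, n_classes):
    """Cross-entropy with inverse-frequency class weights.

    Illustrative sketch only: each sample's loss is scaled by the
    inverse frequency of its class, so rare micro-expression classes
    contribute as much to the gradient as frequent ones.
    probs: (n_samples, n_classes) predicted probabilities
    labels: (n_samples,) integer class labels
    """
    counts = np.bincount(labels, minlength=n_classes).astype(float)
    # Normalized so that a perfectly balanced set yields weight 1.0 per class.
    weights = counts.sum() / (n_classes * np.maximum(counts, 1.0))
    sample_w = weights[labels]
    nll = -np.log(probs[np.arange(len(labels)), labels] + 1e-12)
    return float(np.mean(sample_w * nll))
```

With this normalization, a balanced dataset reduces to ordinary mean cross-entropy, while minority-class samples are up-weighted proportionally to their rarity.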
Existing enhancement methods are empirically expected to help the high-level end computer vision task; however, that is observed not always to be the case in practice. We focus on object and face detection under poor visibility caused by bad weather (haze, rain) and low-light conditions. To provide a more thorough examination and fair comparison, we introduce three benchmark sets collected in real-world hazy, rainy, and low-light conditions, respectively, with annotated objects/faces. We launched the UG2+ Challenge Track 2 competition at IEEE CVPR 2019, aiming to evoke a comprehensive discussion and exploration of whether and how low-level vision techniques can benefit high-level automatic visual recognition in various scenarios. To the best of our knowledge, this is the first and currently largest effort of its kind. Baseline results obtained by cascading existing enhancement and detection models are reported, indicating the highly challenging nature of our new data as well as the large room for further technical innovation. Thanks to large participation from the research community, we are able to analyze representative team solutions, striving to better identify the strengths and limitations of existing mindsets as well as future directions.
Index Terms: poor visibility environment, object detection, face detection, haze, rain, low-light conditions.
Popular microblogging services have attracted much attention around the world recently. With the tremendous number of tweets published each day, social event detection is becoming one of the most challenging research topics, especially for geographical social events. This paper proposes a novel geographical social event detection approach that mines geographical temporal patterns and analyzes the content of tweets. For the tweets published by users in a geographical area at each time unit, we first estimate the area's geographical temporal pattern based on the alternation regularity of its tweets. We then discover unusual geographical areas, i.e., those with more frequent alternation of tweet counts, and apply an adaptive K-means clustering algorithm to the tweets published in those areas. Finally, a geographical social event is detected from the number of tweets in a cluster. We implement and validate our approach on realistic data collected from real-world social media websites. Experimental results show that our method detects geographical social events with better performance than traditional methods. In addition, our method can effectively produce vivid demonstrations of geographical social events.
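The abstract names an adaptive K-means step without detailing how k is chosen. A minimal sketch under an assumed adaptation rule: grow k until every tweet location lies within a fixed radius of its cluster center (function names, the stopping criterion, and parameters are illustrative, not from the paper).

```python
import numpy as np

def kmeans(X, k, iters=50, seed=0):
    """Plain Lloyd's algorithm on 2-D points (e.g., tweet coordinates)."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)].astype(float)
    for _ in range(iters):
        # Distance of every point to every center, then nearest assignment.
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        assign = d.argmin(axis=1)
        for j in range(k):
            if np.any(assign == j):  # skip empty clusters
                centers[j] = X[assign == j].mean(axis=0)
    return assign, centers

def adaptive_kmeans(X, max_k=10, radius=1.0):
    """Increase k until every point is within `radius` of its center
    (an assumed adaptation criterion for illustration)."""
    for k in range(1, max_k + 1):
        assign, centers = kmeans(X, k)
        if np.linalg.norm(X - centers[assign], axis=1).max() <= radius:
            break
    return assign, centers
```

An event could then be flagged whenever a cluster's tweet count exceeds a threshold, matching the detection rule the abstract describes.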
Recently, automatic 3D caricature generation has attracted much attention from both the research community and the game industry. Machine learning has been proven effective for the automatic generation of caricatures. However, the lack of 3D caricature samples makes it challenging to train a good model. This paper addresses the problem in two steps. First, the training set is enlarged by reconstructing 3D caricatures: we reconstruct 3D caricatures from 2D caricature samples with a Principal Component Analysis (PCA)-based method. Second, a regression model between 2D real faces and the enlarged set of 3D caricatures is learned by the semi-supervised manifold regularization (MR) method. We then predict 3D caricatures for 2D real faces with the learned model. Experiments show that our approach synthesizes 3D caricatures more effectively than traditional methods. Moreover, our system has been applied successfully in a massive multi-user educational game to provide human-like avatars.
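The PCA-based reconstruction step can be sketched generically: fit a linear shape basis on flattened 3D vertex arrays, then rebuild any shape from its projection coefficients. This is a minimal sketch of standard PCA shape modeling, not the paper's exact pipeline; function names and dimensions are illustrative.

```python
import numpy as np

def fit_pca(shapes, n_components):
    """Fit a PCA shape basis.

    shapes: (n_samples, n_verts * 3) flattened 3D vertex coordinates.
    Returns the mean shape and an orthonormal basis (n_components, dim).
    """
    mean = shapes.mean(axis=0)
    # SVD of the centered data; rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(shapes - mean, full_matrices=False)
    return mean, Vt[:n_components]

def reconstruct(shape, mean, basis):
    """Project one flattened shape onto the basis and rebuild it."""
    coeffs = basis @ (shape - mean)
    return mean + basis.T @ coeffs
```

In the spirit of the abstract, new 3D caricatures for training could then be synthesized by sampling or perturbing the low-dimensional coefficients rather than editing raw vertices.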