We propose a novel convolutional neural network approach to address the fine-grained recognition problem of multi-view dynamic facial action unit detection. We leverage recent gains in large-scale object recognition by formulating the task of predicting the presence or absence of a specific action unit in a still image of a human face as holistic classification. We then explore the design space of our approach by considering both shared and independent representations for separate action units, as well as different CNN architectures for combining color and motion information. We then move to the novel setup of the FERA 2017 Challenge, for which we propose a multi-view extension of our approach that operates by first predicting the viewpoint from which the video was taken, and then evaluating an ensemble of action unit detectors trained for that specific viewpoint. Our approach is holistic, efficient, and modular, since new action units can be easily included in the overall system. It significantly outperforms the baseline of the FERA 2017 Challenge, with an absolute improvement of 14% on the F1 metric, and compares favorably against the challenge winner. Source code is available at https://github.com/BCV-Uniandes/AUNets.
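The two-stage routing described above (classify the viewpoint, then run that viewpoint's ensemble of per-AU detectors) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the stand-in classifier and detectors are hypothetical placeholders for the trained CNNs, and the AU subset and threshold are assumptions for the example.

```python
import numpy as np

N_VIEWS = 9                           # number of camera viewpoints (assumed)
AU_LIST = [1, 4, 6, 12, 17]           # illustrative subset of action units

def predict_aus(frame, view_classifier, au_detectors, threshold=0.5):
    """Two-stage pipeline: 1) predict the viewpoint of the frame,
    2) run the per-AU detector ensemble trained for that viewpoint."""
    view = view_classifier(frame)                     # index in [0, N_VIEWS)
    detectors = au_detectors[view]                    # one detector per AU
    probs = {au: det(frame) for au, det in detectors.items()}
    return {au: p >= threshold for au, p in probs.items()}

# Toy stand-ins so the sketch runs end to end (real system: trained CNNs).
view_classifier = lambda frame: 0
au_detectors = {
    v: {au: (lambda f, a=au: 0.9 if a == 12 else 0.1) for au in AU_LIST}
    for v in range(N_VIEWS)
}

frame = np.zeros((224, 224, 3))       # dummy input frame
predictions = predict_aus(frame, view_classifier, au_detectors)
```

Because detection is factored per viewpoint and per AU, adding a new action unit only requires training and registering one more detector per view, which is the modularity the abstract highlights.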
Quantifying the stress field induced in a loaded piece is important in engineering, since it enables the characterization of mechanical behavior and of failures caused by stress. For this task, digital photoelasticity stands out for its ability to visually represent stress information through images with isochromatic fringe patterns. Unfortunately, demodulating such fringes remains a complicated process that, in some cases, depends on several acquisitions, e.g., pixel-by-pixel comparisons, dynamic load-application conditions, inconsistency corrections, user dependence, fringe unwrapping processes, etc. Given these drawbacks, and taking advantage of the powerful results reported for deep learning on related tasks such as fringe unwrapping, this paper develops a deep convolutional neural network for recovering the stress field wrapped into color fringe patterns acquired through digital photoelasticity studies. Our model relies on an untrained convolutional neural network to accurately demodulate the stress maps from a single photoelasticity image. We demonstrate that the proposed method faithfully recovers the stress field of complex fringe distributions on simulated images, with an average performance of 92.41% according to the SSIM metric. We then evaluated experimental cases of a disk and a ring under compression, achieving an average performance of 85% on the SSIM metric. These results are, on the one hand, in line with the trend in the optics community of tackling complicated problems through machine-learning strategies; on the other hand, they open a new perspective in digital photoelasticity toward demodulating the stress field for a wider range of fringe distributions from a single acquisition.
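The SSIM scores reported above compare the recovered stress map against the ground truth. As a rough illustration of the metric, here is a simplified, single-window SSIM in NumPy; note the standard metric (as in scikit-image's `structural_similarity`) averages SSIM over local sliding windows rather than computing one global value, so this sketch only shows the formula's structure.

```python
import numpy as np

def ssim_global(x, y, data_range=1.0):
    """Simplified global SSIM: one window spanning the whole image.
    Constants c1, c2 stabilize the ratio, as in the standard definition."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()          # covariance of the two images
    return ((2 * mx * my + c1) * (2 * cov + c2)) / (
        (mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

truth = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # toy "stress map"
score = ssim_global(truth, truth)                  # identical images → 1.0
```

A score of 1.0 indicates a perfect match, so the reported 92.41% (simulated) and 85% (experimental) averages mean the recovered stress fields are structurally close, though not identical, to the references.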
Extending photoelasticity studies to industrial applications, such as photoelastic sensors, requires overcoming limitations like the need for experts, experimental over-calibration, and the reduced number of feasible experiments. This paper introduces a computational hybrid technique toward this goal.