Facial expression is a temporally dynamic event that can be decomposed into a set of muscle motions occurring in different facial regions over various time intervals. For dynamic expression recognition, two key issues must be taken into account: temporal alignment and semantics-aware dynamic representation. In this paper, we attempt to solve both problems via manifold modeling of videos, based on a novel mid-level representation, i.e., the expressionlet. Specifically, our method contains three key components: 1) each expression video clip is modeled as a spatiotemporal manifold (STM) formed by dense low-level features; 2) a Universal Manifold Model (UMM) is learned over all low-level features and represented as a set of local ST modes that statistically unify all the STMs; 3) the local modes on each STM are instantiated by fitting to the UMM, and the corresponding expressionlet is constructed by modeling the variations in each local ST mode. With the above strategy, expression videos are naturally aligned both spatially and temporally. To enhance discriminative power, the expressionlet-based STM representation is further processed with discriminant embedding. Our method is evaluated on four public expression databases: CK+, MMI, Oulu-CASIA, and AFEW. In all cases, it achieves results better than the known state-of-the-art.
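As a rough illustration of the pipeline described above, the sketch below stands in simple k-means centroids for the UMM's local ST modes and per-mode covariance matrices for the expressionlets. The feature dimensionality, mode count, and clustering choice are all illustrative assumptions for this sketch, not the settings of the actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_umm(features, n_modes=4, n_iter=20):
    """Toy stand-in for the Universal Manifold Model: k-means over the
    pooled low-level features; each centroid plays the role of one
    local ST mode shared by all STMs."""
    centers = features[rng.choice(len(features), n_modes, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(
            ((features[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for k in range(n_modes):
            if np.any(labels == k):
                centers[k] = features[labels == k].mean(0)
    return centers

def expressionlets(video_feats, centers):
    """Instantiate each mode on one video's STM: gather the video's
    features nearest to that mode and model their variation with a
    covariance matrix (one expressionlet per mode)."""
    labels = np.argmin(
        ((video_feats[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
    dim = video_feats.shape[1]
    lets = []
    for k in range(len(centers)):
        pts = video_feats[labels == k]
        if len(pts) < 2:  # mode absent or degenerate in this clip
            pts = centers[k][None].repeat(2, axis=0)
        lets.append(np.cov(pts.T) + 1e-6 * np.eye(dim))
    return np.stack(lets)

# Two synthetic "videos", each a bag of 8-D dense spatiotemporal features.
videos = [rng.normal(size=(200, 8)), rng.normal(size=(150, 8))]
umm = fit_umm(np.vstack(videos), n_modes=4)
reps = [expressionlets(v, umm) for v in videos]
print(reps[0].shape)  # (4, 8, 8): one covariance expressionlet per mode
```

Because every clip is described against the same shared modes, the resulting representations are aligned across videos by construction, which is the alignment property the abstract refers to.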
In this paper, we present the method behind our submission to the Emotion Recognition in the Wild Challenge (EmotiW 2014). The challenge is to automatically classify the emotions acted by human subjects in video clips recorded in real-world environments. In our method, each video clip is represented by three types of image set models (i.e., linear subspace, covariance matrix, and Gaussian distribution), each of which can be viewed as a point residing on some Riemannian manifold. Different Riemannian kernels are then employed on the corresponding set models for similarity/distance measurement. For classification, three types of classifiers, i.e., kernel SVM, logistic regression, and partial least squares, are investigated for comparison. Finally, an optimal fusion of classifiers learned from different kernels and different modalities (video and audio) is conducted at the decision level to further boost performance. We perform an extensive evaluation on the challenge data (including the validation set and the blind test set) and assess the effects of the different strategies in our pipeline. The final recognition accuracy reached 50.4% on the test set, a significant gain of 16.7% over the challenge baseline of 33.7%.
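To make the set-model idea concrete, here is a minimal sketch of one of the three representations: a clip's frame features summarized as a covariance matrix (a point on the SPD manifold), compared with a log-Euclidean RBF kernel. The feature dimension, bandwidth, and regularization are illustrative assumptions; the actual submission may use different kernels and parameters.

```python
import numpy as np

def cov_model(frames, eps=1e-6):
    """Image-set model: covariance matrix of per-frame feature vectors,
    regularized so it is strictly positive definite (SPD)."""
    return np.cov(frames.T) + eps * np.eye(frames.shape[1])

def spd_log(C):
    """Matrix logarithm of an SPD matrix via its eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def log_euclidean_kernel(C1, C2, sigma=1.0):
    """Log-Euclidean RBF kernel between two SPD set models: one of the
    Riemannian kernels usable with kernel SVM, logistic regression, or
    partial least squares."""
    d2 = np.sum((spd_log(C1) - spd_log(C2)) ** 2)  # squared LE distance
    return np.exp(-d2 / (2 * sigma ** 2))

rng = np.random.default_rng(0)
clip_a = rng.normal(size=(40, 5))        # 40 frames, 5-D features each
clip_b = rng.normal(size=(60, 5)) + 1.0  # a second, different clip
Ca, Cb = cov_model(clip_a), cov_model(clip_b)
print(log_euclidean_kernel(Ca, Ca))      # 1.0: a clip matches itself
print(log_euclidean_kernel(Ca, Cb))      # in (0, 1]
```

Mapping each SPD matrix through the matrix logarithm flattens the manifold into a vector space, which is what makes a standard RBF kernel (and hence standard kernel classifiers) applicable to these set models.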
Background Studies on the prospective association of body composition with mortality in US general populations are limited. We aimed to examine this association by utilizing data from the National Health and Nutrition Examination Survey (NHANES), a representative sample of US adults, linked with data from the National Death Index. Methods We analysed data from NHANES 1988-1994 and 1999-2014 (…818 participants [50.6% female, baseline mean age: 45.0 years (SE, 0.2)]). Predicted fat mass and lean mass were calculated using the validated sex-specific anthropometric prediction equations developed for NHANES, based on individual age, race, height, weight, and waist circumference. Body composition and other covariates were measured at only one time point. Multivariable Cox regression was used to investigate the associations of predicted fat mass and lean mass with overall and cause-specific mortality, adjusting for potential confounders. Interactions between age and body composition on mortality were examined with likelihood ratio testing. Results Mean predicted fat mass was 24.1 kg (95% confidence interval [CI]: 23.9-24.3) for male participants and 29.9 kg (95% CI: 29.6-30.1) for female participants, while mean predicted lean mass was 59.3 kg (95% CI: 59.1-59.5) for male participants and 41.7 kg (95% CI: 41.5-41.8) for female participants. During a median period of 9.7 years from the survey, 10 408 deaths occurred. When predicted fat and lean mass were both included in the model, predicted fat mass showed a U-shaped association with all-cause mortality, with significantly higher risk at two ends:
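The prediction-equation step can be sketched as below, but note that every coefficient here is a placeholder invented for illustration; the actual analysis relies on the published, validated sex-specific equations, whose coefficients are not reproduced in this abstract.

```python
# Hypothetical sketch of a sex-specific anthropometric prediction
# equation of the kind the abstract describes. All coefficients are
# PLACEHOLDERS for illustration only, NOT the validated NHANES values,
# which must be taken from the published equations.
def predicted_fat_mass(sex, age, height_cm, weight_kg, waist_cm, race_term=0.0):
    if sex == "male":
        b0, b_age, b_ht, b_wt, b_wc = -20.0, 0.02, -0.15, 0.40, 0.40  # placeholders
    else:
        b0, b_age, b_ht, b_wt, b_wc = -15.0, 0.02, -0.15, 0.45, 0.35  # placeholders
    return (b0 + b_age * age + b_ht * height_cm
            + b_wt * weight_kg + b_wc * waist_cm + race_term)

# Predicted lean mass would come from an analogous equation; both
# predictors are then entered together in a multivariable Cox model,
# as described in the abstract.
print(predicted_fat_mass("male", age=45, height_cm=175,
                         weight_kg=80, waist_cm=90))  # fat mass in kg
```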
The association between carbohydrate intake and the risk of hypertension remains uncertain. We aimed to evaluate the prospective relations of the amount and type of carbohydrate intake with new-onset hypertension. A total of 12 177 adults from the China Health and Nutrition Survey who were free of hypertension at baseline were included. Dietary intake was measured by 3 consecutive 24-hour dietary recalls combined with a household food inventory. The study outcome was new-onset hypertension, defined as systolic blood pressure ≥140 mm Hg, diastolic blood pressure ≥90 mm Hg, physician diagnosis, or antihypertensive treatment during follow-up. A total of 4269 subjects developed hypertension during 95 157 person-years of follow-up. Overall, there was a U-shaped association between the percentage of energy consumed from total carbohydrate (mean, 56.7%; SD, 10.7) and new-onset hypertension (P for nonlinearity <0.001), with the lowest risk observed at 50% to 55% carbohydrate intake. The increased risks were mainly found in those with lower intake of high-quality carbohydrate (mean, 6.4%; SD, 5.6) or higher intake of low-quality carbohydrate (mean, 47.0%; SD, 13.0). Moreover, there was an inverse association between the plant-based low-carbohydrate scores for low-quality carbohydrate and new-onset hypertension, whereas there was a U-shaped association between the animal-based low-carbohydrate scores for low-quality carbohydrate and new-onset hypertension (P for nonlinearity <0.001). In summary, both high and low percentages of dietary carbohydrate were associated with increased risk of new-onset hypertension, with minimal risk at 50% to 55% carbohydrate intake. Our findings support the intake of high-quality carbohydrate, and the substitution of plant-based products for low-quality carbohydrate, for the prevention of hypertension.
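As a quick sanity check, the cohort figures reported above (4269 events over 95 157 person-years) imply a crude incidence rate:

```python
# Crude incidence of new-onset hypertension implied by the reported
# cohort numbers: events divided by total person-years at risk.
events = 4269
person_years = 95_157
rate_per_1000_py = 1000 * events / person_years
print(round(rate_per_1000_py, 1))  # ≈ 44.9 cases per 1000 person-years
```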