In this paper we present the results of our recent study comparing the emotion-recognition abilities of children diagnosed with high-functioning autism spectrum disorder (ASD) with those of typically developing (TD) children, using a humanoid robot, Zeno. We investigated the effect of incorporating gestures on the emotion-prediction accuracy of both groups of children. Although it is widely assumed that individuals with ASD suffer from a general emotion-recognition deficit [1], we found no significant impairment in overall emotion prediction. However, children with autism showed a specific deficit in correctly identifying Fear when compared with TD children. Furthermore, we found that gestures can significantly affect the prediction accuracy of both ASD and TD children, positively or negatively depending on the specific expression. The use of gestures by a humanoid robot to convey emotional expressions is therefore relevant to social-skill therapy settings. The methodology and experimental protocol are presented, along with additional discussion of the Zeno R-50 robot used.
Recognizing facial expressions in the wild remains a challenging task in computer vision. The World Wide Web is a rich source of facial images, most of which are captured in uncontrolled conditions; in fact, the Internet is a "word wild web" of facial images with expressions. This paper presents the results of a new study on collecting, annotating, and analyzing wild facial expressions from the web. Three search engines were queried using 1250 emotion-related keywords in six different languages, and the retrieved images were mapped by two annotators to the six basic expressions and neutral. Deep neural networks and noise modeling were used in three different training scenarios to determine how accurately facial expressions can be recognized when models are trained on noisy images collected from the web using query terms (e.g., "happy face", "laughing man"). The results of our experiments show that deep neural networks can recognize wild facial expressions with an accuracy of 82.12%.
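The abstract mentions noise modeling for labels gathered from web queries. One common way to model annotation noise, sketched below under the assumption of a label-transition (confusion) matrix approach, is to compose the classifier's clean-label probabilities with a matrix `T[noisy][true]` estimated, for example, from annotator agreement. The function and matrix names here are illustrative, not taken from the paper.

```python
# Minimal sketch of label-noise modeling via a transition matrix.
# Assumption: the observed (noisy) label distribution is the clean
# distribution passed through a confusion matrix T, where
#   p(noisy = i) = sum_j T[i][j] * p(true = j).

# The six basic expressions plus neutral used in the study.
EXPRESSIONS = ["anger", "disgust", "fear", "happiness",
               "sadness", "surprise", "neutral"]

def noisy_label_probs(clean_probs, transition):
    """Map clean class probabilities through a noise transition matrix."""
    n = len(clean_probs)
    return [sum(transition[i][j] * clean_probs[j] for j in range(n))
            for i in range(n)]

# With an identity transition (no annotation noise), the distribution
# is unchanged.
identity = [[1.0 if i == j else 0.0 for j in range(7)] for i in range(7)]

# A hypothetical confusion: 30% of true "fear" images (index 2) are
# labeled "surprise" (index 5) by annotators.
confused = [row[:] for row in identity]
confused[2][2] = 0.7
confused[5][2] = 0.3
```

In a training pipeline, such a matrix can be appended to the network's softmax output so the loss is computed against the noisy web labels while the underlying classifier learns the clean distribution.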