By modeling contextual information, ELMo and BERT have successfully advanced the state of the art in word representation and demonstrated their effectiveness on the named entity recognition (NER) task. In this paper, in addition to such context modeling, we propose to encode prior knowledge about entities from an external knowledge base into the representation, introducing a Knowledge-Graph Augmented Word Representation, or KAWR, for named entity recognition. KAWR provides a knowledge-aware representation of words by 1) encoding entity information from a pre-trained KG embedding model with a new recurrent unit (GERU), and 2) strengthening context modeling from a knowledge perspective with a relation attention scheme based on the entity relations defined in the KG. We demonstrate that KAWR, as an augmented version of existing linguistic word representations, improves F1 scores on 5 datasets in various domains by +0.46 to +2.07. KAWR also generalizes better to new entities that do not appear in the training sets.
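The abstract names two components, a recurrent unit (GERU) that folds pre-trained KG entity embeddings into the word representation and a relation attention scheme over the relations defined in the KG, but does not give their equations. The sketch below is only a hypothetical illustration of that kind of gated fusion and relation-biased attention; the class names, gating form, and scoring function are assumptions, not the paper's actual formulation.

```python
# Illustrative sketch (PyTorch). Hypothetical stand-ins for the GERU and
# relation attention described in the abstract; not the paper's equations.
import torch
import torch.nn as nn


class GatedEntityFusion(nn.Module):
    """Fuse a contextual word vector with a pre-trained KG entity embedding
    through a learned gate (a stand-in for the GERU named in the abstract)."""

    def __init__(self, word_dim: int, ent_dim: int):
        super().__init__()
        self.proj = nn.Linear(ent_dim, word_dim)       # map entity space -> word space
        self.gate = nn.Linear(word_dim * 2, word_dim)  # decide how much KG info to admit

    def forward(self, h_word: torch.Tensor, e_kg: torch.Tensor) -> torch.Tensor:
        e = torch.tanh(self.proj(e_kg))                # projected entity knowledge
        g = torch.sigmoid(self.gate(torch.cat([h_word, e], dim=-1)))
        return g * e + (1.0 - g) * h_word              # knowledge-aware representation


class RelationAttention(nn.Module):
    """Attend over context tokens, with scores biased by KG relation embeddings
    between the current entity and entities in the context (hypothetical form)."""

    def __init__(self, dim: int, rel_dim: int):
        super().__init__()
        self.score = nn.Linear(dim * 2 + rel_dim, 1)

    def forward(self, h_t: torch.Tensor, h_ctx: torch.Tensor, rel_emb: torch.Tensor) -> torch.Tensor:
        # h_t: (dim,), h_ctx: (T, dim), rel_emb: (T, rel_dim)
        q = h_t.unsqueeze(0).expand(h_ctx.size(0), -1)
        scores = self.score(torch.cat([q, h_ctx, rel_emb], dim=-1)).squeeze(-1)
        alpha = torch.softmax(scores, dim=0)
        return (alpha.unsqueeze(-1) * h_ctx).sum(dim=0)  # relation-aware context vector
```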
Over the past three decades, there has been sustained research activity in emotion recognition from faces, powered by the popularity of smart devices and improvements in machine learning, resulting in recognition systems with high accuracy. While research has commonly focused on single images, recent work has also made use of dynamic video data. This paper presents CNN-RNN (Convolutional Neural Network - Recurrent Neural Network) based emotion recognition using videos from the ADFES database, and we present the results in the arousal-valence space rather than assigning a discrete emotion. In addition to traditional performance metrics, we design a new metric, PN accuracy, to distinguish between positive and negative emotions. We demonstrate improved performance with a smaller RNN than the initial pre-trained model, and report a peak accuracy of 0.58 with a peak PN accuracy of 0.76, which shows our approach is highly capable of distinguishing between positive and negative emotions. We also present a detailed analysis of system performance, using new valence-arousal domain temporal visualisations to show transitions in recognition over time, demonstrating the importance of context-based information in emotion recognition.
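The abstract introduces PN accuracy as a measure of how well the system separates positive from negative emotions in the arousal-valence space, but does not give a formal definition. A minimal sketch of one plausible reading, agreement on the sign of the predicted valence, is shown below; the function name and the sign-based formulation are assumptions.

```python
# Illustrative sketch (Python/NumPy). PN accuracy here is read as the fraction of
# frames (or clips) whose predicted valence lies on the same side of zero as the
# ground-truth valence; this is an assumed formulation, not the paper's definition.
import numpy as np


def pn_accuracy(pred_valence: np.ndarray, true_valence: np.ndarray) -> float:
    """Positive/negative agreement on the valence axis (hypothetical formulation)."""
    return float(np.mean(np.sign(pred_valence) == np.sign(true_valence)))


# Example: 3 of 4 predictions land on the correct side of the valence axis -> 0.75
print(pn_accuracy(np.array([0.4, -0.2, 0.1, -0.5]),
                  np.array([0.3, -0.6, -0.2, -0.1])))
```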