The multi-hop machine reading comprehension (MRC) task aims to enable models to answer compound questions by exploiting bridging information. Existing methods that use graph neural networks to represent multiple granularities in documents, such as entities and sentences, update all nodes synchronously, ignoring the fact that multi-hop reasoning follows a logical order across granularity levels. In this paper, we introduce an Asynchronous Multi-grained Graph Network (AMGN) for multi-hop MRC. First, we construct a multi-grained graph containing entity and sentence nodes. In particular, we use independent parameters to represent relationship groups defined according to the level of granularity. Second, we propose an asynchronous update mechanism based on multi-grained relationships to mimic the logic of human multi-hop reading. In addition, we present a question reformulation mechanism that updates the latent representation of the compound question with the updated graph nodes. We evaluate the proposed model on the HotpotQA dataset and achieve highly competitive performance in the distractor setting compared with other published models. Further analysis shows that the asynchronous update mechanism can effectively form interpretable reasoning chains at different granularity levels.
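The asynchronous, granularity-ordered update described above can be illustrated with a minimal sketch. This is not the AMGN implementation; the graph, relation groups (`sent2sent`, `sent2ent`, `ent2ent`), node names, and update rule are all hypothetical simplifications chosen only to show the idea of per-relation-group parameters and an ordered (sentence-level first, then entity-level) update schedule:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 8

# Toy multi-grained graph: sentence nodes and entity nodes.
# All node names, shapes, and edges are illustrative, not from the paper.
sent_feats = {s: rng.normal(size=DIM) for s in ["s0", "s1"]}
ent_feats = {e: rng.normal(size=DIM) for e in ["e0", "e1", "e2"]}

# Edges grouped by granularity level; each group gets its own
# independent weight matrix, mirroring the per-group parameters.
edges = {
    "sent2sent": [("s0", "s1")],
    "sent2ent": [("s0", "e0"), ("s1", "e1"), ("s1", "e2")],
    "ent2ent": [("e0", "e1")],
}
weights = {g: rng.normal(scale=0.1, size=(DIM, DIM)) for g in edges}

def update(targets, group, src_feats, dst_feats):
    """One message-passing step restricted to a single relation group."""
    new = {}
    for node in targets:
        msgs = [weights[group] @ src_feats[u]
                for u, v in edges[group] if v == node]
        agg = np.mean(msgs, axis=0) if msgs else np.zeros(DIM)
        new[node] = np.tanh(dst_feats[node] + agg)  # residual-style update
    return new

# Asynchronous schedule: sentence-level reasoning first, then pass
# information down to entities, then entity-level reasoning --
# rather than updating every node of every granularity at once.
sent_feats.update(update(sent_feats, "sent2sent", sent_feats, sent_feats))
ent_feats.update(update(ent_feats, "sent2ent", sent_feats, ent_feats))
ent_feats.update(update(ent_feats, "ent2ent", ent_feats, ent_feats))
```

The key contrast with a synchronous GNN is the fixed ordering of the three update calls: each granularity level consumes the already-updated representations of the previous level, loosely mirroring a reader who first locates relevant sentences and then reasons over the entities they contain.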
Conversational emotion recognition (CER) is a significant task due to its applications in human-computer interaction. Existing work treats CER as an utterance-level classification task without considering that an empathic response also reflects contextual emotion understanding. Previous work has shown that accurately recognizing emotions in the dialogue history helps generate well-fitting responses. In this paper, we investigate whether this conclusion is a sufficient and necessary condition. Specifically, we define an auxiliary empathic multi-turn dialogue generation (MDG) task to enhance emotion understanding. Correspondingly, we present a sequence-to-sequence framework that combines CER and MDG through multitask learning to verify the complementarity of the two tasks. First, we use alternating recurrent neural networks to encode the content of historical utterances and to represent the states of multiparty emotions, which are used for emotion classification. Second, since most MDG methods ignore the emotional coherence of the dialogue context itself, we use an affine transformation to fuse the hidden states of content and emotions to initialize the decoder. Finally, at each generation step, an attention mechanism fuses information from the dialogue history to ensure emotional coherence. The CER results of our models outperform the state of the art on three prevalent emotional dialogue datasets. Further analysis demonstrates the mutual promotion between MDG and CER and the interpretability afforded by empathy. Furthermore, our framework is scalable to different encoding strategies and to multimodal fusion. To the best of our knowledge, this is the first work to explore CER from the perspective of empathy through multitask learning with dialogue generation.