Event extraction is one of the most challenging tasks in information extraction. Multiple events frequently co-occur in the same sentence, yet extracting multiple events is more difficult than extracting a single one. Existing sequence-model-based event extraction methods lose the correlations between events when the sequence grows long. In addition, current argument extraction relies on syntactic dependency parsing, which is complex and prone to error propagation. To address these problems, we propose a joint event extraction method based on global event-type guidance and attention enhancement. Specifically, for multiple-event detection, we propose a global-type guidance method that detects the event types present in a candidate sequence in advance, strengthening the correlation information between events. For argument extraction, we recast the task as a table-filling problem and propose an attention-based table-filling method that is simple and strengthens the correlation between trigger words and arguments. Experiments on the ACE 2005 dataset show that the proposed method achieves a 1.6% improvement on event detection and state-of-the-art results on argument extraction, demonstrating its effectiveness.
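The table-filling idea above can be sketched in a few lines: every (trigger position, candidate token) pair gets a cell in a score table, and an attention mechanism fills the cells. This is a minimal illustration under stated assumptions, not the paper's implementation; the function and projection names (`fill_argument_table`, `W_q`, `W_k`) are hypothetical, and scaled dot-product attention stands in for whatever scoring the authors actually use.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fill_argument_table(H, W_q, W_k):
    """Fill an (n, n) table scoring every (trigger, token) pair.

    H: (n, d) token representations. Cell (i, j) of the returned
    table is the attention weight of token j as a candidate argument
    of a trigger at position i, so each row sums to 1.
    """
    Q = H @ W_q                       # trigger-side projection
    K = H @ W_k                       # argument-side projection
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)     # pairwise compatibility scores
    return softmax(scores, axis=-1)

# Toy usage with random "encoder outputs" for a 5-token sentence.
rng = np.random.default_rng(0)
n, d = 5, 8
H = rng.normal(size=(n, d))
table = fill_argument_table(H, rng.normal(size=(d, d)), rng.normal(size=(d, d)))
```

In a real model the rows would be restricted to detected trigger positions and each cell classified into an argument role rather than a single weight; the sketch only shows the table-shaped formulation.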
Medical Dialogue Information Extraction (MDIE) is a promising task for modern medical care systems and greatly facilitates real-world applications such as electronic medical record generation and automatic disease diagnosis. Recent methods have achieved considerable performance on Chinese MDIE but still suffer from inherent limitations, such as poor exploitation of the inter-dependencies among multiple utterances and weak discrimination of hard samples. In this paper, we propose a contrastive multi-utterance inference (CMUI) method to address these issues. Specifically, we first use a type-aware encoder that provides an efficient encoding mechanism for different categories. We then introduce a selective attention mechanism that explicitly captures dependencies among utterances, enabling multi-utterance inference. Finally, we integrate supervised contrastive learning into our framework to improve recognition of hard samples. Extensive experiments show that our model achieves state-of-the-art performance on a public Chinese benchmark dataset and delivers significant gains over baselines: it outperforms the previous state of the art by 2.27% in F1-score, 0.55% in Recall, and 3.61% in Precision. (The code supporting the findings of this study is openly available at https://github.com/jc4357/CMUI.)
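The supervised contrastive component can be sketched as the standard supervised contrastive loss: embeddings with the same label are pulled together and others pushed apart, which sharpens the boundary around hard samples. This is a generic sketch of that loss, not CMUI's exact objective; the function name `sup_con_loss` and the temperature value are illustrative assumptions.

```python
import numpy as np

def sup_con_loss(Z, labels, tau=0.1):
    """Supervised contrastive loss over a batch of embeddings.

    Z: (n, d) embeddings; labels: length-n class labels. For each
    anchor, same-label samples are positives and all other samples
    are negatives; the loss is the mean negative log-probability of
    the positives under a temperature-scaled softmax.
    """
    Z = Z / np.linalg.norm(Z, axis=1, keepdims=True)   # l2-normalise
    sim = Z @ Z.T / tau                                # cosine similarities
    n = len(labels)
    self_mask = np.eye(n, dtype=bool)
    exp_sim = np.exp(sim) * ~self_mask                 # drop self-similarity
    log_prob = sim - np.log(exp_sim.sum(axis=1, keepdims=True))
    labels = np.asarray(labels)
    pos = (labels[:, None] == labels[None, :]) & ~self_mask
    per_anchor = -(log_prob * pos).sum(axis=1) / np.maximum(pos.sum(axis=1), 1)
    return per_anchor.mean()

# Toy usage: six embeddings, three classes of two samples each.
rng = np.random.default_rng(1)
Z = rng.normal(size=(6, 4))
loss = sup_con_loss(Z, labels=[0, 0, 1, 1, 2, 2])
```

In training this loss would be added to the extraction objective, so that hard samples of different categories are separated in the embedding space before classification.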