Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1038
Incremental Joint Extraction of Entity Mentions and Relations

Abstract: We present an incremental joint framework to simultaneously extract entity mentions and relations using a structured perceptron with efficient beam search. A segment-based decoder based on the idea of a semi-Markov chain is adopted in the new framework in place of traditional token-based tagging. In addition, by virtue of the inexact search, we developed a number of new and effective global features as soft constraints to capture the interdependency among entity mentions and relations. Experiments on Automatic C…
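The core machinery the abstract describes — incremental beam-search decoding trained with a structured perceptron — can be sketched as follows. This is a generic, simplified illustration, not the paper's exact algorithm: the paper uses a segment-based semi-Markov decoder with early update, while this sketch decodes token by token with a standard full update, and the `candidates` and `features` functions are hypothetical stand-ins for the paper's label sets and global features.

```python
def beam_search(tokens, weights, candidates, features, beam_size=4):
    """Incrementally extend partial hypotheses one token at a time,
    keeping only the top-scoring `beam_size` hypotheses at each step
    (inexact search: the best full structure may fall off the beam)."""
    beam = [((), 0.0)]  # list of (partial structure, score)
    for i in range(len(tokens)):
        expanded = []
        for hyp, score in beam:
            for label in candidates(tokens, i, hyp):
                new_hyp = hyp + (label,)
                # Score gain from features fired by this extension.
                gain = sum(weights.get(f, 0.0)
                           for f in features(tokens, i, new_hyp))
                expanded.append((new_hyp, score + gain))
        beam = sorted(expanded, key=lambda x: -x[1])[:beam_size]
    return beam[0][0]

def perceptron_update(tokens, gold, weights, candidates, features,
                      beam_size=4):
    """One structured-perceptron step: if the decoded structure differs
    from the gold structure, promote gold features and demote the
    features of the (wrong) prediction."""
    pred = beam_search(tokens, weights, candidates, features, beam_size)
    if pred != gold:
        for i in range(len(tokens)):
            for f in features(tokens, i, gold[:i + 1]):
                weights[f] = weights.get(f, 0.0) + 1.0
            for f in features(tokens, i, pred[:i + 1]):
                weights[f] = weights.get(f, 0.0) - 1.0
    return pred
```

Because the search is inexact, arbitrary non-local ("global") features over the partial structure can be scored in `features` without making decoding intractable — which is what the abstract means by features enabled "by virtue of the inexact search."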

Cited by 472 publications (413 citation statements)
References 15 publications
“…(4) Joint Model (Li and Ji, 2014), a joint structured perceptron approach, incorporating multilevel linguistic features to extract event triggers and arguments at the same time so that local predictions can be mutually improved. (5) Pattern Recognition (Miao and Grishman, 2015), using a pattern expansion technique to extract event triggers.…”
Section: Baseline Methods
confidence: 99%
“…The majority of existing methods regard this problem as a classification task, and use machine learning methods with hand-crafted features, such as lexical features (e.g., full word, POS tag), syntactic features (e.g., dependency features) and external knowledge features (WordNet). There also exist some studies leveraging richer evidence such as cross-document information (Ji et al., 2008), cross-entity information (Hong et al., 2011) and joint inference (Li and Ji, 2014). Despite the effectiveness of feature-based methods, we argue that manually designing feature templates is typically labor intensive.…”
Section: Related Work
confidence: 95%
“…Neelakantan and Collins (2014) looked into the problem of automatically constructing dictionaries with minimal supervision for improved named entity extraction. Li and Ji (2014) presented an approach to perform the task of extraction of mentions and their relations in a joint and incremental manner.…”
Section: Related Work
confidence: 99%
“…Joint models have been explored for grammar-based approaches to surface realisation using HPSG and CCG (Carroll and Oepen, 2005; Velldal and Oepen, 2006; Espinosa et al., 2008; White and Rajkumar, 2009; White, 2006; Carroll et al., 1999). Joint models have been proposed for word segmentation and POS-tagging (Zhang and Clark, 2010), POS-tagging and syntactic chunking (Sutton et al., 2007), segmentation and normalization (Qian et al., 2015), syntactic linearization and morphologization, parsing and NER (Finkel and Manning, 2009), entity and relation extraction (Li and Ji, 2014) and so on. We propose a first joint model for deep realization, integrating linearization, function word prediction and morphological generation.…”
Section: Related Work
confidence: 99%