Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1279

Matching the Blanks: Distributional Similarity for Relation Learning

Abstract: General purpose relation extractors, which can model arbitrary relations, are a core aspiration in information extraction. Efforts have been made to build general purpose extractors that represent relations with their surface forms, or which jointly embed surface forms with relations from an existing knowledge graph. However, both of these approaches are limited in their ability to generalize. In this paper, we build on extensions of Harris' distributional hypothesis to relations, as well as recent advances in…

Cited by 349 publications (129 citation statements)
References 17 publications
“…BERT: BERT is a pre-trained language model that has been widely used in NLP tasks. We use BERT for RE following Baldini Soares et al. (2019). In short, we highlight entity mentions in sentences with special markers and use the concatenation of the entity representations for classification.…”
Section: Models and Dataset (mentioning)
confidence: 99%
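The encoding strategy quoted above can be illustrated with a minimal sketch, assuming a PyTorch/HuggingFace setup; the marker tokens ([E1], [/E1], [E2], [/E2]), the checkpoint, the relation count, and all variable names are illustrative assumptions, not the cited authors' code:

```python
# Minimal sketch of marker-based relation classification in the spirit of
# Baldini Soares et al. (2019): mark the two entity mentions with special
# tokens, take the hidden states of the start markers, concatenate them,
# and classify. All names and hyper-parameters are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
# Special marker tokens placed around the two entity mentions.
tokenizer.add_special_tokens(
    {"additional_special_tokens": ["[E1]", "[/E1]", "[E2]", "[/E2]"]}
)

class MarkerRelationClassifier(nn.Module):
    def __init__(self, num_relations: int):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bert.resize_token_embeddings(len(tokenizer))
        # Concatenation of the two entity-start representations -> relation logits.
        self.classifier = nn.Linear(2 * self.bert.config.hidden_size, num_relations)

    def forward(self, input_ids, attention_mask, e1_start_idx, e2_start_idx):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(input_ids.size(0))
        e1 = hidden[batch, e1_start_idx]  # hidden state of the [E1] token
        e2 = hidden[batch, e2_start_idx]  # hidden state of the [E2] token
        return self.classifier(torch.cat([e1, e2], dim=-1))

# Example input with marked mentions; the relation count (42) is illustrative.
text = "[E1] Alexander Pushkin [/E1] was born in [E2] Moscow [/E2] ."
enc = tokenizer(text, return_tensors="pt")
e1_idx = (enc.input_ids[0] == tokenizer.convert_tokens_to_ids("[E1]")).nonzero()[0]
e2_idx = (enc.input_ids[0] == tokenizer.convert_tokens_to_ids("[E2]")).nonzero()[0]
model = MarkerRelationClassifier(num_relations=42)
logits = model(enc.input_ids, enc.attention_mask, e1_idx, e2_idx)
```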
“…The following two formats are adopted by previous literature and are close to real-world RE scenarios. In the first, the sentence (with both context and highlighted entity mentions) is provided. To let the models know where the entity mentions are, we use position embeddings (Zeng et al., 2014) for the CNN model and special entity markers (Zhang et al., 2019; Baldini Soares et al., 2019) for the pre-trained BERT. Context+Type (C+T): We replace entity mentions with their types provided in TACRED.…”
Section: Experimental Settings (mentioning)
confidence: 99%
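A hypothetical preprocessing sketch of the two input formats contrasted in this quote, assuming entity spans are given as inclusive token offsets; the marker strings and type placeholders are assumptions, not the exact tokens used in the cited TACRED experiments:

```python
# Sketch of the two input formats described above; all special-token strings
# are illustrative assumptions.
def context_plus_mention(tokens, subj_span, obj_span):
    """Keep the context and highlight the two mentions with special markers."""
    out = []
    for i, tok in enumerate(tokens):
        if i == subj_span[0]:
            out.append("[E1]")
        if i == obj_span[0]:
            out.append("[E2]")
        out.append(tok)
        if i == subj_span[1]:
            out.append("[/E1]")
        if i == obj_span[1]:
            out.append("[/E2]")
    return out

def context_plus_type(tokens, subj_span, obj_span, subj_type, obj_type):
    """Replace each mention with its entity type (the C+T format)."""
    out, i = [], 0
    while i < len(tokens):
        if i == subj_span[0]:
            out.append(f"[SUBJ-{subj_type}]")
            i = subj_span[1] + 1
        elif i == obj_span[0]:
            out.append(f"[OBJ-{obj_type}]")
            i = obj_span[1] + 1
        else:
            out.append(tokens[i])
            i += 1
    return out

tokens = ["Bill", "Gates", "founded", "Microsoft", "."]
print(context_plus_mention(tokens, (0, 1), (3, 3)))
# ['[E1]', 'Bill', 'Gates', '[/E1]', 'founded', '[E2]', 'Microsoft', '[/E2]', '.']
print(context_plus_type(tokens, (0, 1), (3, 3), "PERSON", "ORGANIZATION"))
# ['[SUBJ-PERSON]', 'founded', '[OBJ-ORGANIZATION]', '.']
```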
“…A variety of slot-filling approaches have been built on top of these deep learning advancements (Kurata et al., 2016; Qin et al., 2019). The proposed baseline for our task (Zong et al., 2020) modifies the BERT model for the slot-filling problem, inspired by Baldini Soares et al. (2019). Due to the excellent performance offered by BERT (Devlin et al., 2019) and Baldini Soares et al. (2019), we build upon this baseline approach.…”
Section: Related Work (mentioning)
confidence: 99%
“…It takes a tweet t as input and encloses the candidate slot s, within the tweet, inside special entity start <E> and end </E> markers. The BERT hidden representation of the <E> token is then processed through a fully connected layer with softmax activation to make the binary prediction for a task (Baldini Soares et al., 2019).…”
[Figure 2: Slot-Filling Model, described in Section §4.1.]
Section: Slot-filling (mentioning)
confidence: 99%
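A minimal sketch of the marker-based slot classifier this quote describes: the candidate slot is wrapped in <E> ... </E> and the hidden state of the <E> token feeds a fully connected layer with softmax for the binary decision. The checkpoint and all names here are assumptions, not the cited system's actual configuration:

```python
# Sketch of the slot-filling classifier described above; checkpoint, marker
# strings, and variable names are illustrative assumptions.
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
tokenizer.add_special_tokens({"additional_special_tokens": ["<E>", "</E>"]})

class SlotFillingClassifier(nn.Module):
    def __init__(self):
        super().__init__()
        self.bert = BertModel.from_pretrained("bert-base-uncased")
        self.bert.resize_token_embeddings(len(tokenizer))
        self.fc = nn.Linear(self.bert.config.hidden_size, 2)  # binary prediction

    def forward(self, input_ids, attention_mask, marker_idx):
        hidden = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        batch = torch.arange(input_ids.size(0))
        start_repr = hidden[batch, marker_idx]  # representation of the <E> token
        return torch.softmax(self.fc(start_repr), dim=-1)

# Tweet with the candidate slot enclosed in the markers.
tweet = "Just tested positive for covid in <E> Seattle </E> today"
enc = tokenizer(tweet, return_tensors="pt")
marker_idx = (enc.input_ids[0] == tokenizer.convert_tokens_to_ids("<E>")).nonzero()[0]
model = SlotFillingClassifier()
probs = model(enc.input_ids, enc.attention_mask, marker_idx)
```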