Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers) 2014
DOI: 10.3115/v1/p14-1079

Distant Supervision for Relation Extraction with Matrix Completion

Abstract: The essence of distantly supervised relation extraction is that it is an incomplete multi-label classification problem with sparse and noisy features. To tackle the sparsity and noise challenges, we propose solving the classification problem using matrix completion on factorized matrix of minimized rank. We formulate relation classification as completing the unknown labels of testing items (entity pairs) in a sparse matrix that concatenates training and testing textual features with training labels. Our algori…
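The formulation in the abstract can be illustrated with a small sketch: stack the entity-pair feature matrix and the label matrix side by side, mask out the test labels, and recover them with a low-rank completion routine. The soft-impute-style solver, dimensions, and hyperparameters below are illustrative assumptions for the general idea, not the paper's specific algorithms.

```python
import numpy as np

def soft_impute(M, observed_mask, lam=1.0, n_iters=100):
    """Generic low-rank completion via iterative SVD soft-thresholding.

    M: matrix with observed entries filled in and unobserved entries set to 0.
    observed_mask: boolean array, True where an entry is observed.
    lam: soft-threshold applied to singular values (controls the effective rank).
    """
    Z = np.zeros_like(M)
    for _ in range(n_iters):
        # Keep observed entries from M, fill unobserved ones from the current estimate.
        filled = np.where(observed_mask, M, Z)
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)   # shrink singular values -> low-rank estimate
        Z = (U * s) @ Vt
    return Z

# Illustrative layout: rows are entity pairs, columns are [textual features | relation labels].
# Training rows have observed labels; testing rows have unknown labels (masked out).
n_train, n_test, n_feat, n_labels = 80, 20, 50, 5
rng = np.random.default_rng(0)
X = (rng.random((n_train + n_test, n_feat)) < 0.05).astype(float)    # sparse features
Y = np.zeros((n_train + n_test, n_labels))
Y[:n_train] = (rng.random((n_train, n_labels)) < 0.2).astype(float)  # known training labels

M = np.hstack([X, Y])
mask = np.ones_like(M, dtype=bool)
mask[n_train:, n_feat:] = False           # the test labels are the entries to complete

completed = soft_impute(M, mask, lam=2.0, n_iters=50)
pred_test_labels = completed[n_train:, n_feat:]   # recovered scores for the unknown labels
```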

Cited by 47 publications (22 citation statements)
References 14 publications (17 reference statements)
“…DeepDive's model of KBC is motivated by recent attempts to use machine learning-based techniques for KBC [3,4,24,38,46,52,56] and the line of research that aims to improve the quality of a specific component of a KBC system [7,12,15,21,26,27,31–33,35,39,42,47,48,51,53,54]. When designing DeepDive, we used these systems as test cases to justify the generality of our framework.…”
Section: Related Work
confidence: 99%
“…When designing DeepDive, we used these systems as test cases to justify the generality of our framework. In fact, we find that DeepDive is able to model thirteen of these popular KBC systems [15,23,26,27,31–33,35,39,42,51,53,54].…”
Section: Related Work
confidence: 99%
“…References [14,15] are extensions of [13] with additional layers or added penalty factors. Recent work also includes embedding models that cast the relation extraction problem as a translation model of the form h + r ≈ t [20-22], and the probabilistic matrix factorization (PMF) models of [11,23], in which training and testing are carried out jointly. In addition, Fan et al. [24] presented a novel framework integrating active learning and weakly supervised learning.…”
Section: Distant Supervision for Relation Extraction
confidence: 99%
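As a concrete illustration of the h + r ≈ t translation assumption quoted above, here is a minimal TransE-style scoring sketch. The entities, relations, and random vectors are made up for illustration; real models learn the embeddings, typically with a margin-based ranking loss over corrupted triples.

```python
import numpy as np

# Minimal translation-style scoring sketch (h + r ≈ t). All names and
# dimensions below are illustrative assumptions, not any cited system's setup.
rng = np.random.default_rng(0)
dim = 50
entity_emb = {e: rng.normal(size=dim) for e in ["Obama", "USA", "Hawaii"]}
relation_emb = {r: rng.normal(size=dim) for r in ["president_of", "born_in"]}

def score(h, r, t):
    """Lower is better: distance between the translated head (h + r) and the tail t."""
    return np.linalg.norm(entity_emb[h] + relation_emb[r] - entity_emb[t])

# During training, observed triples are pushed to score lower than corrupted
# triples in which h or t is replaced by a random entity.
print(score("Obama", "president_of", "USA"))
```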
“…Ritter et al. [15] added two penalty factors to model missing texts and missing KB facts, and they also considered additional information such as the popularity of entities. Angeli et al. [23] added a bias factor b to their PMF algorithm to model the noise. Xiang et al. [30] computed two types of biases to model the correctness and incorrectness of each group-level label.…”
Section: Noise Reduction for Distant Supervision
confidence: 99%
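To make the bias idea concrete, the sketch below adds a per-relation bias term to a factorization-style score, in the spirit of the bias factor b described in the excerpt above. The variable names, dimensions, and sigmoid link are illustrative assumptions rather than any cited model's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Illustrative PMF-style score for (entity-pair i, relation j) with an added bias
# that can absorb systematic label noise; sizes and scales are arbitrary.
rng = np.random.default_rng(0)
k = 10
U = rng.normal(scale=0.1, size=(100, k))   # entity-pair latent factors
V = rng.normal(scale=0.1, size=(20, k))    # relation latent factors
b = rng.normal(scale=0.1, size=20)         # per-relation bias term

def prob(i, j):
    """Probability that entity pair i expresses relation j."""
    return sigmoid(U[i] @ V[j] + b[j])

print(prob(3, 7))
```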
“…Their system shows good accuracy when extracting 52 relation types, which suggests the applicability of distant supervision to general RE. In (Fan et al., 2014), distantly supervised relation extraction was solved as a matrix completion problem. (Yao et al., 2012) uses an unsupervised approach to handle the problem of polysemy, where the same pattern can have several meanings.…”
Section: Related Work
confidence: 99%