Proceedings of the 55th Annual Meeting of the Association For Computational Linguistics (Volume 1: Long Papers) 2017
DOI: 10.18653/v1/p17-1004
Neural Relation Extraction with Multi-lingual Attention

Abstract: Relation extraction has been widely used for finding unknown relational facts in plain text. Most existing methods focus on exploiting mono-lingual data for relation extraction, ignoring the massive information available in texts in other languages. To address this issue, we introduce a multi-lingual neural relation extraction framework, which employs mono-lingual attention to utilize the information within mono-lingual texts and further proposes cross-lingual attention to consider the information consistency a…

Cited by 89 publications (52 citation statements)
References 14 publications
“…man (2015) proposed Convolutional Networks with multi-sized window kernels. Zeng et al. (2015) proposed Piecewise Convolutional Neural Networks (PCNN). Lin et al. (2016, 2017) improved this approach by proposing PCNN with sentence-level attention.…”
mentioning
confidence: 99%
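The sentence-level attention this statement refers to weights each sentence in a bag of entity-pair mentions by its relevance to a relation query, then aggregates the bag into a single representation. A minimal NumPy sketch, assuming the sentence encodings (e.g. PCNN outputs) are already computed; the function name and toy vectors are illustrative, not from the paper:

```python
import numpy as np

def sentence_attention(sent_vecs, query):
    """Weight a bag of sentence encodings by relevance to a relation query.

    sent_vecs: (n_sentences, dim) array of sentence encodings
    query:     (dim,) relation query vector
    Returns the attention-weighted bag representation, shape (dim,).
    """
    scores = sent_vecs @ query                  # one relevance score per sentence
    weights = np.exp(scores - scores.max())     # numerically stable softmax
    weights /= weights.sum()
    return weights @ sent_vecs                  # weighted sum of sentence vectors

# Toy bag of three sentence encodings for one entity pair.
bag = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
rel = np.array([1.0, 0.0])
rep = sentence_attention(bag, rel)
```

Sentences that score higher against the relation query dominate the aggregate, which is how this mechanism down-weights noisy mentions.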
“…Unlike the existing approaches, our approach does not require aligned parallel corpora or machine translation systems. There are also several multilingual RE approaches, e.g., Verga et al. (2016); Min et al. (2017); Lin et al. (2017), where the focus is to improve monolingual RE by jointly modeling texts in multiple languages.…”
Section: Related Work mentioning
confidence: 99%
“…This either limits the scalability of vocabulary size or relies on a strong distribution assumption. Inspired by Vulic and Moens (2016), we generate highly qualified comparable sentences via distant supervision, which is one of the most promising approaches to addressing the issue of sparse training data, and performs well in relation extraction (Lin et al., 2017a; Mintz et al., 2009; Zeng et al., 2015; Hoffmann et al., 2011; Surdeanu et al., 2012). Our comparable sentences may further benefit many other cross-lingual analyses, such as information retrieval.…”
Section: Related Work mentioning
confidence: 99%
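The distant supervision cited in this statement heuristically labels any sentence mentioning both entities of a knowledge-base fact with that fact's relation. A minimal sketch under that assumption; the toy KB, corpus, and function name are illustrative, not from any real dataset:

```python
# Toy knowledge base: (head, tail) entity pair -> relation.
kb = {("Paris", "France"): "capital_of"}

corpus = [
    "Paris is the capital of France.",
    "Paris and France signed the treaty.",   # noisy: mentions both, wrong relation
    "Berlin is the capital of Germany.",
]

def distant_label(sentences, kb_facts):
    """Label every sentence containing both entities of a KB fact with that
    fact's relation. These heuristic labels are noisy by construction."""
    labeled = []
    for sent in sentences:
        for (head, tail), rel in kb_facts.items():
            if head in sent and tail in sent:
                labeled.append((sent, head, tail, rel))
    return labeled

pairs = distant_label(corpus, kb)
```

The second sentence shows the wrong-label noise that attention mechanisms such as Lin et al. (2016)'s are designed to suppress.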