Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), 2019
DOI: 10.18653/v1/d19-1296
Weakly Supervised Multilingual Causality Extraction from Wikipedia

Abstract: We present a method for extracting causality knowledge from Wikipedia, such as Protectionism → Trade war, where the cause and effect entities correspond to Wikipedia articles. Such causality knowledge is easy to verify by reading corresponding Wikipedia articles, to translate to multiple languages through Wikidata, and to connect to knowledge bases derived from Wikipedia. Our method exploits Wikipedia article sections that describe causality and the redundancy stemming from the multilinguality of Wikipedia. Ex…
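As a rough illustration of the section-based idea described in the abstract (not the authors' actual pipeline), one could scan an article's wikitext for a causality-describing section such as "Causes" and treat the linked articles there as candidate cause entities. A minimal sketch, assuming a hypothetical wikitext snippet and English section headings:

```python
import re

def extract_section_links(wikitext: str, section: str = "Causes") -> list[str]:
    """Return [[wiki link]] targets appearing inside the named section.

    Hypothetical sketch only: a real pipeline would use a proper wikitext
    parser and the multilingual redundancy the abstract describes.
    """
    # Grab text from "== Causes ==" up to the next top-level heading (or EOF).
    pattern = rf"==\s*{re.escape(section)}\s*==(.*?)(?:\n==[^=]|\Z)"
    m = re.search(pattern, wikitext, flags=re.S)
    if not m:
        return []
    # [[Target|label]] or [[Target]] -> Target
    return re.findall(r"\[\[([^\]|#]+)(?:\|[^\]]*)?\]\]", m.group(1))

sample = """
== Background ==
Trade tensions rose.
== Causes ==
Rising [[Protectionism]] and new [[Tariff]]s preceded the conflict.
== Effects ==
[[Trade war]] escalated.
"""
print(extract_section_links(sample))  # -> ['Protectionism', 'Tariff']
```

Each extracted link target could then be mapped through Wikidata to its counterparts in other language editions, which is where the paper's cross-lingual redundancy would come in.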

Cited by 21 publications (13 citation statements)
References 49 publications
“…In the first research line, early methods usually design various features tailored for causal expressions, such as lexical and syntactic patterns (Riaz and Girju, 2013, 2014a,b), causality cues or markers (Riaz and Girju, 2010; Do et al., 2011; Hidey and McKeown, 2016), statistical information (Beamer and Girju, 2009; Hashimoto et al., 2014), and temporal patterns (Riaz and Girju, 2014a; Ning et al., 2018). Then, researchers resort to a large amount of labeled data to mitigate the efforts of feature engineering and to learn diverse causal expressions (Hu et al., 2017; Hashimoto, 2019). To alleviate the annotation cost, recent methods leverage Pre-trained Language Models (PLMs, e.g., BERT (Devlin et al., 2019)) for the ECI task and have achieved SOTA performance (Kadowaki et al., 2019; Zuo et al., 2020).…”
Section: Related Work
confidence: 99%
“…As shown in Figure 1, given the text "... the outage 2 was caused by a terrestrial break in the fiber in Egypt ...", an ECI model should predict if there is a causal relation between two events triggered by "outage 2" and "break". Causality can reveal reliable structures of texts, which is beneficial to widespread applications, such as machine reading comprehension (Berant et al., 2014), question answering (Oh et al., 2016), and future event forecasting (Hashimoto, 2019).…”
Section: Introduction
confidence: 99%
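The cue-marker approaches mentioned in these citing works can be caricatured with a toy pattern matcher: match an explicit causal cue such as "was caused by" and read off the cause and effect spans. The patterns below are hypothetical illustrations, not any cited paper's actual feature set:

```python
import re

# Hypothetical cue patterns; real feature-based ECI systems combine far
# richer lexical, syntactic, and statistical signals than a single regex.
CAUSAL_CUES = [
    # "EFFECT was caused by CAUSE"
    re.compile(r"(?P<effect>\w[\w\s]*?)\s+was caused by\s+(?P<cause>\w[\w\s]*)", re.I),
    # "CAUSE led to EFFECT"
    re.compile(r"(?P<cause>\w[\w\s]*?)\s+led to\s+(?P<effect>\w[\w\s]*)", re.I),
]

def find_causal_pairs(sentence: str) -> list[tuple[str, str]]:
    """Return (cause, effect) span pairs for sentences with an explicit cue."""
    pairs = []
    for pattern in CAUSAL_CUES:
        for m in pattern.finditer(sentence):
            pairs.append((m.group("cause").strip(), m.group("effect").strip()))
    return pairs

print(find_causal_pairs("the outage was caused by a terrestrial break in the fiber"))
# -> [('a terrestrial break in the fiber', 'the outage')]
```

Such surface patterns only fire on explicit cues, which is precisely the coverage gap the PLM-based ECI methods cited above aim to close.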
“…Event causality identification (ECI) is a very important task in the natural language processing area, which has attracted extensive attention in the past few years. Early studies for the task are feature-based methods which utilize lexical and syntactic features (Riaz and Girju, 2013; Gao et al., 2019), explicit causal patterns (Beamer and Girju, 2009; Do et al., 2011), and statistical causal associations (Riaz and Girju, 2014; Hashimoto et al., 2014; Hashimoto, 2019) for the task. With the development of deep learning, neural network-based methods have been proposed for the task and achieved the state-of-the-art performance (Kruengkrai et al., 2017; Kadowaki et al., 2019; Zuo et al., 2020).…”
Section: Related Work
confidence: 99%
“…There is a lot of work in temporal (D'Souza and Ng, 2013; Chambers et al., 2014; Ning et al., 2018b; Meng and Rumshisky, 2018; Han et al., 2019; Vashishtha et al., 2020; Wright-Bettner et al., 2020) and causal (Bethard and Martin, 2008; Do et al., 2011; Riaz and Girju, 2013; Roemmele and Gordon, 2018; Hashimoto, 2019) relation extraction. Mirza and Tonelli (2016) and Ning et al. (2018a) extract both in a single framework.…”
Section: Extracting Causal and Temporal Relations
confidence: 99%