Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 2014
DOI: 10.3115/v1/s14-2044

ECNU: One Stone Two Birds: Ensemble of Heterogenous Measures for Semantic Relatedness and Textual Entailment

Abstract: This paper presents our approach to the semantic relatedness and textual entailment subtasks organized as Task 1 in SemEval 2014. Specifically, we address two questions: (1) Can we solve these two subtasks together? (2) Are features proposed for the textual entailment task still effective for the semantic relatedness task? To address them, we extracted seven types of features, including text difference measures proposed for the entailment judgement subtask as well as common text similarity measures used in both subtasks. Then …
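
The "one stone, two birds" idea in the abstract is that a single pair-level feature vector can serve both subtasks: it feeds a regressor for relatedness scores and a classifier for entailment labels. Below is a minimal sketch of that setup, assuming a few generic similarity and difference measures and off-the-shelf scikit-learn learners; the feature function, toy sentence pairs, and model choices are illustrative placeholders, not the seven feature types or learners actually used in the ECNU system.

```python
# Illustrative sketch only (not the authors' exact pipeline): the same
# hand-crafted features for a sentence pair train both a relatedness
# regressor and an entailment classifier.
import math
from collections import Counter

from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor


def pair_features(s1, s2):
    """A few generic similarity/difference measures for a sentence pair."""
    t1, t2 = s1.lower().split(), s2.lower().split()
    set1, set2 = set(t1), set(t2)
    jaccard = len(set1 & set2) / len(set1 | set2) if set1 | set2 else 0.0
    # Cosine similarity over term-frequency vectors.
    c1, c2 = Counter(t1), Counter(t2)
    dot = sum(c1[w] * c2[w] for w in set1 & set2)
    norm = math.sqrt(sum(v * v for v in c1.values())) * \
        math.sqrt(sum(v * v for v in c2.values()))
    cosine = dot / norm if norm else 0.0
    # Simple "text difference" measures: words occurring in only one sentence.
    only_in_1 = len(set1 - set2)
    only_in_2 = len(set2 - set1)
    len_diff = abs(len(t1) - len(t2))
    return [jaccard, cosine, only_in_1, only_in_2, len_diff]


# Toy training pairs with a gold relatedness score (1-5) and entailment label.
pairs = [
    ("A man is playing a guitar", "A man plays a guitar", 4.8, "ENTAILMENT"),
    ("A man is playing a guitar", "A woman is cooking dinner", 1.2, "NEUTRAL"),
    ("Two dogs are running", "Two dogs are sleeping", 2.5, "CONTRADICTION"),
]
X = [pair_features(a, b) for a, b, _, _ in pairs]

# "One stone, two birds": the same feature matrix trains both subtask models.
relatedness_model = RandomForestRegressor(n_estimators=50, random_state=0)
relatedness_model.fit(X, [score for _, _, score, _ in pairs])
entailment_model = RandomForestClassifier(n_estimators=50, random_state=0)
entailment_model.fit(X, [label for _, _, _, label in pairs])

test = pair_features("A man is strumming a guitar", "A man plays a guitar")
print(relatedness_model.predict([test]), entailment_model.predict([test]))
```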

Cited by 81 publications (61 citation statements)
References 13 publications
“…RTE systems vary considerably in their choice of representation and inference procedure. In the most recent shared task on RTE, some systems used deep logical representations of text, allowing them to invoke theorem provers (Bjerva et al., 2014) or Markov Logic Networks (Beltagy et al., 2014) to perform the inference, while others used shallower representations, relying on machine learning to perform inference (Lai and Hockenmaier, 2014; Zhao et al., 2014). Systems based on natural logic (MacCartney and Manning, 2007) use natural language as a representation, but still perform inference using a structured algebra rather than a statistical model.…”
Section: Recognizing Textual Entailment (mentioning)
confidence: 99%
“…In the table, systems in bold are those for which the authors submitted a paper (Ferrone and Zanzotto, 2014; Bjerva et al., 2014; Beltagy et al., 2014; Lai and Hockenmaier, 2014; Alves et al., 2014; León et al., 2014; Bestgen, 2014; Zhao et al., 2014; Vo et al., 2014; Biçici and Way, 2014; Lien and Kouylekov, 2014; Jimenez et al., 2014; Proisl and Evert, 2014; Gupta et al., 2014). For the others, we used the brief description sent with the system's results, double-checking the information with the authors.…”
Section: Approaches (mentioning)
confidence: 99%
“…Model r ρ MSE
Meaning Factory (Jiménez et al., 2014) 0.8268 0.7721 0.3224
ECNU (Zhao et al., 2014) 0.8414 --
BiLSTM (Tai et al., 2015) 0.8567 0.7966 0.2736
Tree-LSTM (Tai et al., 2015) 0.8676 0.8083 0.2532
MPCNN (He et al., 2015) 0
Wang and Ittycheriah (2015) 0.746 0.820
QA-LSTM (Tan et al., 2015) 0.728 0.832
Att-pooling (dos Santos et al., 2016) 0.753 0.851
LDC (Wang et al., 2016b) 0.771 0.845
MPCNN (He et al., 2015) 0.777 0.836
PWIM 0.738 0.827
NCE-CNN (Rao et al., 2016) 0.801 0.877
BiMPM (Wang et al., 2017) 0.802 0.875
IWAN-att (Proposed) 0.822 0.889
IWAN-skip (Proposed) 0.801 0.861
Table 3: Test results on Clean version TrecQA.…”
Section: Training Details (mentioning)
confidence: 99%