Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), 2014
DOI: 10.3115/v1/s14-2001

SemEval-2014 Task 1: Evaluation of Compositional Distributional Semantic Models on Full Sentences through Semantic Relatedness and Textual Entailment

Abstract: This paper presents the task on the evaluation of Compositional Distributional Semantics Models on full sentences organized for the first time within SemEval-2014. Participation was open to systems based on any approach. Systems were presented with pairs of sentences and were evaluated on their ability to predict human judgments on (i) semantic relatedness and (ii) entailment. The task attracted 21 teams, most of which participated in both subtasks. We received 17 submissions in the relatedness subtask (for a …
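For readers unfamiliar with the two subtasks, the sketch below illustrates how submissions are typically scored: Pearson correlation against gold relatedness scores (on a 1-5 scale) and accuracy against three-way entailment labels. It assumes a SICK-style tab-separated file; the column names and file path are assumptions for illustration, not part of the paper.

```python
# Minimal sketch of the SemEval-2014 Task 1 scoring scheme:
# Pearson r for the relatedness subtask, accuracy for entailment.
# The SICK-style TSV columns used below (relatedness_score,
# entailment_judgment) are an assumption, not the official format.
import csv
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy) if sx and sy else 0.0

def score(gold_path, pred_relatedness, pred_labels):
    """Compare system outputs against gold annotations (hypothetical I/O)."""
    gold_scores, gold_labels = [], []
    with open(gold_path, newline="") as f:
        for row in csv.DictReader(f, delimiter="\t"):
            gold_scores.append(float(row["relatedness_score"]))  # 1-5 scale
            gold_labels.append(row["entailment_judgment"])  # ENTAILMENT / CONTRADICTION / NEUTRAL
    r = pearson(gold_scores, pred_relatedness)
    acc = sum(g == p for g, p in zip(gold_labels, pred_labels)) / len(gold_labels)
    return r, acc
```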

Cited by 374 publications (365 citation statements) | References 19 publications
“…In data collection for NLI, different annotator decisions about the coreference between entities and events across the two sentences in a pair can lead to very different assignments of pairs to labels (de Marneffe et al., 2008; Marelli et al., 2014a; Bowman et al., 2015). Drawing an example from Bowman et al., the pair "a boat sank in the Pacific Ocean" and "a boat sank in the Atlantic Ocean" can be labeled either CONTRADICTION or NEUTRAL depending on (among other things) whether the two mentions of boats are assumed to refer to the same entity in the world.…”
Section: Data Collection (mentioning)
confidence: 99%
“…In this study, we verify the effectiveness of CWE on the benchmark dataset of a textual entailment recognition task, which consists of data with a contradiction, entailment, or neutral relation. The dataset is from Task 1 in SemEval 2014 [13]. The distribution of the data is shown in Table 3.…”
Section: Data Set for Contradiction Detection (mentioning)
confidence: 99%
“…We run experiments on benchmark datasets from SemEval 2014 [13]. The experimental results show that the proposed method with CWE performs comparably with top-performing systems in terms of overall classification accuracy.…”
Section: Introduction (mentioning)
confidence: 99%
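As a concrete illustration of the class distribution such studies report (the cited paper's Table 3 is not reproduced here), this short sketch tallies the three entailment labels, again assuming the hypothetical SICK-style TSV layout from the earlier example.

```python
# Tally the three-way entailment label distribution of a SICK-style
# TSV file. The column name entailment_judgment is an assumption.
import csv
from collections import Counter

def label_distribution(path):
    with open(path, newline="") as f:
        counts = Counter(row["entailment_judgment"]
                         for row in csv.DictReader(f, delimiter="\t"))
    total = sum(counts.values())
    return {label: (n, n / total) for label, n in counts.items()}
```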
“…CDSMs assume that the meaning of a word can be interpreted through its context and that the meaning of a sentence can be derived from its composition [17,24]. Central to CDSMs is the notion of compositionality, i.e., the meaning of a complex expression is determined by the meanings of its constituent expressions and the rules used to combine them. However, access to the annotated text and rules, or to corpora of symbolic-logic representations, is challenging to evaluate in operational settings, such as items generated from an item model.…”
Section: Measure of Semantic Relatedness (mentioning)
confidence: 99%
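To make the compositionality idea concrete, the sketch below uses the simplest CDSM composition operation, vector addition over word embeddings, and compares two sentences by cosine similarity. This is one illustrative composition function, not the specific models of [17,24], and the tiny embedding table is invented for the example.

```python
# Illustrative additive CDSM: a sentence vector is the elementwise sum
# of its word vectors, and relatedness is their cosine similarity.
# The 3-d embedding table below is a made-up stand-in for real vectors.
import math

EMB = {  # hypothetical word embeddings, for illustration only
    "a": [0.1, 0.0, 0.1],
    "boat": [0.7, 0.2, 0.1],
    "ship": [0.6, 0.3, 0.1],
    "sank": [0.1, 0.8, 0.3],
}

def compose(sentence):
    """Additive composition: sum the word vectors of the sentence."""
    dim = len(next(iter(EMB.values())))
    vec = [0.0] * dim
    for word in sentence.lower().split():
        for i, x in enumerate(EMB.get(word, [0.0] * dim)):
            vec[i] += x
    return vec

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

print(cosine(compose("a boat sank"), compose("a ship sank")))  # ~0.99, high relatedness
```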