Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016), 2016
DOI: 10.18653/v1/s16-1124

UWB at SemEval-2016 Task 2: Interpretable Semantic Textual Similarity with Distributional Semantics for Chunks

Abstract: We have built a simple corpus-based system to estimate word similarity in multiple languages with a count-based approach. After training on Wikipedia corpora, our system was evaluated on the multilingual subtask of SemEval-2017 Task 2 and achieved a good level of performance, despite its great simplicity. Our results tend to demonstrate the power of the distributional approach in semantic similarity tasks, even without knowledge of the underlying language. We also show that dimensionality reduction has a co…
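The abstract outlines a count-based distributional pipeline: co-occurrence counts gathered from a corpus, dimensionality reduction, and similarity computed between the resulting word vectors. The sketch below is a minimal, hypothetical illustration of that pipeline; the toy corpus, window size, vocabulary cut-off, and number of SVD components are illustrative choices, not values from the paper.

```python
# Minimal sketch of a count-based distributional similarity pipeline
# (co-occurrence counts + SVD + cosine), assuming a tokenised corpus.
# Window size, dimensionality, and vocabulary size are illustrative.
from collections import Counter
from scipy.sparse import csr_matrix
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics.pairwise import cosine_similarity

def build_cooccurrence(sentences, window=5, max_vocab=50000):
    """Count word co-occurrences within a symmetric context window."""
    freq = Counter(w for s in sentences for w in s)
    vocab = {w: i for i, (w, _) in enumerate(freq.most_common(max_vocab))}
    rows, cols, vals = [], [], []
    for sent in sentences:
        ids = [vocab[w] for w in sent if w in vocab]
        for i, wi in enumerate(ids):
            for wj in ids[max(0, i - window):i]:
                rows += [wi, wj]
                cols += [wj, wi]
                vals += [1.0, 1.0]
    n = len(vocab)
    return csr_matrix((vals, (rows, cols)), shape=(n, n)), vocab

def word_similarity(w1, w2, vectors, vocab):
    """Cosine similarity between two reduced word vectors."""
    v1, v2 = vectors[vocab[w1]], vectors[vocab[w2]]
    return float(cosine_similarity([v1], [v2])[0, 0])

# Usage with a toy corpus; a real run would stream Wikipedia sentences.
corpus = [["the", "cat", "sat", "on", "the", "mat"],
          ["the", "dog", "sat", "on", "the", "rug"]]
counts, vocab = build_cooccurrence(corpus, window=2)
vectors = TruncatedSVD(n_components=2).fit_transform(counts)  # dimensionality reduction
print(word_similarity("cat", "dog", vectors, vocab))
```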

Cited by 12 publications (5 citation statements) · References 16 publications
“…From the results we can see that labeling the type was the most challenging. Regarding the overall test results for type and score (+TS) across datasets, UWB (Konopík et al., 2016) and DTSim (Banjade et al., 2016) obtained the best results for the gold chunks scenario, and DTSim and FBK-HLT-NLP (Magnolini et al., 2016) for the system chunks scenario. In addition, DTSim obtained the best overall results even though they did not have good results for the Answer-Students dataset.…”
Section: Results
“…• UWB (Konopík et al., 2016): UWB used three separate supervised classifiers to perform alignment, scoring, and typing. They defined a similarity function based on the distributional similarity paradigm: vector composition, lexical semantic vectors, and IDF weighting.…”
Section: Systems, Tools and Resources
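As a rough illustration of the similarity function described in the citation above, the sketch below composes a chunk vector as an IDF-weighted average of its word vectors and compares chunks by cosine similarity. The embedding lookup, IDF table, and function names are illustrative placeholders, not taken from the UWB implementation.

```python
# Hedged sketch of IDF-weighted vector composition for chunk similarity.
# `embeddings` (word -> vector) and `idf` (word -> weight) stand in for
# whatever lexical semantic vectors and IDF statistics a system provides.
import numpy as np

def chunk_vector(chunk_tokens, embeddings, idf, dim=300):
    """Compose one vector per chunk as an IDF-weighted average of word vectors."""
    acc, total = np.zeros(dim), 0.0
    for tok in chunk_tokens:
        if tok in embeddings:
            w = idf.get(tok, 1.0)
            acc += w * embeddings[tok]
            total += w
    return acc / total if total > 0 else acc

def chunk_similarity(chunk_a, chunk_b, embeddings, idf, dim=300):
    """Cosine similarity between two composed chunk vectors."""
    va = chunk_vector(chunk_a, embeddings, idf, dim)
    vb = chunk_vector(chunk_b, embeddings, idf, dim)
    denom = np.linalg.norm(va) * np.linalg.norm(vb)
    return float(va @ vb / denom) if denom > 0 else 0.0
```

A score of this kind could then feed separate alignment, scoring, and typing classifiers of the sort the citation mentions.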
“…One related task is interpretable STS, which aims to predict chunk alignment between two sentences. For this task, a variety of supervised approaches were proposed based on neural networks (Konopík et al., 2016), linear programming (Tekumalla and Jat, 2016), and pretrained models (Maji et al., 2020). However, these methods cannot predict the similarity between sentences because they focus on finding chunk alignment only.…”
Section: Semantic Textual Similarity
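To make the notion of chunk alignment concrete, here is a minimal, hypothetical sketch that greedily pairs chunks of two sentences using a simple token-overlap score; real interpretable STS systems replace this scorer with the supervised or distributional models described above.

```python
# Hypothetical greedy chunk aligner: pairs each chunk of sentence A with the
# best-scoring unused chunk of sentence B using token overlap (Jaccard).
def jaccard(a, b):
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def align_chunks(chunks_a, chunks_b, threshold=0.1):
    """Return (index_a, index_b, score) triples; low-scoring chunks stay unaligned."""
    used_b, alignment = set(), []
    for i, ca in enumerate(chunks_a):
        scored = [(jaccard(ca, cb), j) for j, cb in enumerate(chunks_b) if j not in used_b]
        if not scored:
            continue
        score, j = max(scored)
        if score >= threshold:
            alignment.append((i, j, score))
            used_b.add(j)
    return alignment

# Example: chunked sentences (each chunk is a list of tokens).
a = [["the", "old", "man"], ["walks"], ["to", "the", "store"]]
b = [["an", "old", "man"], ["goes"], ["to", "a", "shop"]]
print(align_chunks(a, b))
```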
“…Konopik et al. [40] introduced a system for the estimation of semantic textual similarity at SemEval-2016. The core of this system consisted of exploiting distributional semantics to compare the similarity of sentence chunks.…”
Section: Literature Review