Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval-2016) 2016
DOI: 10.18653/v1/s16-1113

UNBNLP at SemEval-2016 Task 1: Semantic Textual Similarity: A Unified Framework for Semantic Processing and Evaluation

Abstract: In this paper we consider several approaches to predicting semantic textual similarity using word embeddings, as well as methods for forming embeddings for larger units of text. We compare these methods to several baselines and find that none of them outperforms the baselines. We then consider supervised and unsupervised approaches to combining these methods, which achieve modest improvements over the baselines.
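As a rough illustration of the kind of approach the abstract describes, the sketch below composes a sentence embedding by averaging word vectors and scores a sentence pair with cosine similarity. The toy embedding table and sentences are illustrative assumptions, not the paper's actual models or data.

# Minimal sketch: average word embeddings into a sentence vector,
# then score a sentence pair with cosine similarity.
# The small `embeddings` dict below is a stand-in for a real embedding model.
import numpy as np

embeddings = {
    "a": np.array([0.1, 0.3, 0.2]),
    "dog": np.array([0.7, 0.1, 0.4]),
    "runs": np.array([0.2, 0.8, 0.1]),
    "cat": np.array([0.6, 0.2, 0.5]),
    "sleeps": np.array([0.1, 0.7, 0.3]),
}

def sentence_vector(tokens, emb):
    # Compose a sentence embedding as the mean of its known word vectors.
    vecs = [emb[t] for t in tokens if t in emb]
    if not vecs:
        return np.zeros(next(iter(emb.values())).shape)
    return np.mean(vecs, axis=0)

def cosine(u, v):
    # Cosine similarity with a guard against zero vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

s1 = sentence_vector("a dog runs".split(), embeddings)
s2 = sentence_vector("a cat sleeps".split(), embeddings)
print(cosine(s1, s2))  # similarity score in [-1, 1]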

Cited by 4 publications (2 citation statements)
References 2 publications
“…We compared our method with state-of-the-art methods on the SICK and SemEval 2016 datasets (Bentivogli et al., 2016; Agirre et al., 2016). Some of these methods have recently been evaluated in the SemEval 2016 STS competition (King et al., 2016; Cer et al., 2017).…”
Section: Datasets
confidence: 99%
“…In this subsection we present the state of the art in STS Task 1 using Paragraph Vector, since it is the most relevant to our work. King et al. (2016), for instance, use Paragraph Vector as one approach in the English monolingual sub-task. Results are reported for a single vector size, with the cosine metric used to obtain the similarity score between sentences.…”
Section: Semantic Textual Similarity
confidence: 99%
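As a rough illustration of the Paragraph Vector setup the quoted passage describes (a single vector size, cosine similarity between sentence vectors), the sketch below assumes gensim's Doc2Vec as the Paragraph Vector implementation; the tiny corpus and hyperparameters are placeholders, not those used by King et al. (2016).

# Minimal sketch: train Paragraph Vector (gensim Doc2Vec) with one fixed
# vector size, then compare two sentences by cosine similarity of their
# inferred vectors. Corpus and hyperparameters are illustrative only.
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

corpus = [
    TaggedDocument(words="a dog runs in the park".split(), tags=[0]),
    TaggedDocument(words="a cat sleeps on the mat".split(), tags=[1]),
]

model = Doc2Vec(vector_size=100, min_count=1, epochs=40)  # single vector size
model.build_vocab(corpus)
model.train(corpus, total_examples=model.corpus_count, epochs=model.epochs)

def cosine(u, v):
    # Cosine similarity with a guard against zero vectors.
    denom = np.linalg.norm(u) * np.linalg.norm(v)
    return float(u @ v / denom) if denom else 0.0

v1 = model.infer_vector("a dog runs in the park".split())
v2 = model.infer_vector("a cat sleeps on the mat".split())
print(cosine(v1, v2))  # cosine similarity between the inferred sentence vectors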