Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), 2015
DOI: 10.18653/v1/s15-2030
NeRoSim: A System for Measuring and Interpreting Semantic Textual Similarity

Abstract: We present in this paper our system developed for SemEval 2015 Shared Task 2 (2a, English Semantic Textual Similarity, STS, and 2c, Interpretable Similarity) and the results of the submitted runs. For the English STS subtask, we used regression models combining a wide array of features, including semantic similarity scores obtained from various methods. One of our runs achieved a weighted mean correlation score of 0.784 for the sentence similarity subtask (i.e., English STS) and was ranked tenth among 74 runs submitt…
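The abstract's core idea, a regression model over a combination of similarity features, can be illustrated with a short sketch. This is not the authors' implementation: the two toy features and the choice of Ridge regression are assumptions standing in for the paper's much richer feature set.

```python
# Minimal sketch (not the authors' code) of a feature-based regression
# model for sentence similarity, assuming scikit-learn is available.
from sklearn.linear_model import Ridge


def word_overlap(a: str, b: str) -> float:
    """Jaccard overlap of word sets; one simple similarity feature."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0


def length_ratio(a: str, b: str) -> float:
    """Ratio of the shorter to the longer sentence length, in words."""
    la, lb = len(a.split()), len(b.split())
    return min(la, lb) / max(la, lb)


def features(a: str, b: str) -> list[float]:
    # A real STS system would add WordNet, embedding, alignment features, etc.
    return [word_overlap(a, b), length_ratio(a, b)]


# Tiny illustrative training set: sentence pairs with gold 0-5 similarity scores.
train = [
    ("A man is playing a guitar.", "A man plays guitar.", 4.8),
    ("A dog runs in the park.", "The stock market fell today.", 0.2),
    ("Two kids are playing soccer.", "Children play football outside.", 3.9),
]
X = [features(a, b) for a, b, _ in train]
y = [score for _, _, score in train]

model = Ridge(alpha=1.0).fit(X, y)
print(model.predict([features("A woman is slicing onions.", "Someone cuts an onion.")]))
```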

Cited by 26 publications (18 citation statements). References 18 publications.
“…• DTSim (Banjade et al., 2016): This team builds on the NeRoSim system (Banjade et al., 2015), which participated in the 2015 task with good results using a system based on manual rules blended with semantic similarity features. The team explored several chunking algorithms and included new rules.…”
Section: Systems, Tools and Resources (mentioning, confidence: 99%)
“…We built upon a previous system called NeRoSim (Banjade et al., 2015). The limitation of that system was that alignments were restricted to 1:1, i.e., each chunk could be aligned to at most one chunk in the other sentence.…”
Section: Chunk Alignment System (mentioning, confidence: 99%)
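The 1:1 restriction the quote refers to can be made concrete with a small sketch. This is a hedged illustration, not NeRoSim's actual alignment rules: the word-overlap similarity and the greedy strategy are assumptions.

```python
# Sketch of a greedy 1:1 chunk alignment: each chunk in one sentence is
# aligned to at most one chunk in the other. The similarity function is a
# toy word-overlap score, not NeRoSim's rule-based features.
def chunk_sim(c1: str, c2: str) -> float:
    s1, s2 = set(c1.lower().split()), set(c2.lower().split())
    return len(s1 & s2) / len(s1 | s2) if s1 | s2 else 0.0


def align_one_to_one(chunks1: list[str], chunks2: list[str], threshold: float = 0.3):
    """Greedy 1:1 alignment: repeatedly take the highest-scoring unused pair."""
    scored = sorted(
        ((chunk_sim(a, b), i, j)
         for i, a in enumerate(chunks1)
         for j, b in enumerate(chunks2)),
        reverse=True,
    )
    used1, used2, alignment = set(), set(), []
    for score, i, j in scored:
        if score < threshold:
            break
        if i not in used1 and j not in used2:  # the 1:1 restriction
            used1.add(i)
            used2.add(j)
            alignment.append((chunks1[i], chunks2[j], score))
    return alignment


print(align_one_to_one(["the old man", "sat", "on a bench"],
                       ["an old man", "rested", "on the bench"]))
```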
“…For this, we compute the textual similarity between extracted system use cases (taken as queries) and the regulations from the regulatory authority. We use the SEMILAR API [1] to implement the similarity measurement techniques, which assign each regulation a similarity score between 0 and 1 for a given system use case. The regulations are then sorted in decreasing order of similarity score, and the top 5 regulations are extracted from the regulations dataset.…”
Section: Automated Traceability Links Recovery (mentioning, confidence: 99%)
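The retrieval step described above (score every regulation against a use-case query, sort, keep the top 5) is straightforward to sketch. Note that SEMILAR is a Java library; the `similarity` function below is a placeholder assumption, not a SEMILAR call.

```python
# Sketch of the score-sort-truncate retrieval step from the quote.
def similarity(query: str, regulation: str) -> float:
    """Placeholder 0-1 similarity (word overlap); a SEMILAR score would go here."""
    q, r = set(query.lower().split()), set(regulation.lower().split())
    return len(q & r) / len(q | r) if q | r else 0.0


def top_regulations(query: str, regulations: list[str], k: int = 5) -> list[tuple[str, float]]:
    scored = [(reg, similarity(query, reg)) for reg in regulations]
    scored.sort(key=lambda pair: pair[1], reverse=True)  # decreasing similarity
    return scored[:k]
```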
“…The Meteor evaluation metric scores regulations by aligning them to system use cases on the basis of exact, stemmed, synonymous, and paraphrase matches between the words and phrases of the text statements [1].…”
Section: C1: Meteor (mentioning, confidence: 99%)
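Meteor's staged matching can be illustrated in miniature. This sketch is an assumption-laden simplification: the crude stemmer and the toy synonym table stand in for real stemming and WordNet resources, and real Meteor adds a paraphrase stage and computes a weighted F-score over the resulting alignment.

```python
# Minimal sketch of Meteor-style staged matching: words are matched in
# ordered stages (exact, then stemmed, then synonym), and each word may be
# matched only once.
def crude_stem(w: str) -> str:
    for suf in ("ing", "ed", "s"):
        if w.endswith(suf) and len(w) > len(suf) + 2:
            return w[: -len(suf)]
    return w


TOY_SYNONYMS = {"regulation": {"rule"}, "rule": {"regulation"}}  # stand-in for WordNet

def meteor_style_matches(hyp: list[str], ref: list[str]) -> list[tuple[int, int, str]]:
    stages = [
        ("exact", lambda a, b: a == b),
        ("stem", lambda a, b: crude_stem(a) == crude_stem(b)),
        ("synonym", lambda a, b: crude_stem(b) in TOY_SYNONYMS.get(crude_stem(a), set())),
    ]
    matched_h, matched_r, matches = set(), set(), []
    for name, test in stages:
        for i, h in enumerate(hyp):
            if i in matched_h:
                continue
            for j, r in enumerate(ref):
                if j not in matched_r and test(h.lower(), r.lower()):
                    matched_h.add(i)
                    matched_r.add(j)
                    matches.append((i, j, name))
                    break
    return matches


print(meteor_style_matches("the rules were updated".split(),
                           "the regulations are updated".split()))
```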