2014
DOI: 10.1007/978-3-319-13560-1_44
An Assessment of Online Semantic Annotators for the Keyword Extraction Task

Cited by 19 publications (16 citation statements). References 13 publications.
“…Specifically, in Table 1(a) we present the results obtained by comparing the extracted keywords to the top 15 keywords in the Crowd500 dataset, while the results obtained by considering all of the gold-standard keywords are provided in Table 1(b). Regarding the first experiment, over the top 15 keywords, we note that for 3 out of the 5 considered metrics (namely, NASARI_E, UCI, and UMASS) the F1 score is higher than those reported in the paper by [6]. In the second experiment, NASARI_E, UCI, and UMASS likewise obtained the highest F1 scores, whilst NASARI and ttcs_E achieved the highest precision.…”
Section: Discussion
confidence: 71%
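The excerpt above scores extractors by precision, recall, and F1 against gold-standard keywords, both over the full gold set and over the top 15 keywords per document in Crowd500. Below is a minimal sketch of that protocol; the function and variable names are illustrative rather than taken from the cited papers, and exact matching after lowercasing is an assumption (the original evaluation may normalize keywords differently, e.g. via stemming).

```python
def keyword_f1(extracted, gold, top_k=None):
    """Return (precision, recall, F1) of extracted vs. gold keywords.

    Illustrative sketch: exact matching after lowercasing/stripping is an
    assumption; the papers cited above may use fuzzier matching.
    """
    if top_k is not None:
        gold = gold[:top_k]  # e.g. restrict to the top-15 gold keywords
    extracted_set = {k.lower().strip() for k in extracted}
    gold_set = {k.lower().strip() for k in gold}
    hits = len(extracted_set & gold_set)  # true positives
    precision = hits / len(extracted_set) if extracted_set else 0.0
    recall = hits / len(gold_set) if gold_set else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Usage: one system's output scored against all gold keywords and the top 15.
extracted = ["semantic annotation", "keyword extraction", "wikipedia"]
gold = ["keyword extraction", "semantic annotation", "entity linking",
        "wikipedia", "text mining"]
print(keyword_f1(extracted, gold))             # against all gold keywords
print(keyword_f1(extracted, gold, top_k=15))   # against the top-15 subset
```

Running the two variants separately mirrors the two experiments reported in the excerpt: the top-k setting rewards ranking the most salient keywords highly, while the full-set setting measures coverage of the entire gold standard.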
“…In the following, for the sake of self-containment, we report the experimental results obtained by [6], where the authors performed a systematic assessment of an array of keyword extractors and online semantic annotators. In particular, we report the results obtained by 2 keyword extractors that participated in the "SemEval-2010 Task 5: Automatic Keyphrase Extraction from Scientific Articles" (namely, KP-Miner [3] and Maui [20]), and 5 semantic annotators (AlchemyAPI, Zemanta, OpenCalais, TagMe, and TextRazor).…”
Section: Discussion
confidence: 99%