2016 2nd IEEE International Conference on Computer and Communications (ICCC)
DOI: 10.1109/compcomm.2016.7925072
Research on keyword extraction based on Word2Vec weighted TextRank

Cited by 10 publications (4 citation statements)
References 1 publication
“…In 2019, Li et al. [42] proposed a model whose performance remained good and largely stable with respect to the F-measure; from the curve of this measure, when the number of extracted keywords N was 7, the F-measure reached a maximum of 43.1%. This compares with Xia's work [43], which introduced the basic idea of using TextRank for keyword extraction; its process of constructing candidate keywords achieved an F-measure of up to 37.28%. All of these previous results were lower than ours, where the F-measure reached 76.75%.…”
Section: Discussion
confidence: 99%
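The statement above refers to TextRank's basic idea for keyword extraction: build a word co-occurrence graph and rank nodes with PageRank. The following is a minimal illustrative sketch of that idea, not the cited papers' implementation; the tokenized input, window size, and hyperparameters are assumptions for demonstration.

```python
from collections import defaultdict

def textrank_keywords(words, window=2, damping=0.85, iters=50, top_n=3):
    """Toy TextRank: co-occurrence graph over pre-tokenized `words`,
    scored with a plain PageRank iteration."""
    # Undirected co-occurrence edges within a sliding window.
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    nodes = list(neighbors)
    score = {w: 1.0 for w in nodes}
    for _ in range(iters):
        new = {}
        for w in nodes:
            rank = sum(score[u] / len(neighbors[u]) for u in neighbors[w])
            new[w] = (1 - damping) + damping * rank
        score = new
    return sorted(nodes, key=score.get, reverse=True)[:top_n]

# Hypothetical pre-filtered token stream (stopwords already removed).
words = ("keyword extraction ranks candidate keyword nodes by graph "
         "centrality keyword graph ranks").split()
print(textrank_keywords(words))
```

Words that co-occur with many distinct terms accumulate rank, which is why frequent hub terms surface as keywords; weighted variants (such as the Word2Vec-weighted TextRank this paper proposes) replace the uniform edge contribution with similarity-based weights.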
“…The text-based semantic model we adopted as a control is word2vec (Mikolov, Chen, et al., 2013), an efficient word representation algorithm which has gathered vast success in computational linguistics (e.g., Baroni et al., 2014; Wen et al., 2016; Xiao et al., 2018) and is also widely employed in cognitive science (see for instance Hollenstein et al., 2019; Mandera et al., 2017; Mitchell et al., 2008). In particular, we employed the word embeddings generated by the Skip-Gram with Negative Sampling (SGNS) model, which is trained on large corpora of natural language data to predict context words within a certain window based on the target item.…”
Section: Semanticscape Model
confidence: 99%
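The SGNS objective described in this statement — predict context words within a window, while pushing the target away from randomly sampled words — can be sketched in a few lines of NumPy. This is a toy illustration of the training rule, not gensim's or the cited work's implementation; the corpus, dimensionality, and hyperparameters are assumptions chosen for demonstration.

```python
import numpy as np

def train_sgns(corpus, dim=16, window=2, neg=3, epochs=200, lr=0.05, seed=0):
    """Toy Skip-Gram with Negative Sampling: for each (target, context)
    pair within `window`, pull the pair's vectors together and push the
    target away from `neg` randomly drawn vocabulary words."""
    rng = np.random.default_rng(seed)
    vocab = sorted({w for sent in corpus for w in sent})
    idx = {w: i for i, w in enumerate(vocab)}
    W_in = rng.normal(scale=0.1, size=(len(vocab), dim))   # target vectors
    W_out = rng.normal(scale=0.1, size=(len(vocab), dim))  # context vectors
    sigmoid = lambda x: 1.0 / (1.0 + np.exp(-x))
    for _ in range(epochs):
        for sent in corpus:
            for i, w in enumerate(sent):
                for j in range(max(0, i - window), min(len(sent), i + window + 1)):
                    if j == i:
                        continue
                    t, c = idx[w], idx[sent[j]]
                    # One positive pair (label 1) plus `neg` negatives (label 0).
                    samples = [(c, 1.0)] + [(int(k), 0.0)
                                            for k in rng.integers(0, len(vocab), neg)]
                    for s, label in samples:
                        g = (sigmoid(W_in[t] @ W_out[s]) - label) * lr
                        d_in, d_out = g * W_out[s], g * W_in[t]
                        W_in[t] -= d_in
                        W_out[s] -= d_out
    return vocab, idx, W_in

# Hypothetical miniature corpus: words sharing contexts end up nearby.
corpus = [["king", "rules", "kingdom"], ["queen", "rules", "kingdom"],
          ["cat", "chases", "mouse"], ["dog", "chases", "mouse"]]
vocab, idx, vecs = train_sgns(corpus)

def sim(a, b):
    va, vb = vecs[idx[a]], vecs[idx[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))
```

Because "king" and "queen" share the contexts "rules" and "kingdom", their target vectors are pulled toward the same context vectors, so `sim("king", "queen")` ends up higher than the similarity between words that never share a context.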
“…Keywords are effective terms for bibliometric analysis that investigate the knowledge structure of scientific fields, but are less exhaustive in representing the content of an article [71]. Therefore, the behavior of these terms with time needs to be analyzed.…”
Section: Figure 11 Word Treemap
confidence: 99%