2018
DOI: 10.13053/cys-22-4-3077

Semi Supervised Graph Based Keyword Extraction Using Lexical Chains and Centrality Measures

Cited by 7 publications (3 citation statements)
References 9 publications
“…Lexical chains describe sets of semantically related words. Lexical chains can be created using three steps: (1) select a set of candidate words, (2) determine a suitable chain by calculating the semantic relatedness among members of the chain, and (3) if a chain exists, add the word and update the chain; else, create a new chain to fit the word [54,55]. The second step can be performed using an existing database of synsets, such as the one included in the WordNet corpus [56].…”
Section: Keyword Extraction (mentioning, confidence: 99%)
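The three-step procedure quoted above can be made concrete with a short sketch. The snippet below is a minimal, illustrative implementation assuming NLTK's WordNet interface; the Wu-Palmer similarity measure, the 0.5 threshold, and the requirement that a new word relate to every existing chain member are assumptions made for demonstration, not details taken from the cited papers.

```python
# Minimal sketch of the three-step lexical-chain construction described above.
# Assumes NLTK with the WordNet corpus installed (nltk.download('wordnet')).
# The similarity measure (Wu-Palmer) and the 0.5 threshold are illustrative
# choices, not those of the cited papers.
from nltk.corpus import wordnet as wn

def semantic_relatedness(word_a, word_b):
    """Best Wu-Palmer similarity over all noun synset pairs, 0.0 if none."""
    best = 0.0
    for sa in wn.synsets(word_a, pos=wn.NOUN):
        for sb in wn.synsets(word_b, pos=wn.NOUN):
            best = max(best, sa.wup_similarity(sb) or 0.0)
    return best

def build_lexical_chains(candidate_words, threshold=0.5):
    chains = []  # each chain is a list of semantically related words
    for word in candidate_words:               # step (1): candidate words
        placed = False
        for chain in chains:                   # step (2): relatedness to chain members
            if all(semantic_relatedness(word, member) >= threshold
                   for member in chain):
                chain.append(word)             # step (3a): add the word, update the chain
                placed = True
                break
        if not placed:
            chains.append([word])              # step (3b): start a new chain
    return chains

print(build_lexical_chains(["car", "automobile", "wheel", "banana"]))
```

Any other synset-based relatedness measure (path similarity, Leacock-Chodorow, etc.) could be substituted in step (2) without changing the chain-building loop.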
“…The second step can be performed using an existing database of synsets, such as the one included in the WordNet corpus [56]. Lexical chains and graph centrality measures were also used for keyword extraction in [55,57].…”
Section: Keyword Extraction (mentioning, confidence: 99%)
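For the graph-centrality side mentioned in this statement, a typical formulation builds a word co-occurrence graph and ranks nodes by a centrality score. The sketch below is an illustrative example using networkx; the sliding-window co-occurrence criterion and PageRank as the centrality measure are assumptions for demonstration, not necessarily the choices made in [55,57].

```python
# Sketch of graph-centrality-based keyword ranking over a co-occurrence graph.
# Window size and the use of PageRank as the centrality measure are
# illustrative assumptions.
import itertools
import networkx as nx

def rank_keywords(tokens, window=2, top_k=5):
    graph = nx.Graph()
    # Connect words that co-occur within the sliding window.
    for i in range(len(tokens) - window + 1):
        for a, b in itertools.combinations(tokens[i:i + window], 2):
            if a != b:
                graph.add_edge(a, b)
    centrality = nx.pagerank(graph)  # centrality score per word
    return sorted(centrality, key=centrality.get, reverse=True)[:top_k]

tokens = "graph based keyword extraction builds a word graph and ranks words".split()
print(rank_keywords(tokens))
```

Other centrality measures (degree, betweenness, closeness) can be dropped in by swapping the `nx.pagerank` call.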
“…Various subsequent approaches use variants of term occurrence measures with probabilities, such as the χ²-test, log likelihood (Dunning, 1993), and mutual information (Church and Hanks, 1990), or attempt to combine statistical measures with various types of linguistic and stop-word filters so as to refine the keyword results. Considerations regarding term ambiguity and variation also led to rule-based approaches (Jacquemin, 2001) and resource-based approaches exploiting existing thesauri and lexica, such as UMLS (Hliaoutakis et al., 2009) or WordNet (Aggarwal et al., 2018). Knowledge-poor statistical approaches, such as Latent Semantic Analysis (Deerwester et al., 1990) and Latent Dirichlet Allocation (Blei et al., 2003), attempt to detect document content in an unsupervised manner while reducing the dimensionality of the feature space of other bag-of-words approaches, but are also sensitive to sparse data and variation in short texts.…”
Section: Related Work (mentioning, confidence: 99%)
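As a concrete illustration of the mutual information association measure mentioned in the statement above (Church and Hanks, 1990), the snippet below computes pointwise mutual information from co-occurrence counts; the counts and corpus size are toy numbers invented for demonstration.

```python
# Illustrative pointwise mutual information (PMI) for a word-pair association.
# The counts and the corpus size N are made-up toy numbers.
import math

def pmi(count_xy, count_x, count_y, n_total):
    """PMI = log2( P(x, y) / (P(x) * P(y)) )."""
    p_xy = count_xy / n_total
    p_x = count_x / n_total
    p_y = count_y / n_total
    return math.log2(p_xy / (p_x * p_y))

# e.g. two words co-occur 30 times in a 1,000,000-token corpus
print(round(pmi(30, 150, 400, 1_000_000), 2))
```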