Proceedings of the Ninth ACM International Conference on Web Search and Data Mining 2016
DOI: 10.1145/2835776.2835801
Semantic Documents Relatedness using Concept Graph Representation

Cited by 44 publications (40 citation statements)
References 12 publications
“…This novel representation has recently allowed researchers to design new algorithms that significantly boost the performance of known approaches in several IR applications, such as query understanding, document clustering and classification, text mining, etc. [15,17,36,39,43,52].…”
Section: Related Work
confidence: 99%
“…Word embeddings [30] are a recent Natural Language Processing (NLP) technique that aims at mapping words or phrases to low-dimensional numerical vectors that are faster to manipulate and offer interesting distributional properties for comparing and retrieving "similar" words or phrases [30]. This latent representation has recently been extended [37,38] to learn two different forms of representations of Wikipedia entities [39]: (1) ENTITY2VEC [37] learns the latent representation of entities by working at textual level over the content of Wikipedia pages, and (2) DEEPWALK [38] learns the latent representation of entities by working on the hyperlink structure of the Wikipedia graph via the execution of random walks that start from a focus node (i.e. the entity to be embedded) and visit other nearby nodes (which provide its contextual knowledge).…”
Section: Related Work
confidence: 99%
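The DEEPWALK procedure described above can be sketched in a few lines: truncated random walks are generated over the hyperlink graph, and each walk is then treated as a "sentence" for a skip-gram (word2vec-style) model. The toy graph and function names below are illustrative assumptions, not taken from the cited papers; only the walk-generation step is shown, with the embedding-training step left to a standard library such as gensim.

```python
import random

# Hypothetical toy hyperlink graph over a few Wikipedia-style entities.
graph = {
    "Barack_Obama": ["United_States", "Harvard_Law_School"],
    "United_States": ["Barack_Obama", "Washington,_D.C."],
    "Harvard_Law_School": ["Barack_Obama", "United_States"],
    "Washington,_D.C.": ["United_States"],
}

def random_walk(graph, start, length, rng):
    """One truncated random walk starting at the focus node `start`."""
    walk = [start]
    for _ in range(length - 1):
        neighbors = graph.get(walk[-1])
        if not neighbors:          # dead end: stop the walk early
            break
        walk.append(rng.choice(neighbors))
    return walk

def generate_corpus(graph, walks_per_node=2, walk_length=5, seed=0):
    """Collect walks from every node; DeepWalk feeds these as
    'sentences' to a skip-gram model to learn entity embeddings."""
    rng = random.Random(seed)
    corpus = []
    nodes = sorted(graph)
    for _ in range(walks_per_node):
        rng.shuffle(nodes)         # randomize visiting order each pass
        for node in nodes:
            corpus.append(random_walk(graph, node, walk_length, rng))
    return corpus

corpus = generate_corpus(graph)
```

Each walk places an entity next to the entities reachable from it by hyperlinks, so co-occurrence within a walk plays the role that co-occurrence within a sentence plays for ordinary word embeddings.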
“…E2V utilizes textual information to capture latent word relationships. Similar to Zhao et al (2015); Ni et al (2016), we use Wikipedia articles as training corpus to learn word vectors and reserved hyperlinks between entities.…”
Section: Models for Comparison
confidence: 99%
“…In contrast to the approaches outlined above, which assign weights to concepts of an ontology, the approach presented by Ni et al [107] is concerned with assigning weights to concepts acquired from the knowledge base resource known as DBpedia [13]. The approach employs two assignment methods to compute and assign weights to a concept: a global method and a local method.…”
Section: Weighting Scheme - Concept Importance
confidence: 99%