Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval 2016
DOI: 10.1145/2911451.2911498

Modeling Document Novelty with Neural Tensor Network for Search Result Diversification

Cited by 50 publications (38 citation statements)
References 29 publications
“…These prior methods either use a heuristic ranking model based on a predefined document similarity function, or they automatically learn a ranking model from predefined novelty features often based on cosine similarity. In contrast, Xia et al (2016) take automation a step further, using Neural Tensor Networks (NTN) to learn the novelty features themselves. The NTN architecture was first proposed to model the relationship between entities in a knowledge graph via a bilinear tensor product (Socher et al 2013).…”
Section: Learn
confidence: 99%
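The bilinear tensor product the excerpt refers to (Socher et al. 2013) scores a pair of embeddings as g(e1, e2) = u^T tanh(e1^T W^[1:k] e2 + V[e1; e2] + b), with one bilinear form per tensor slice. A minimal NumPy sketch; the dimensions and the name `ntn_score` are illustrative, not from the paper:

```python
import numpy as np

def ntn_score(e1, e2, W, V, b, u):
    """NTN score g(e1, e2) = u^T tanh(e1^T W[1:k] e2 + V [e1; e2] + b).

    e1, e2 : (d,)      embeddings of the two items being related
    W      : (k, d, d) slices of the bilinear tensor
    V      : (k, 2d)   standard feed-forward weights
    b      : (k,)      bias
    u      : (k,)      output weights
    """
    bilinear = np.einsum('i,kij,j->k', e1, W, e2)   # one value per tensor slice
    standard = V @ np.concatenate([e1, e2]) + b
    return u @ np.tanh(bilinear + standard)

rng = np.random.default_rng(0)
d, k = 4, 3
e1, e2 = rng.normal(size=d), rng.normal(size=d)
score = ntn_score(e1, e2,
                  rng.normal(size=(k, d, d)),
                  rng.normal(size=(k, 2 * d)),
                  rng.normal(size=k),
                  rng.normal(size=k))
```

Each of the k tensor slices captures a different bilinear interaction between the two embeddings, which is what lets the model go beyond a single fixed similarity function such as cosine.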
“…An additional inference step, which requires including the representations of the entire collection, is required for obtaining the embedding of an unobserved TTU. While PV (the first Learn to predict model) has been adopted in many studies (e.g., Xia et al 2016 use PV-DBOW to represent documents) and is often reported as a baseline (e.g., Tai et al 2015), concerns about reproducibility have also been raised. Kiros et al (2015) report results below SVM when re-implementing PV.…”
Section: How To Choose the Appropriate Category Of Learn?
confidence: 99%
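The "additional inference step" the excerpt mentions can be illustrated for PV-DBOW: the trained word-output matrix stays fixed, and a vector for the unseen text unit is fitted by gradient ascent on the likelihood of its words. This is an assumed, simplified sketch of that step (function name, hyperparameters, and the toy data are illustrative, not gensim's actual implementation):

```python
import numpy as np

def infer_doc_vector(word_ids, U, dim, epochs=300, lr=0.1, seed=0):
    """Fit a vector for an unseen document, PV-DBOW style (sketch).

    The trained word-output matrix U (vocab x dim) stays fixed; we run
    gradient ascent on sum_w log softmax(U @ d)[w] over the document's
    words to obtain the new document vector d.
    """
    rng = np.random.default_rng(seed)
    d = rng.normal(scale=0.01, size=dim)
    counts = np.bincount(word_ids, minlength=U.shape[0]).astype(float)
    n = counts.sum()
    for _ in range(epochs):
        logits = U @ d
        p = np.exp(logits - logits.max())
        p /= p.sum()
        d += lr * (U.T @ (counts - n * p)) / n  # gradient of the mean log-likelihood
    return d

rng = np.random.default_rng(1)
U = rng.normal(size=(5, 3))              # stand-in for trained word vectors
d = infer_doc_vector([0, 0, 0, 1], U, dim=3)
p = np.exp(U @ d - (U @ d).max())
p /= p.sum()                             # word distribution predicted by d
```

Because every new document requires this optimization, inference is markedly more expensive than a simple lookup, which is part of the reproducibility concern the excerpt raises.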
“…He et al [18] propose a result diversification framework based on query-specific clustering and cluster ranking, in which diversification is restricted to documents belonging to clusters that potentially contain a high percentage of relevant documents. More recent implicit work includes set-based recommendation of diverse articles [1], term-level diversification [14], diversified data fusion [26], and a neural-network-based diversification model [46]. Abbar et al [1] address the problem of providing diverse news recommendations related to an input article by leveraging user-generated data to refine lists of related articles.…”
Section: Search Results Diversification
confidence: 99%
“…Liang et al [26] start from the hypothesis that data fusion can improve performance in terms of diversity metrics, examine the impact of standard data fusion methods on search result diversification, and propose a diversified data fusion algorithm that infers the latent topics of a query using a topic model for diversification. Xia et al [46] propose to model the novelty of a document with a neural tensor network, learning a nonlinear novelty function from the preliminary representations of the candidate document and the other documents for diversification.…”
Section: Search Results Diversification
confidence: 99%
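The sequential selection these diversification methods share can be sketched as a greedy re-ranking loop. In the sketch below, the novelty term (one minus the maximum cosine similarity to the already-selected documents) is only a stand-in for the nonlinear novelty function that Xia et al [46] learn with a neural tensor network; the function name and the trade-off parameter are illustrative:

```python
import numpy as np

def greedy_diversify(relevance, doc_vecs, k, trade_off=0.5):
    """Greedy novelty-aware re-ranking (MMR-style sketch).

    At each step pick the unselected document maximizing
      trade_off * relevance + (1 - trade_off) * novelty(doc, selected),
    where novelty here is 1 - max cosine similarity to the selected set.
    """
    X = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    selected, remaining = [], list(range(len(relevance)))
    while remaining and len(selected) < k:
        best, best_score = None, -np.inf
        for i in remaining:
            nov = 1.0 if not selected else 1.0 - max(X[i] @ X[j] for j in selected)
            s = trade_off * relevance[i] + (1 - trade_off) * nov
            if s > best_score:
                best, best_score = i, s
        selected.append(best)
        remaining.remove(best)
    return selected

relevance = np.array([1.0, 0.95, 0.6])   # docs 0 and 1 are near-duplicates
doc_vecs = np.array([[1.0, 0.0], [1.0, 0.01], [0.0, 1.0]])
order = greedy_diversify(relevance, doc_vecs, k=3)
```

With these toy inputs, the slightly less relevant but dissimilar document 2 is promoted above the near-duplicate document 1, which is the behavior the learned novelty function is meant to produce.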