2013
DOI: 10.1007/s10115-013-0618-x
Combining compound and single terms under language model framework

Abstract: Most existing Information Retrieval models, including probabilistic and vector space models, are based on the term independence hypothesis. To go beyond this assumption and thereby capture the semantics of documents and queries more accurately, several works have incorporated phrases or other syntactic information into IR; such attempts have shown slight benefit, at best. In language modeling approaches in particular, this extension is achieved through the use of bigram or n-gram models. However, in these models all…
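The abstract mentions extending language modeling retrieval with bigram models. As a minimal sketch (not the paper's exact model), a document can score a query by interpolating a bigram probability with a unigram fallback; the function name and the smoothing weight `lam` are illustrative assumptions.

```python
from collections import Counter

def bigram_lm_score(doc_tokens, query_tokens, lam=0.5):
    """Score a query under an interpolated unigram/bigram document
    language model: P(q_i | q_{i-1}, D) mixed with P(q_i | D).
    This is a generic sketch, not the model proposed in the paper."""
    n = len(doc_tokens)
    uni = Counter(doc_tokens)                      # unigram counts
    bi = Counter(zip(doc_tokens, doc_tokens[1:]))  # adjacent-pair counts
    score = 1.0
    prev = None
    for t in query_tokens:
        p_uni = uni[t] / n if n else 0.0
        if prev is not None and uni[prev]:
            p_bi = bi[(prev, t)] / uni[prev]       # MLE bigram probability
            p = lam * p_bi + (1 - lam) * p_uni     # linear interpolation
        else:
            p = p_uni                              # first query term: unigram only
        score *= p
        prev = t
    return score
```

With this scoring, a query whose terms appear adjacently in the document (e.g. "the quick") scores higher than the same terms in an order the document never uses, which is the kind of dependence a pure unigram model cannot capture.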

Cited by 6 publications (4 citation statements)
References 34 publications (61 reference statements)
“…We think that it can be further enhanced by considering frequent subconcepts, as in the approach proposed in Hammache et al. (). Our approach to recognizing concepts is somewhat limited by word order, because we first extract collocations of one to three words. We could overcome this limitation by relaxing concept word adjacency and recognizing concepts in a larger context, such as a text passage. The number of trials for this experiment is the total number of queries in the WSJ data set (49) plus the total number from the AP data set (47).…”
Section: Discussion
confidence: 99%
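The citing text describes first extracting collocations of one to three words. A minimal sketch of that extraction step (the function name and interface are illustrative assumptions, not the cited work's implementation):

```python
def collocations(tokens, max_len=3):
    """Enumerate candidate collocations: all contiguous word n-grams
    of length 1..max_len, as sketched in the citing text."""
    out = []
    for n in range(1, max_len + 1):
        for i in range(len(tokens) - n + 1):
            out.append(tuple(tokens[i:i + n]))
    return out
```

In practice the candidates would then be filtered by frequency or an association measure to keep only genuine collocations; the sketch shows only the enumeration.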
“…() and Hammache et al. () estimate concept weights in the same way. The intuition is that authors tend to use subconcepts to refer to a concept they have previously used in the document.…”
Section: Concept-based Language Model
confidence: 99%