2013
DOI: 10.1002/asi.22789
Learning to rank using smoothing methods for language modeling

Abstract: The central issue in language model estimation is smoothing, a technique for avoiding the zero-probability estimation problem and overcoming data sparsity. There are three representative smoothing methods: the Jelinek‐Mercer (JM) method, Bayesian smoothing using Dirichlet priors (Dir), and the absolute discounting (Dis) method, whose parameters are usually estimated empirically. Previous research in information retrieval (IR) on smoothing parameter estimation tends to select a single value from optional va…
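The three smoothing schemes the abstract names have standard closed forms. A minimal, self-contained sketch of query-likelihood scoring under each of them is given below; the parameter defaults (lambda, mu, delta) are illustrative placeholders for the empirically tuned parameters the abstract refers to, and this is the underlying smoothing arithmetic, not the paper's learning-to-rank method.

```python
from collections import Counter
from math import log

def build_collection_model(docs):
    """Aggregate term counts over the whole collection for the background model."""
    coll = Counter()
    for doc in docs:
        coll.update(doc)
    return coll, sum(coll.values())

def p_jm(w, doc, doc_len, p_c, lam=0.5):
    """Jelinek-Mercer: interpolate the document ML estimate with the collection model."""
    return (1 - lam) * doc[w] / doc_len + lam * p_c

def p_dir(w, doc, doc_len, p_c, mu=2000.0):
    """Dirichlet prior: add mu * p(w|C) pseudo-counts to the document."""
    return (doc[w] + mu * p_c) / (doc_len + mu)

def p_dis(w, doc, doc_len, p_c, delta=0.7):
    """Absolute discounting: subtract delta from each seen count and
    redistribute the freed mass via the collection model."""
    seen = max(doc[w] - delta, 0.0) / doc_len
    return seen + delta * len(doc) / doc_len * p_c

def score(query, doc_tokens, coll, coll_total, smooth=p_dir, **params):
    """Query-likelihood score: sum of log-probabilities of query terms."""
    doc = Counter(doc_tokens)
    doc_len = len(doc_tokens)
    return sum(
        log(smooth(w, doc, doc_len, coll[w] / coll_total, **params))
        for w in query
    )

# Toy usage: two tiny "documents" form the collection.
docs = [["language", "model", "smoothing"], ["learning", "to", "rank"]]
coll, total = build_collection_model(docs)
print(score(["smoothing", "rank"], docs[0], coll, total, smooth=p_jm, lam=0.3))
```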

Cited by 5 publications (2 citation statements). References 29 publications (35 reference statements).
“…Greedy RankRLS [45] selects the feature subset of the maximal ranking performance for RankRLS [46] based on greedy forward selection and leave-query-out cross-validation. In [39], language modeling smoothing approaches with different parameters were proposed for selecting the ranking features. [16] considered a multi-objective Pareto-efficient method that optimizes both risk-sensitive evaluation and ranking performance.…”
Section: Feature Selection Methods for LTR
confidence: 99%
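To make the greedy forward selection with leave-query-out cross-validation described in the statement above concrete, here is a minimal sketch. Both the scorer (a plain least-squares regressor) and the metric (precision@1) are illustrative stand-ins of my choosing: Greedy RankRLS itself optimizes the regularized least-squares ranking objective of RankRLS, which is not reproduced here.

```python
import numpy as np

def leave_query_out_score(X_by_query, y_by_query, feats):
    """Train a least-squares scorer on all queries but one and measure
    precision@1 on the held-out query; average over all hold-outs."""
    n = len(X_by_query)
    hits = []
    for held in range(n):
        X_tr = np.vstack([X_by_query[q][:, feats] for q in range(n) if q != held])
        y_tr = np.concatenate([y_by_query[q] for q in range(n) if q != held])
        w, *_ = np.linalg.lstsq(X_tr, y_tr, rcond=None)
        pred = X_by_query[held][:, feats] @ w
        best = int(np.argmax(pred))
        hits.append(float(y_by_query[held][best] == y_by_query[held].max()))
    return float(np.mean(hits))

def greedy_forward_selection(X_by_query, y_by_query, max_feats=5):
    """Greedily add the feature whose inclusion most improves held-out
    ranking performance; stop when no candidate helps."""
    n_features = X_by_query[0].shape[1]
    selected, remaining, best_so_far = [], list(range(n_features)), -np.inf
    while remaining and len(selected) < max_feats:
        gains = {f: leave_query_out_score(X_by_query, y_by_query, selected + [f])
                 for f in remaining}
        f_best = max(gains, key=gains.get)
        if gains[f_best] <= best_so_far:
            break  # adding any further feature does not help held-out ranking
        best_so_far = gains[f_best]
        selected.append(f_best)
        remaining.remove(f_best)
    return selected, best_so_far

# Synthetic demo: 5 queries, 10 candidate documents each, 6 features;
# relevance is driven mostly by feature 2, which selection should find.
rng = np.random.default_rng(0)
X_by_query = [rng.normal(size=(10, 6)) for _ in range(5)]
y_by_query = [(X[:, 2] + 0.1 * rng.normal(size=10) > 0.5).astype(float)
              for X in X_by_query]
print(greedy_forward_selection(X_by_query, y_by_query))
```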
“…the ranking accuracy) of the selected features. Example algorithms include BRTree, which uses boosted regression trees [20], RankWrapper with Ranking SVM [21], BFS-Wrapper utilizing search [22], GreedyRankRLS with Rank RLS algorithm [23], and LMIR using smoothing language model [31].…”
Section: Feature Selection Methods for Learning to Rank
confidence: 99%