Lecture Notes in Computer Science
DOI: 10.1007/978-3-540-78646-7_33

Extending Probabilistic Data Fusion Using Sliding Windows

Abstract: Recent developments in the field of data fusion have seen a focus on techniques that use training queries to estimate the probability that various documents are relevant to a given query and use that information to assign scores to those documents on which they are subsequently ranked. This paper introduces SlideFuse, which builds on these techniques, introducing a sliding window in order to compensate for situations where little relevance information is available to aid in the estimation of probabilities. …
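As a rough illustration of the approach the abstract describes (a sketch, not the authors' implementation: the function names, the simple mean over the window, and the equal weighting of input systems are assumptions made here), position-level relevance probabilities can be estimated from training queries, smoothed with a sliding window, and summed to score documents:

```python
from collections import defaultdict

def position_probabilities(training_runs, qrels, depth):
    """Estimate P(relevant | rank position) for one input system.

    training_runs: list of ranked doc-id lists, one per training query
    qrels: list of sets of relevant doc ids, parallel to training_runs
    depth: number of rank positions to model
    """
    rel = [0] * depth
    seen = [0] * depth
    for ranking, relevant in zip(training_runs, qrels):
        for pos, doc in enumerate(ranking[:depth]):
            seen[pos] += 1
            if doc in relevant:
                rel[pos] += 1
    return [r / s if s else 0.0 for r, s in zip(rel, seen)]

def slide(probs, w):
    """Smooth position probabilities over a window of half-width w,
    so sparsely observed positions borrow evidence from neighbours."""
    out = []
    for i in range(len(probs)):
        lo, hi = max(0, i - w), min(len(probs), i + w + 1)
        out.append(sum(probs[lo:hi]) / (hi - lo))
    return out

def fuse(result_lists, probs_per_system):
    """Score each document by summing the smoothed probabilities of the
    positions at which the input systems returned it; rank by score."""
    scores = defaultdict(float)
    for ranking, probs in zip(result_lists, probs_per_system):
        for pos, doc in enumerate(ranking[:len(probs)]):
            scores[doc] += probs[pos]
    return sorted(scores, key=scores.get, reverse=True)
```

The sliding window is the point of the paper: a rank position observed in few training queries takes its probability mostly from its neighbours rather than from its own sparse counts.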

Cited by 15 publications (14 citation statements)
References 20 publications (21 reference statements)
“…• Precision at n documents (P@n) [3, 5–7, 9, 11, 12, 14, 15, 17, 18]. • Average Precision (AP) or Mean Average Precision (MAP) [2–18]. • Precision/Recall curves [4, 10, 12].…”
Section: A Use of Standard IR Metrics
confidence: 99%
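For readers unfamiliar with the metrics named in this passage, a minimal Python rendering of P@n and AP (MAP is simply the mean of AP over a set of queries; the helper names here are illustrative, not from any cited paper):

```python
def precision_at_n(ranking, relevant, n):
    """P@n: fraction of the top-n returned documents that are relevant."""
    return sum(1 for doc in ranking[:n] if doc in relevant) / n

def average_precision(ranking, relevant):
    """AP: mean of P@k taken at each rank k where a relevant document
    appears, divided by the total number of relevant documents."""
    hits, total = 0, 0.0
    for k, doc in enumerate(ranking, start=1):
        if doc in relevant:
            hits += 1
            total += hits / k
    return total / len(relevant) if relevant else 0.0
```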
“…In addition to comparing a proposed fusion algorithm with the component systems' outputs, evaluation typically also includes a comparison with competing data fusion algorithms. This can be seen in [2, 4–17, 20].…”
Section: Comparison with Other Fusion Algorithms
confidence: 99%
“…Unsupervised fusion methods such as BordaFuse [3], Condorcet Fusion [17] and CombMNZ [7] heuristically determine such a score based merely on the evidence provided by the search results, such as similarity, rank, and the number of models retrieving the document. Supervised methods, for example SegFuse [20], SlideFuse [14], PosFuse [13] and PLQA [23], obtain a better approximation of the score from training data, which shows how well the models performed on earlier queries. With such training data, supervised methods can further estimate the quality of the evidence, such as the accuracy of a rank position for a specific model [13] or the weight of a model [15], and adjust the ranking score accordingly.…”
Section: Introduction
confidence: 99%
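Of the unsupervised methods this passage names, CombMNZ is the easiest to state compactly: sum a document's normalised scores across the input systems, then multiply by the number of systems that returned it. A minimal sketch, assuming min-max normalisation of each system's scores (a common but not mandated choice):

```python
def comb_mnz(result_lists):
    """CombMNZ: sum each document's min-max normalised scores across the
    input systems, multiplied by how many systems retrieved it.

    result_lists: list of {doc_id: raw_score} dicts, one per system.
    Returns doc ids ranked by fused score, best first.
    """
    sums, hits = {}, {}
    for scores in result_lists:
        lo, hi = min(scores.values()), max(scores.values())
        span = (hi - lo) or 1.0  # all-equal scores normalise to 0
        for doc, s in scores.items():
            sums[doc] = sums.get(doc, 0.0) + (s - lo) / span
            hits[doc] = hits.get(doc, 0) + 1
    fused = {doc: sums[doc] * hits[doc] for doc in sums}
    return sorted(fused, key=fused.get, reverse=True)
```

The MNZ multiplier rewards documents retrieved by several systems, which is the "chorus effect" that supervised methods like SlideFuse refine with training evidence.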