2011
DOI: 10.1007/978-3-642-23577-1_9

Overview of the INEX 2010 Book Track: Scaling Up the Evaluation Using Crowdsourcing

Abstract: The goal of the INEX Book Track is to evaluate approaches for supporting users in searching, navigating and reading the full texts of digitized books. The investigation is focused around four tasks: 1) Best Books to Reference, 2) Prove It, 3) Structure Extraction, and 4) Active Reading. In this paper, we report on the setup and the results of these tasks in 2010. The main outcome of the track lies in the changes to the methodology for constructing the test collection for the evaluation of the Best Bo…

Cited by 16 publications (13 citation statements)
References 6 publications (6 reference statements)

“…In this section, we will briefly discuss the BB and PI tasks. Further details on all four tasks are available in [8].…”
Section: Aims and Tasks (mentioning)
confidence: 99%
“…Querying other well-known sources using the 4905 ISBNs, including Abebooks, Amazon Books, ISBNSearch, and BookFinder4U, we retrieved 814, 1275, 1170, and 1301 books respectively, while Google Books returned 4329 books with valid title and authors, covering all the others. The first four pages of these 4329 books are then extracted line by line and represented by the features shown in Table 1.…”
Section: Metadata Extraction (mentioning)
confidence: 99%
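As a rough illustration of the line-by-line representation described in that statement, the sketch below turns each line of a page into a small feature vector that a metadata extractor could classify. The feature set here is an assumption made for illustration only; it is not the actual feature list from the cited paper's Table 1.

# Illustrative sketch: represent each line of a scanned book page as a
# feature vector for metadata (title/author) extraction.
# The features below are hypothetical, not those of the cited paper's Table 1.

def line_features(line: str, line_index: int) -> dict:
    tokens = line.split()
    return {
        "line_index": line_index,            # position on the page
        "num_tokens": len(tokens),           # short lines often hold titles/authors
        "upper_ratio": sum(c.isupper() for c in line) / max(len(line), 1),
        "digit_ratio": sum(c.isdigit() for c in line) / max(len(line), 1),
        "ends_with_period": line.rstrip().endswith("."),
    }

def page_to_features(page_text: str) -> list[dict]:
    return [line_features(line, i) for i, line in enumerate(page_text.splitlines())]

if __name__ == "__main__":
    sample_page = "OVERVIEW OF THE INEX 2010 BOOK TRACK\nGabriella Kazai et al.\nSpringer, 2011"
    for feats in page_to_features(sample_page):
        print(feats)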
“…However, the evaluation of search and retrieval over large book repositories is still a difficult task. G. Kazai et al. used crowdsourcing for book search evaluation and found that well-designed crowdsourcing can be an effective tool for the evaluation of book IR systems [15,16]. A similar problem has also been tested in the social book search task [17].…”
Section: Book Search and Retrieval (mentioning)
confidence: 99%
“…For crowdsourcing experiments we use the dataset from Task 2 of the TREC 2011 Crowdsourcing Track [17]. The dataset is a collection of binary (relevant or not relevant) judgements from 762 workers for 19,033 documents.…”
Section: Crowdsourcing Experiments (mentioning)
confidence: 99%
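To make the shape of such data concrete, here is a minimal sketch that consolidates binary worker judgements into one label per document by majority vote. The (worker_id, doc_id, label) tuple format is an assumption for illustration; it is not the actual TREC 2011 Crowdsourcing Track distribution format, and majority vote is only the simplest possible aggregation.

from collections import defaultdict

# Minimal sketch: aggregate binary crowd judgements (worker_id, doc_id, label)
# into a single label per document via majority vote.

def majority_vote(judgements):
    votes = defaultdict(list)
    for worker_id, doc_id, label in judgements:
        votes[doc_id].append(label)            # label is 0 (not relevant) or 1 (relevant)
    return {doc: int(sum(labels) * 2 >= len(labels))   # ties break toward relevant
            for doc, labels in votes.items()}

if __name__ == "__main__":
    demo = [("w1", "d1", 1), ("w2", "d1", 0), ("w3", "d1", 1),
            ("w1", "d2", 0), ("w2", "d2", 0)]
    print(majority_vote(demo))   # {'d1': 1, 'd2': 0}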
“…This transformation makes it possible to apply any learning-to-rank method to optimize the parameters of the aggregating function for the target metric. Experiments on the crowdsourcing task from TREC 2011 [17] and meta-search tasks with Microsoft's LETOR 4.0 [20] data sets show that our model significantly outperforms existing aggregation methods.…”
Section: Introduction (mentioning)
confidence: 99%
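The statement above refers to treating label aggregation as a function with tunable parameters optimized against a target metric. The sketch below conveys only that general idea: a weighted aggregation over per-worker judgements whose weights are fitted by a crude grid search standing in for a proper learning-to-rank optimizer. It is not the cited paper's model; all names, shapes, and the synthetic data are assumptions made for illustration.

import numpy as np

# General idea only: a parameterized aggregation of worker judgements whose
# weights are tuned to maximize a target metric (accuracy here).
# This is NOT the cited model; the grid search stands in for a real optimizer.

def aggregate(labels: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """labels: (n_docs, n_workers) 0/1 matrix; weights: (n_workers,) reliabilities."""
    return labels @ weights / weights.sum()

def accuracy_at_threshold(scores, gold, threshold=0.5):
    return float(np.mean((scores >= threshold).astype(int) == gold))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gold = rng.integers(0, 2, size=50)                        # hypothetical gold labels
    agree = rng.random((50, 3)) < np.array([0.9, 0.7, 0.55])  # 3 workers, varying reliability
    labels = np.where(agree, gold[:, None], 1 - gold[:, None])

    best_w, best_acc = None, -1.0
    for w in np.ndindex(5, 5, 5):                 # coarse grid over worker weights
        weights = np.array(w, dtype=float) + 1e-6
        acc = accuracy_at_threshold(aggregate(labels, weights), gold)
        if acc > best_acc:
            best_w, best_acc = weights, acc
    print("best weights:", best_w, "accuracy:", round(best_acc, 3))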