2014
DOI: 10.1007/978-3-319-11382-1_19

Overview of INEX 2014

Abstract: INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2014 evaluation campaign, which consisted of three tracks: The Interactive Social Book Search Track investigated user information-seeking behavior when interacting with various sources of information, for realistic task scenarios, and how the user interface impacts …

Cited by 14 publications (14 citation statements). References 6 publications (8 reference statements).
“…Probably the closest of these is the relatively unexplored task of Evidence Retrieval (ER) (Cartright et al., 2011; Bellot et al., 2013). However, while the focus of ER is on identifying whole documents, in CDED the goal is to pinpoint a typically much shorter text segment that can be used directly to support a claim.…”
Section: Related Work (mentioning)
Confidence: 99%
“…In 2013 the informativeness was estimated as the overlap of a summary with 3 pools of relevant passages: (1) a prior set (PRIOR) of relevant pages selected by the organizers (40 tweets, 380 passages); (2) a pool selection (POOL) of the most relevant passages (1,760) from participant submissions for 45 selected tweets; and (3) all relevant texts (ALL) merged together with extra passages from a random pool of 10 tweets (70 tweets, 2,378 relevant passages) [2]. The system was evaluated with three parameter sets.…”
Section: Discussion (mentioning)
Confidence: 99%
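As a rough illustration of this pool-based setup, a run can be scored against each reference pool separately. This is a sketch only: the pool contents, tweet ids, and the token-overlap scorer below are hypothetical placeholders, not the official INEX data or evaluation software.

```python
# Sketch of pool-based scoring: each pool maps a tweet id to its merged
# relevant reference passages, and a run's summaries are scored against
# each pool. All ids, texts, and the scorer are illustrative assumptions.

def token_overlap(reference: str, summary: str) -> float:
    """Proportion of distinct reference tokens that also appear in the summary."""
    ref = set(reference.lower().split())
    summ = set(summary.lower().split())
    return len(ref & summ) / len(ref) if ref else 0.0

# Hypothetical reference pools: tweet id -> merged relevant passages.
pools = {
    "PRIOR": {"t1": "passages selected by the organizers for tweet t1"},
    "POOL":  {"t1": "most relevant passages pooled from participant runs"},
    "ALL":   {"t1": "all relevant texts merged with extra random-pool passages"},
}

def evaluate(run: dict, pool: dict) -> float:
    """Mean overlap of a run's summaries with one reference pool."""
    scores = [token_overlap(pool[t], run[t]) for t in pool if t in run]
    return sum(scores) / len(scores) if scores else 0.0

run = {"t1": "a participant summary drawn from passages about tweet t1"}
for name, pool in pools.items():
    print(name, round(evaluate(run, pool), 3))
```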
“…Informativeness was estimated as the lexical overlap (uni, big, and skip denoting the proportion of shared unigrams, bigrams, and bigrams with gaps of two tokens, respectively) of a summary with the pool of relevant passages extracted from the runs submitted by all participants [2]. The official ranking was based on decreasing divergence from the gold standard, estimated by skip: …”
Section: Discussion (mentioning)
Confidence: 99%
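A minimal sketch of these three overlap proportions, assuming whitespace tokenization, lowercasing, and that "gaps of two tokens" means at most two intervening tokens; the official evaluation additionally applied stemming and a divergence-based aggregation that is not reproduced here.

```python
# Sketch of the uni/big/skip lexical-overlap proportions described above.
from collections import Counter

def unigrams(tokens):
    return Counter(tokens)

def bigrams(tokens):
    return Counter(zip(tokens, tokens[1:]))

def skip_bigrams(tokens, max_gap=2):
    # Ordered token pairs with at most `max_gap` intervening tokens
    # (assumption: "gaps of two tokens" = up to two tokens in between).
    pairs = Counter()
    for i, left in enumerate(tokens):
        for j in range(i + 1, min(i + 2 + max_gap, len(tokens))):
            pairs[(left, tokens[j])] += 1
    return pairs

def overlap(reference, summary):
    # Proportion of the reference n-gram mass covered by the summary.
    shared = sum(min(c, summary[g]) for g, c in reference.items())
    total = sum(reference.values())
    return shared / total if total else 0.0

def informativeness(reference_text, summary_text):
    ref = reference_text.lower().split()
    summ = summary_text.lower().split()
    return {
        "uni": overlap(unigrams(ref), unigrams(summ)),
        "big": overlap(bigrams(ref), bigrams(summ)),
        "skip": overlap(skip_bigrams(ref), skip_bigrams(summ)),
    }
```

For example, informativeness("the cat sat on the mat", "the cat sat") gives uni = 0.5, since the summary covers three of the six reference unigram occurrences; higher overlap indicates a more informative summary.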
“…The overall goal of the Social Book Search lab at CLEF (Conference and Labs of the Evaluation Forum) in 2014 [1] and 2015 [6] was to investigate how professional metadata (title, authors, ...) can be combined with social metadata (tags, reviews) to satisfy an information need. Within this, the Interactive Social Book Search Task (iSBS) looked at how the two types of metadata can be combined in the search interface, and how users make use of the two metadata searches when interacting with the interface to complete a task.…”
Section: Interactive Social Book Search (iSBS) (mentioning)
Confidence: 99%