2013
DOI: 10.1145/2568388.2568393

Report on INEX 2013

Abstract: INEX investigates focused retrieval from structured documents by providing large test collections of structured documents, uniform evaluation measures, and a forum for organizations to compare their results. This paper reports on the INEX 2013 evaluation campaign, which consisted of four activities addressing three themes: searching professional and user-generated data (Social Book Search track); searching structured or semantic data (Linked Data track); and focused retrieval (Snippet Re…

Cited by 9 publications (7 citation statements)
References 5 publications
“…stanford.edu/projects/glove/ The data consists of an English Wikipedia dump from November 2012. It was released as a test collection for the INEX 2013 tweet contextualisation track [8]. … semantic network bins, 2 distances and 4 dimensional bins, we have 15 features per set of word embeddings, and 60 features in total per sentence pair when, e.g., the 4 OoB sets are used.…”
Section: Feature Sets (mentioning)
confidence: 99%
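The arithmetic behind this snippet is simple: each set of word embeddings contributes 15 features, so the 4 OoB sets give 60 features per sentence pair. A minimal Python sketch of that concatenation is shown below; the function name, constants, and dummy inputs are illustrative assumptions, not the cited paper's code.

```python
# Hypothetical sketch of the feature-count arithmetic implied by the snippet:
# 15 features per set of word embeddings (semantic network bins, 2 distances,
# 4 dimensional bins), concatenated over 4 OoB sets -> 60 features per
# sentence pair. Names and values are illustrative, not the paper's code.

FEATURES_PER_EMBEDDING_SET = 15
NUM_OOB_SETS = 4

def sentence_pair_vector(feature_sets):
    """Concatenate the per-set feature vectors for one sentence pair."""
    assert all(len(fs) == FEATURES_PER_EMBEDDING_SET for fs in feature_sets)
    return [value for fs in feature_sets for value in fs]

if __name__ == "__main__":
    # Dummy feature sets standing in for the 4 OoB word-embedding sets.
    dummy_sets = [[0.0] * FEATURES_PER_EMBEDDING_SET for _ in range(NUM_OOB_SETS)]
    print(len(sentence_pair_vector(dummy_sets)))  # 60 features in total
```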
“…For TextBenDS, we use as metric only the query response time. We note the response time for each query as t(Q_i) and t(Q'_i), ∀i ∈ [1,4]. All queries Q_1 to Q_4 and Q'_1 to Q'_4 are executed 10 times for both top-k keywords and top-k documents, which is sufficient according to the central limit theorem.…”
Section: Performance Metrics and Execution Protocol (mentioning)
confidence: 99%
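A rough sketch of the execution protocol this snippet describes (each query timed over 10 runs, response time recorded per run) is given below, assuming a placeholder run_query function and query labels; none of these names correspond to the actual TextBenDS implementation.

```python
# Rough sketch of the protocol described above: execute each benchmark query
# 10 times and record its response time t(Q_i). run_query() and the query
# identifiers are placeholders, not the TextBenDS implementation.
import time
import statistics

NUM_RUNS = 10  # per the snippet, sufficient under the central limit theorem

def run_query(query):
    """Placeholder for executing one top-k keywords / top-k documents query."""
    time.sleep(0.01)  # stand-in for real query execution

def measure(query):
    """Return the list of response times over NUM_RUNS executions."""
    times = []
    for _ in range(NUM_RUNS):
        start = time.perf_counter()
        run_query(query)
        times.append(time.perf_counter() - start)
    return times

if __name__ == "__main__":
    for q in ["Q1", "Q2", "Q3", "Q4"]:
        runs = measure(q)
        print(q, f"mean response time: {statistics.mean(runs):.4f}s")
```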
“…The general background of the INEX workshop is best summarized in INEX reports published within INEX workshops [24][25][26][27][28][29][30] and the SIGIR Forum [1,2,22], the biannual publication of the ACM Special Interest Group on Information Retrieval (SIGIR). The evolution of the share of a http://pageperso.univ-lr.fr/antoine.doucet/structureExtraction/training.…”
Section: Related Publications (mentioning)
confidence: 99%