2018
DOI: 10.1007/978-3-030-05710-7_26

A Test Collection for Interactive Lifelog Retrieval

Abstract: There is a long history of repeatable and comparable evaluation in Information Retrieval (IR). However, thus far, no shared test collection exists that has been designed to support interactive lifelog retrieval. In this paper we introduce the LSC2018 collection, which is designed to evaluate the performance of interactive retrieval systems. We describe the features of the dataset and we report on the outcome of the first Lifelog Search Challenge (LSC), which used the dataset in an interactive competition at ACM…

Cited by 33 publications (29 citation statements)
References 15 publications
“…This part will also discuss the datasets [5,12] currently used for these campaigns. It will discuss the selection and preparation of large-scale datasets and methods of generating ground truth.…”
Section: Evaluation Campaigns (mentioning)
confidence: 99%
“…More recently, we note the introduction of a new challenge, specifically aimed at comparing approaches to interactive retrieval from lifelog archives. The Lifelog Search Challenge (LSC) [6] utilises a similar dataset [5] to the one used for the NTCIR14-Lifelog task. The LSC has occurred in 2018 and 2019 and attracted significant interest from participants.…”
Section: Related Interactive Lifelog Retrieval Systems (mentioning)
confidence: 99%
“…These changes were combined with a slightly revised interface to take into account the richer metadata and the content similarity functionality, as shown in Figures 4, 5, and 6. In the interactive search competition at LSC2019, this system performed among the top-ranked teams with an overall score of 68, compared to the vitrivr system [19], which was given a score of 100. Interestingly, the system significantly closed the gap to the NTCIR-14 system from HCMUS (which also competed at the LSC in 2019), which scored 72 in the competition.…”
Section: User Feedback (mentioning)
confidence: 99%
“…Associated with the Task Intelligence workshop was a data challenge, for which interested participants could download a rich personal data archive of lifelog data annotated with real-world tasks/activities. For the data challenge, the dataset employed for the Lifelog Search Challenge (LSC) [2], a participating workshop at the 2018 ACM International Conference on Multimedia Retrieval (ICMR), was used. The dataset consisted of 27 days of rich multimodal lifelog data from one lifelogger, organized into units of one-minute duration.…”
Section: Task Intelligence Data Challenge (mentioning)
confidence: 99%