The Lifelog Search Challenge (LSC) is an international content retrieval competition that evaluates search over personal lifelog data. At the LSC, content-based search is performed over a multimodal dataset, continuously recorded by a lifelogger over 27 days, consisting of multimedia content, biometric data, human activity data, and information activity data. In this work, we report on the first LSC, which took place in Yokohama, Japan in 2018 as a special workshop at the ACM International Conference on Multimedia Retrieval 2018 (ICMR 2018). We describe the general idea of the challenge, summarise the participating search systems and the evaluation procedure, and analyse the teams' search performance from several perspectives. We try to identify why some systems performed better than others, and provide an outlook and open issues for upcoming iterations of the challenge.
There is a long history of repeatable and comparable evaluation in Information Retrieval (IR). However, thus far, no shared test collection exists that has been designed to support interactive lifelog retrieval. In this paper we introduce the LSC2018 collection, which is designed to evaluate the performance of interactive retrieval systems. We describe the features of the dataset and report on the outcome of the first Lifelog Search Challenge (LSC), which used the dataset in an interactive competition at ACM ICMR 2018.
This paper presents an overview of the ImageCLEF 2018 evaluation campaign, an event that was organized as part of the CLEF (Conference and Labs of the Evaluation Forum) Labs 2018. ImageCLEF is an ongoing initiative (it started in 2003) that promotes the evaluation of technologies for annotation, indexing and retrieval, with the aim of providing information access to collections of images in various usage scenarios and domains. In 2018, the 16th edition of ImageCLEF ran three main tasks and a pilot task: (1) a caption prediction task that aims at predicting the caption of a figure from the biomedical literature based only on the figure image; (2) a tuberculosis task that aims at detecting the tuberculosis type, severity and drug resistance from CT (Computed Tomography) volumes of the lung; (3) a LifeLog task (videos, images and other sources) concerning daily-activity understanding and moment retrieval; and (4) a pilot task on visual question answering, where systems are tasked with answering medical questions. The strong participation, with over 100 research groups registering and 31 submitting results for the tasks, shows an increasing interest in this benchmarking campaign.
For the fifth time since 2018, the Lifelog Search Challenge (LSC) facilitated a benchmarking exercise comparing interactive search systems designed for multimodal lifelogs. LSC'22 attracted nine participating research groups, who developed interactive lifelog retrieval systems enabling fast and effective access to lifelogs. The systems competed in front of a hybrid audience at the LSC workshop at ACM ICMR'22. This paper introduces the LSC workshop, the new (larger) dataset used in the competition, and the participating lifelog search systems.
In this paper we present our interactive lifelog retrieval engine for the LSC'20 comparative benchmarking challenge. The LifeSeeker 2.0 interactive lifelog retrieval engine was developed jointly by Dublin City University and Ho Chi Minh University of Science, and represents an enhanced version of the two teams' corresponding interactive lifelog retrieval engines from LSC'19. LifeSeeker 2.0 focuses on search by text query using a Bag-of-Words model with visual concept augmentation, and adds improvements in query processing time, an enhanced result display with browsing support, and interactive visual graphs for both querying and filtering.
Test collections have a long history of supporting repeatable and comparable evaluation in Information Retrieval (IR). However, thus far, no shared test collection exists for IR systems that are designed to index and retrieve multimodal lifelog data. In this paper we introduce the first test collection for personal lifelog data, which has been employed for the NTCIR12-Lifelog task. We motivate the requirements for the test collection, describe the process of creating it, and give an overview of its contents. Finally, we suggest possible applications of the test collection.
Lifelogging refers to the process of digitally capturing a continuous and detailed trace of life activities in a passive manner. To assist the research community in making progress in the organisation and retrieval of data from lifelog archives, a lifelog task has been organised at NTCIR since edition 12. Lifelog-3, the third running of the lifelog task (at NTCIR-14), explored three lifelog data access challenges: the search challenge, the annotation challenge, and the insights challenge. In this paper we review the dataset created for this activity, describe the activities of the teams who took part in these challenges, and highlight learnings for the community from the NTCIR-Lifelog challenges.
Developing interactive lifelog retrieval systems is a growing research area. There are many international competitions for lifelog retrieval that encourage researchers to build effective systems addressing the multimodal retrieval challenge of lifelogs. The Lifelog Search Challenge (LSC) was first organised in 2018 and is currently the only interactive benchmarking evaluation for lifelog retrieval systems. Participating systems should have an accurate search engine and a user-friendly interface that helps users retrieve relevant content. In this paper, we upgrade our previous Myscéal, the top-performing system at LSC'20 and LSC'21, and present E-Myscéal for LSC'22, which includes a completely different search engine. Instead of using visual concepts for retrieval as Myscéal does, E-Myscéal employs an embedding technique that better serves novice users who are unfamiliar with the concept vocabulary. Our experiments show that the new search engine returns a relevant image at the first position in the ranked list for a quarter of the LSC'21 queries (26%), using only the first hint of the textual information need. Regarding the user interface, we retain the simple non-faceted design of the previous version but improve event-view browsing to better support novice users.