Proceedings of the 5th Annual Lifelog Search Challenge (LSC '22)
DOI: 10.1145/3512729.3533012

E-Myscéal: Embedding-based Interactive Lifelog Retrieval System for LSC'22

Abstract: Developing interactive lifelog retrieval systems is a growing research area. There are many international competitions for lifelog retrieval that encourage researchers to build effective systems that can address the multimodal retrieval challenge of lifelogs. The Lifelog Search Challenge (LSC) was first organised in 2018 and is currently the only interactive benchmarking evaluation for lifelog retrieval systems. Participating systems should have an accurate search engine and a user-friendly interface that can …

Cited by 32 publications (19 citation statements). References 31 publications (35 reference statements).

“…The wearable camera image data were processed and retrieved using the E-Myscéal web-based lifelog retrieval system. The E-Myscéal system uses deep learning algorithms to create embeddings (vector representations) of various lifelog data types such as images, text, and audio, enabling intuitive and web-based cross-media querying [22]. E-Myscéal uses the CLIP model to retrieve images similar in content to descriptive textual search terms, allowing users to search through lifelog data with great flexibility [27].…”
Section: Methods (mentioning; confidence: 99%)
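
The statement above describes E-Myscéal's core mechanism: CLIP embeds both images and free-text queries into a shared vector space, and retrieval ranks images by their similarity to the query embedding. As a rough sketch only (assuming the Hugging Face transformers CLIP API and an illustrative checkpoint name; this is not E-Myscéal's actual code):

```python
# Minimal sketch of CLIP-style text-to-image retrieval.
# Checkpoint name, helper names, and file paths are illustrative.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_images(paths):
    """Encode lifelog images into L2-normalised CLIP embeddings."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)

def search(query, paths, image_feats, k=5):
    """Rank images by cosine similarity to a free-text query."""
    inputs = processor(text=[query], return_tensors="pt", padding=True)
    with torch.no_grad():
        text_feat = model.get_text_features(**inputs)
    text_feat = text_feat / text_feat.norm(dim=-1, keepdim=True)
    scores = (image_feats @ text_feat.T).squeeze(1)
    top = scores.topk(min(k, len(paths))).indices.tolist()
    return [(paths[i], scores[i].item()) for i in top]
```

Because the image embeddings are query-independent, they can be computed once offline and only the text side encoded at query time, which is what makes interactive-speed search over a large lifelog archive feasible.
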
“…With E-Myscéal, users can enter any search terms, leading to a list of related images along with date or time and location metadata [22]. For instance, when “eating” was used as a search term, E-Myscéal retrieved all food-related camera images, enabling researchers to assess the frequency of the wearer’s food consumption per day or over multiple days (see Figure 1).…”
Section: Methods (mentioning; confidence: 99%)
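
The “eating” example above pairs retrieval with the lifelog's temporal metadata. A hypothetical continuation of the previous sketch (the metadata schema here is invented for illustration and is not E-Myscéal's actual data model):

```python
# Hypothetical: count retrieved eating-related images per calendar day.
from collections import Counter
from datetime import datetime

def frequency_per_day(results, metadata):
    """results: (path, score) pairs from search(); metadata maps each
    path to a dict with an ISO-8601 "time" field (assumed schema)."""
    per_day = Counter()
    for path, _score in results:
        ts = datetime.fromisoformat(metadata[path]["time"])
        per_day[ts.date()] += 1
    return per_day

# e.g. frequency_per_day(search("eating", paths, image_feats, k=50), metadata)
```
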
“…To show the stable performance of HADA, we used it to combine 2 other different pretrained models, including BLIP [17] and CLIP [28]. While CLIP is well known for its application in many retrieval challenges [24,32,9,31], BLIP is an enhanced version of ALBEF that uses a bootstrapping technique during training. We used the same configuration as described in 4.2 to train and evaluate HADA on the Flickr30k and MSCOCO datasets.…”
Section: Ablation Study (mentioning; confidence: 99%)
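
HADA itself learns how to fuse the two pretrained models; the details are in the cited paper. Purely as a naive baseline for contrast, a score-level (late) fusion of precomputed CLIP and BLIP similarity matrices might look like the sketch below (the z-normalisation and the weight alpha are assumptions, not HADA's method):

```python
# Naive late fusion of two text-image similarity matrices,
# each of shape [num_texts, num_images]. Not HADA's learned fusion.
import torch

def standardise(sim):
    # Z-normalise so the two models' score scales are comparable.
    return (sim - sim.mean()) / sim.std()

def fuse_scores(sim_clip, sim_blip, alpha=0.5):
    """Weighted combination of CLIP and BLIP scores (alpha assumed)."""
    return alpha * standardise(sim_clip) + (1 - alpha) * standardise(sim_blip)

# ranking = fuse_scores(sim_clip, sim_blip).argsort(dim=1, descending=True)
```

A learned fusion such as HADA's can outperform this kind of fixed mixing because it can weight the models per query rather than globally.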