Proceedings of the Third Annual Workshop on Lifelog Search Challenge 2020
DOI: 10.1145/3379172.3391717
LifeGraph: A Knowledge Graph for Lifelogs

Abstract: The data produced by efforts such as lifelogging is commonly multimodal and can have manifold interrelations with itself as well as with external information. Representing this data in such a way that these rich relations as well as all the different sources can be leveraged is a non-trivial undertaking. In this paper, we present the first iteration of LifeGraph, a knowledge graph for lifelogging data. LifeGraph aims at not only capturing all aspects of the data contained in a lifelog but also linking them to ex…

Cited by 16 publications (13 citation statements)
References 15 publications
“…Several systems have tried to address the semantic gap between query and images and the poor contextual understanding of the data. FIRST [25] uses an autoencoder-like approach to map query text and images into a common semantic space to measure the similarity between them; LifeGraph [23] used a knowledge graph to represent the lifelog data, capturing the internal relations of the various data modalities and linking it to external static data sources for better semantic understanding. Chu et al. [6] extracted relation graphs from lifelog images to better describe the relationships between entities (subject-object) present within the image.…”
Section: Related Work
Mentioning, confidence: 99%
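
As a concrete illustration of the shared embedding-space idea mentioned in the statement above, the following short sketch assumes that text and image encoders have already produced vectors in one common semantic space; retrieval then reduces to a cosine-similarity ranking. The variable names, dimensionality, and random placeholder embeddings are assumptions for illustration only, not details taken from FIRST or LifeGraph.

    # Hypothetical sketch of shared embedding-space retrieval (not the
    # FIRST system's actual code): a query vector and image vectors are
    # assumed to already live in a common semantic space, and ranking
    # reduces to a cosine-similarity comparison between them.
    import numpy as np

    def rank_by_cosine(query_vec: np.ndarray, image_vecs: np.ndarray) -> np.ndarray:
        """Return image indices ordered by cosine similarity to the query."""
        q = query_vec / np.linalg.norm(query_vec)
        imgs = image_vecs / np.linalg.norm(image_vecs, axis=1, keepdims=True)
        scores = imgs @ q
        return np.argsort(scores)[::-1]

    # Placeholder pre-computed embeddings; the dimensionality is arbitrary.
    query_embedding = np.random.rand(128)
    image_embeddings = np.random.rand(1000, 128)
    top_10 = rank_by_cosine(query_embedding, image_embeddings)[:10]
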
“…Embedding techniques are also commonly based on the idea of encoding concepts from both queries and image tags into the same vector space to calculate the similarity between them [19,20]. Regarding the use of graphs, LifeGraph [25] applied a knowledge graph structure with nodes representing objects or scenes detected in images. These entities can be linked with the corresponding images and with external sources to expand the information with frequent activities and relevant objects.…”
Section: Related Work
Mentioning, confidence: 99%
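
To make the graph-indexing idea described above concrete, the following minimal sketch builds a tiny lifelog knowledge graph with networkx: an image node is linked to detected concept nodes, and a concept is linked to a placeholder external resource, so that images can be retrieved by traversing relations rather than by matching tags directly. This is an illustration under assumed node and relation names, not the actual LifeGraph implementation or its schema.

    # Hypothetical sketch of indexing lifelog data as a knowledge graph,
    # in the spirit described above; identifiers and relations are invented.
    import networkx as nx

    G = nx.MultiDiGraph()

    # An image node with its capture metadata.
    G.add_node("image_0421", type="image", timestamp="2020-03-14T08:15:00")

    # Concept nodes for detected objects and scenes.
    G.add_node("coffee_cup", type="object")
    G.add_node("kitchen", type="scene")

    # Internal relations: what the image depicts and where it was taken.
    G.add_edge("image_0421", "coffee_cup", relation="depicts")
    G.add_edge("image_0421", "kitchen", relation="taken_in")

    # Link a detected concept to an external source for richer semantics
    # (the identifier below is a placeholder, not a real external entry).
    G.add_node("external:cup_entry", type="external", label="cup")
    G.add_edge("coffee_cup", "external:cup_entry", relation="same_as")

    # Example query: all images that depict a coffee cup.
    hits = [src for src, _, data in G.in_edges("coffee_cup", data=True)
            if data["relation"] == "depicts"]
    print(hits)  # ['image_0421']

In such a structure, query expansion amounts to following edges from matched concepts to related concepts or external entries before collecting the linked images.
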
“…This idea was adopted by almost half of the participants in LSC'20 [17,20,26,27,39]. On the other hand, SOMHunter [27], the runner-up system of the competition, and Exquisitor [17] relied heavily on the user's relevance feedback, while LifeGraph [33] explored the potential of indexing the lifelog into a graph structure [30].…”
Section: Related Work
Mentioning, confidence: 99%