Proceedings of the 2019 International Conference on Multimedia Retrieval
DOI: 10.1145/3323873.3326588
Interactive Video Retrieval in the Age of Deep Learning

Abstract: We present a tutorial focusing on video retrieval tasks, where state-of-the-art deep learning approaches still benefit from interactive decisions of users. The tutorial covers a general introduction to the interactive video retrieval research area, state-of-the-art video retrieval systems, evaluation campaigns, and recently observed results. Moreover, a significant part of the tutorial is dedicated to a practical exercise with three selected state-of-the-art systems in the form of an interactive video retrieval co…

Cited by 5 publications (4 citation statements)
References 13 publications (15 reference statements)
“…Each cell in the table represents the fraction #agreement/#disagreement, where #agreement and #disagreement represent the number of identical submitted shots judged as correct and wrong, respectively. For instance, in task a2, the red cell showing [70], significant disagreement appears in more tasks, i.e., tasks a2 and a3. In task a2, seven teams disagreed with the judgment on one video, while four teams disagreed on one video in task a3.…”
Section: Task Hint A1
confidence: 99%
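The per-cell fraction #agreement/#disagreement described above can be sketched as a small computation; the list-of-labels input layout is an assumption for illustration, not the format used in the cited benchmark data.

```python
from collections import Counter

def cell_fraction(judgments):
    """Count agreements and disagreements for one table cell.

    judgments: list of 'correct' / 'wrong' labels assigned to identical
    submitted shots in one task. Returns (#agreement, #disagreement),
    i.e. the numerator and denominator of the fraction in the table.
    """
    counts = Counter(judgments)
    return counts["correct"], counts["wrong"]
```

For example, a cell where two teams agreed with the assessor and one disagreed would yield the fraction 2/1.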
“…Teams receive scores for the first correct submission of each video, and a penalty is added for wrong submissions to prevent the submission of unverified shots. The score f_t of a team t is determined as in [70]:…”
Section: Competition Setup
confidence: 99%
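The scoring scheme described above (credit for the first correct submission per video, a penalty per wrong submission) can be sketched as follows. The exact formula for f_t is given in [70]; the flat penalty weight below is a hypothetical placeholder, not the value used in the competition.

```python
def team_score(submissions):
    """Illustrative first-correct-with-penalty scoring.

    submissions: list of (video_id, is_correct) tuples in submission order.
    Each video scores 1.0 for its first correct submission only; every
    wrong submission subtracts a flat penalty. PENALTY is an assumed
    illustrative weight, not the one from [70].
    """
    PENALTY = 0.5  # hypothetical weight for illustration
    scored_videos = set()
    total = 0.0
    for video_id, is_correct in submissions:
        if is_correct:
            if video_id not in scored_videos:
                scored_videos.add(video_id)
                total += 1.0
        else:
            total -= PENALTY
    return total
```

The penalty term discourages teams from flooding the judge with unverified shots, since each wrong guess lowers f_t.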
“…Those models are able to accurately predict labels with few (few-shot) or no (zero-shot) labeled examples. While these models have already been shown to outperform other approaches in interactive video retrieval (Lokoč et al, 2023), their potential in video recommendation remains largely unexplored. Future research could focus on applications in recommendation systems where historical interaction data is limited or absent, potentially improving cold-start scenarios.…”
Section: Few-shot and Zero-shot Video Recommendation
confidence: 99%
“…Based on data analysis from two benchmark editions (2021 and 2022) published in [3,4], we perform an analysis of the agreement between assessors and the participating teams. In 2021, only a discussion of queries was performed, while in 2022, the process described in this paper was introduced.…”
Section: Agreement Between Teams and Assessors
confidence: 99%