2013
DOI: 10.1007/s13735-013-0050-8
The Video Browser Showdown: a live evaluation of interactive video search tools

Abstract: The Video Browser Showdown evaluates the performance of exploratory video search tools on a common data set, in a common environment, and in the presence of an audience. The main goal of this competition is to enable researchers in the field of interactive video search to directly compare their tools at work. In this paper we present results from the second Video Browser Showdown (VBS2013) and describe and evaluate the tools of all participating teams in detail. The evaluation results give insights on how exploratory…

Cited by 17 publications (20 citation statements)
References 36 publications
“…These segments can be shots, scenes, events, or any other logical unit. In some cases, the segments may even be query dependent, such as in the TRECVid Multimedia Event Detection task or in the MMM Video Browser Showdown competition as described in Schoeffmann et al. (2013).…”
Section: Identifying Events (mentioning)
confidence: 99%
“…We introduced the de-facto standard evaluation protocol that is applied for scientific performance evaluation. Furthermore, we introduced popular academic evaluation campaigns, namely the Known-Item Search task promoted by TRECVid [21], the Video Browser Showdown [19], which has been organized as part of the Multimedia Modeling Conference, and the Personal Lifelog Access & Retrieval Task NTCIR-Lifelog [9], which is organized as part of the Japanese conference series on the Evaluation of Information Access Systems (NTCIR). The aim of this task is to begin the comparative evaluation of information access and retrieval systems operating over personal lifelog data.…”
Section: Objectives (mentioning)
confidence: 99%
“…Unfortunately, though, most user studies lack the large user base that would be required to confirm research hypotheses. Hence, to address this shortcoming, various methodologies have been suggested, such as user simulation [3] or the evaluation of systems in a playful scenario [4]. Although these approaches can be used for "fine-tuning" of algorithms [5] or for evaluation in a competitive environment, the artificial nature of this experimental setup casts some doubt on the degree to which these findings can be generalised.…”
Section: Introduction (mentioning)
confidence: 99%