2008
DOI: 10.1007/978-0-387-76569-3_6
High-Level Feature Detection from Video in TRECVid: A 5-Year Retrospective of Achievements

Abstract: Successful and effective content-based access to digital video requires fast, accurate and scalable methods to determine the video content automatically. A variety of contemporary approaches to this rely on text taken from speech within the video, or on matching one video frame against others using low-level characteristics like colour, texture, or shapes, or on determining and matching objects appearing within the video. Possibly the most important technique, however, is one which determines the pre…

Cited by 120 publications (106 citation statements) · References 13 publications
“…This modern methodology facilitates an understanding of topic queries and low-level features by analysing the mapping in a semantic way. To build a large-scale ontology and lexicon for semantic gap filling, large efforts have been made in activities like LSCOM (Large-Scale Concept Ontology for Multimedia), Naphade et al (2006); Kennedy and Hauptmann (2006), TRECVid, Smeaton et al (2009), and MediaMill's 101 concepts, Snoek et al (2006). Smeaton et al (2009) state that acceptable results have already been achieved within the TRECVid video retrieval evaluation framework for many cases, particularly for concepts where enough annotated training data exists.…”
Section: Annotating Lifelogs – What
confidence: 99%
“…To build a large-scale ontology and lexicon for semantic gap filling, large efforts have been made in activities like LSCOM (Large-Scale Concept Ontology for Multimedia), Naphade et al (2006); Kennedy and Hauptmann (2006), TRECVid, Smeaton et al (2009), and MediaMill's 101 concepts, Snoek et al (2006). Smeaton et al (2009) state that acceptable results have already been achieved within the TRECVid video retrieval evaluation framework for many cases, particularly for concepts where enough annotated training data exists. Based on concept detection, encouraging improvements have been reported, showing the efficiency and effectiveness of concepts for higher-level retrieval, Snoek et al (2006); Neo et al (2006).…”
Section: Annotating Lifelogs – What
confidence: 99%
“…High-level feature extraction results (average precision@2000 [6]) for both SIFT-based BoW features and features resulting from the concatenation of SIFT- and LIFT-based BoW are shown in Fig. 2.…”
Section: Results
confidence: 99%
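The average precision@2000 measure quoted above averages the precision obtained at each relevant shot within the top 2000 results of a ranked list. A minimal Python sketch follows; the input formats (a ranked list of shot ids and a relevance set) are hypothetical, and normalising by min(number of relevant, k) is an assumption, since the exact trec_eval convention may differ.

```python
def average_precision_at_k(ranked_shots, relevant, k=2000):
    """Non-interpolated average precision over the top-k ranked shots.

    ranked_shots: list of shot ids, best first (hypothetical format)
    relevant:     set of shot ids judged relevant for the concept
    """
    hits = 0
    precision_sum = 0.0
    for rank, shot in enumerate(ranked_shots[:k], start=1):
        if shot in relevant:
            hits += 1
            precision_sum += hits / rank  # precision at this relevant hit
    # Normalising by min(|relevant|, k) is an assumption; evaluation
    # tools may normalise by the total number of relevant shots instead.
    denom = min(len(relevant), k)
    return precision_sum / denom if denom else 0.0

# e.g. average_precision_at_k(["s3", "s1", "s7"], {"s3", "s7"})
# -> (1/1 + 2/3) / 2 ≈ 0.833
```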
“…Large-scale video analysis for the purpose of high-level feature extraction, using local invariant features, is in most cases performed at the key-frame level [6]. Thus, the video analysis task reduces to still-image analysis.…”
Section: Related Work
confidence: 99%
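The reduction to still-image analysis described in this statement can be illustrated with a short sketch: sample key-frames from a video and compute local invariant (SIFT) descriptors on each. Fixed-interval sampling below is a simplifying assumption; TRECVid systems typically pick key-frames via shot-boundary detection. The sketch uses OpenCV's SIFT, available in recent opencv-python builds.

```python
import cv2  # opencv-python >= 4.4, where SIFT is included

def keyframe_descriptors(video_path, every_n_frames=250):
    """Sample key-frames at a fixed interval and compute SIFT descriptors
    on each, reducing video analysis to still-image analysis.

    Fixed-interval sampling is a simplifying assumption; real systems
    usually select key-frames via shot-boundary detection instead.
    """
    cap = cv2.VideoCapture(video_path)
    sift = cv2.SIFT_create()
    per_frame_descriptors = []
    frame_idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break  # end of video
        if frame_idx % every_n_frames == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            _, desc = sift.detectAndCompute(gray, None)
            if desc is not None:
                per_frame_descriptors.append(desc)  # (n_keypoints, 128)
        frame_idx += 1
    cap.release()
    return per_frame_descriptors
```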
“…This requires searching through lifelogs based on content, and for this the automatic detection of semantic concepts is needed. The conventional approach to content-based indexing, as taken in the annual TRECVid benchmarking [11,12], is to annotate a collection covering both positive and negative examples of the presence of each concept and then to train a machine learning classifier to recognize the presence of the concept. This typically requires a classifier for each concept, without considering inter-concept relationships or dependencies; yet in reality, many concept pairs will co-occur rather than occur independently.…”
Section: Introduction
confidence: 99%
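The conventional pipeline this statement describes amounts to one independent binary classifier per concept. A minimal sketch, assuming precomputed shot-level features and per-concept 0/1 labels (both hypothetical input formats); scikit-learn's LinearSVC is an assumed stand-in, as the benchmarking papers do not prescribe a specific classifier.

```python
from sklearn.svm import LinearSVC  # assumed stand-in classifier

def train_concept_detectors(features, labels_per_concept):
    """Train one independent binary classifier per concept.

    features:           (n_shots, n_dims) array of shot-level features
    labels_per_concept: dict mapping concept name -> 0/1 label array

    Each detector is trained in isolation, so inter-concept
    co-occurrence is ignored; that is the limitation the passage notes.
    """
    detectors = {}
    for concept, y in labels_per_concept.items():
        clf = LinearSVC()
        clf.fit(features, y)  # positive and negative examples per concept
        detectors[concept] = clf
    return detectors
```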