2012 10th International Workshop on Content-Based Multimedia Indexing (CBMI)
DOI: 10.1109/cbmi.2012.6269799
Detecting complex events in user-generated video using concept classifiers


Cited by 17 publications (10 citation statements)
References 14 publications
“…In this work, we still consider them as PPV. Therefore, compared with PPV, the APV has the following characteristics [7].…”
Section: Introduction (mentioning)
confidence: 99%
“…The time-series recognition methods investigated are summarized in Table 1.1 [56]. Whether they utilise temporal features, the number of event/activity types, and the number of concept attributes are all depicted in the table. Max-Pooling [58] has been demonstrated to give better performance than other fusions for most complex events. In Max-Pooling, the maximum confidence is chosen from all keyframe images (or video subclips) for each concept to generate a fixed-dimensional vector for an event or activity sample.…”
Section: Attribute-based Everyday Activity Recognition (mentioning)
confidence: 99%
“…As one of the fusion operations for concept detection results, Max-Pooling [14] has been demonstrated to give better performance compared to other fusions for most complex events. In Max-Pooling, the maximum confidence is chosen from all keyframe images (or video subclips) for each concept to generate a fixed-dimensional vector for an event or activity sample.…”
Section: Max-Pooling (MP) (mentioning)
confidence: 99%
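The citing statements above describe Max-Pooling fusion: per-keyframe (or per-subclip) concept classifier confidences are collapsed into a single fixed-dimensional video-level vector by taking, for each concept, the maximum score across keyframes. A minimal sketch of that fusion, assuming NumPy arrays of classifier confidences; the concept names and array shapes below are illustrative, not taken from the paper:

```python
import numpy as np

def max_pool_concept_scores(keyframe_scores: np.ndarray) -> np.ndarray:
    """Fuse per-keyframe concept detector outputs into one video-level vector.

    keyframe_scores: array of shape (num_keyframes, num_concepts), where each
    entry is one concept classifier's confidence on one keyframe.
    Returns a vector of shape (num_concepts,) holding, for each concept, the
    maximum confidence observed across all keyframes of the video.
    """
    return keyframe_scores.max(axis=0)

# Example: 4 keyframes scored against 3 hypothetical concepts
# ("person", "outdoor", "vehicle").
scores = np.array([
    [0.10, 0.80, 0.05],
    [0.35, 0.60, 0.02],
    [0.90, 0.40, 0.10],
    [0.20, 0.75, 0.01],
])

video_vector = max_pool_concept_scores(scores)
print(video_vector)  # [0.9 0.8 0.1] -> fixed-dimensional input to the event classifier
```

The resulting vector has the same dimensionality regardless of how many keyframes a clip contains, which is what makes it usable as a fixed-length event or activity representation.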