2007 IEEE Conference on Advanced Video and Signal Based Surveillance
DOI: 10.1109/avss.2007.4425357
ETISEO, performance evaluation for video surveillance systems

Citation summary: cited by 108 publications (74 citation statements), with citing publications spanning 2009–2023; references 2 publications.
“…An example is ontology as defined in the Etiseo project [12] or the result of the "Challenge Project on Video Event Taxonomy" sponsored by the Advanced Research and Development Activity (ARDA) [26]. In [27], a Video Event Representation Language (VERL) is presented which describes an event ontology, associated with Video Event Markup Language (VEML) for event instance annotation.…”
Section: Operation (mentioning)
confidence: 99%
“…Each video and annotation are cataloged together with a set of metadata, such as author, creation date and description. The ontology described in section 4 is also stored in the database by means of three tables: the "Descriptor" by ViSOR is ViPER XML [8], developed at the University of Maryland, since it satisfies several requirements: it is flexible, the list of concepts is customizable and it is widespread (e.g., it is used by Pets [9] and Etiseo [12]). Kasturi et al in their very recent work [29] on performance evaluation adopted the ViPER format as well.…”
Section: Case Study 3 - Action Recognition (mentioning)
confidence: 99%
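The excerpt above notes that ViSOR stores its video annotations in the ViPER XML format. As a rough illustration only (the element and attribute names below are simplified assumptions for this sketch, not the exact ViPER schema defined by the University of Maryland tool), a bounding-box annotation in a ViPER-like XML layout can be generated with Python's standard library:

```python
# Illustrative sketch of a ViPER-like XML annotation.
# NOTE: element/attribute names here are simplified assumptions,
# not the exact schema of the real ViPER format.
import xml.etree.ElementTree as ET

def make_annotation(source_file, objects):
    """Build a minimal annotation document.

    objects: list of (object_id, name, framespan, (x, y, w, h)) tuples,
    where framespan is a "first:last" frame-range string.
    """
    root = ET.Element("viper")
    data = ET.SubElement(root, "data")
    sf = ET.SubElement(data, "sourcefile", filename=source_file)
    for obj_id, name, framespan, (x, y, w, h) in objects:
        obj = ET.SubElement(sf, "object", id=str(obj_id),
                            name=name, framespan=framespan)
        # One bounding box per object, for brevity.
        ET.SubElement(obj, "bbox", x=str(x), y=str(y),
                      width=str(w), height=str(h))
    return ET.tostring(root, encoding="unicode")

xml_text = make_annotation("metro_hall.mpg",
                           [(0, "PERSON", "1:120", (34, 58, 22, 61))])
print(xml_text)
```

The customizability the excerpt attributes to ViPER corresponds to the fact that the set of descriptors (here, `object` names and their attributes) is defined per project rather than fixed by the format.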
“…The third video set is called ETISEO [13]. Amongst all the videos, we selected the video recorded in the central hall of a metro station for its challenging occlusions (see Fig.…”
Section: Video Sets (mentioning)
confidence: 99%
“…Instead, modelling the matching case is possible by identifying the features' expected range of variations over single targets. To this aim, we conducted a major preliminary experiment by manually annotating the target and its parts in each frame of several videos from various datasets [13,15,1]. In detail, we have manually annotated 20 video sequences over different scenarios such as a corridor, a train station, a shopping mall, and various outdoor scenes, each comprising between 1,000 and 4,000 frames [13, 15, 1] for a total of over 54,000 frames.…”
Section: The Parts Model (mentioning)
confidence: 99%