2008
DOI: 10.1109/icassp.2008.4517714

Performance evaluation for tracking algorithms using object labels

Abstract: This paper proposes performance measures for evaluating object tracking algorithms using object labels and sizes. The usefulness and effectiveness of the proposed evaluation measures are shown by reporting the performance evaluation of two tracking algorithms. We then compare the proposed evaluation measures with related work to demonstrate that they are more suitable.
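To make the idea concrete, here is a minimal sketch of a label-and-size based comparison between a tracker's output and ground truth for a single frame. This is not the paper's actual measure; the scoring rule, the `size_similarity` and `frame_score` names, and the use of bounding-box area as the size are all illustrative assumptions.

```python
# Illustrative sketch (NOT the paper's measure): score a tracker per frame
# by whether it preserves each ground-truth object's label and how closely
# its estimated object size matches the ground-truth size.

def size_similarity(est_size: float, gt_size: float) -> float:
    """Ratio of the smaller size to the larger; 1.0 is a perfect match."""
    if est_size <= 0 or gt_size <= 0:
        return 0.0
    return min(est_size, gt_size) / max(est_size, gt_size)

def frame_score(estimates: dict, ground_truth: dict) -> float:
    """Average per-object score over the ground-truth objects in one frame.

    `estimates` and `ground_truth` map object labels to sizes (e.g. the
    pixel area of each object's bounding box). An object the tracker
    missed, or labelled differently, contributes 0 to the score.
    """
    if not ground_truth:
        return 1.0  # nothing to track, nothing to get wrong
    total = 0.0
    for label, gt_size in ground_truth.items():
        if label in estimates:  # the tracker kept the correct label
            total += size_similarity(estimates[label], gt_size)
    return total / len(ground_truth)

# Example: the tracker keeps label "car" with a slightly wrong size,
# but loses the "person" object entirely.
score = frame_score({"car": 900.0}, {"car": 1000.0, "person": 400.0})
print(round(score, 2))  # 0.45 = (0.9 + 0.0) / 2
```

A per-sequence measure would then aggregate these frame scores, which is the general shape of label-based evaluations such as the LSBEM mentioned by citing papers below.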

Cited by 7 publications (3 citation statements)
References 3 publications
“…Most papers in their experimental evaluation use a limited number of videos. For example, only six videos are used in [24], and the often-used BoBoT dataset, tested in [25], consists of ten different video sequences.…”
Section: Data For Tracker Evaluation
confidence: 99%
See 1 more Smart Citation
“…Most papers in their experimental evaluation use a limited number of videos. For example, only six videos are being used in [24]. And, the often used BoBoT dataset, tested in [25], consists of ten different video sequences.…”
Section: Data For Tracker Evaluationmentioning
confidence: 99%
“…The Performance Evaluation of Tracking and Surveillance (PETS) workshop series was one of the first to evaluate trackers with ground truth, proposing performance measures for comparing tracking algorithms. Other performance measures for tracking are proposed by [38] and [24], as well as in [36]. In the more recent PETS series, [29], the VACE [39] and CLEAR [40] metrics were developed for evaluating the performance of multiple-target detection and tracking, while in the case of single-object tracking evaluation there is no consensus and many variations of the same measures are being proposed.…”
Section: Tracking Evaluation Measures
confidence: 99%
“…Examples include the frameworks introduced by PETS (Performance Evaluation of Tracking and Surveillance), ETISEO (Evaluation du Traitement et de l'Interpretation de Sequences vidEO), CAVIAR (Context Aware Vision using Image-based Active Recognition), and CLEAR (Classification of Events, Activities and Relationships). Other smaller-scale evaluation frameworks include comprehensive proposals such as the one in [1], and simple approaches such as the one based on "pseudo-synthetic video" sequences [2], on frame-based and object-based metrics [3], on the Label and Size Based Evaluation Measure (LSBEM) [4], or on measuring the tracking difficulty using a reflective model [5]. None of these frameworks has yet been widely taken up by the research community. This work was supported in part by the EU, under the FP7 project APIDIS (ICT-216023).…”
Section: Introduction
confidence: 99%