2015
DOI: 10.48550/arxiv.1511.04136
Preprint

UA-DETRAC: A New Benchmark and Protocol for Multi-Object Detection and Tracking

Cited by 30 publications (43 citation statements)
References 70 publications

“…Multi-object tracking (MOT) is a critical task for autonomous driving, as it needs to perform object detection as well as object tracking in a video. A large array of datasets has been created focusing on driving scenarios, for example, KITTI tracking [10], MOTChallenge [31], UA-DETRAC [42], PathTrack [30], and PoseTrack [1]. None of these datasets provide segmentation masks for the annotated objects and thus do not depict pixel-level representations and complex interactions like MOTS (Figure 2).…”
Section: Related Work
confidence: 99%
“…The UA-DETRAC Benchmark Suite contains 10 hours of video of traffic sequences, divided into 60 training and 40 testing videos. The training and test data contain an average of 7.1 and 12.0 objects per frame, respectively [48]. LBT-extended trackers are evaluated according to the PR-metrics defined in [48], which evaluate tracking performance at varying detection confidence levels. Each tracker is evaluated on the training dataset with a varying number of frames between detections, d; tracking is performed at d = 0, 1, 3, 7, 15, and 31 frames.…”
Section: UA-DETRAC
confidence: 99%
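The PR-metric protocol referenced in the quoted passage (thresholding detections by confidence, re-running the tracker, and integrating a CLEAR-MOT score over the resulting precision-recall curve) can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the official UA-DETRAC toolkit: the names `pr_mota`, `run_tracker`, and `compute_clear_mot`, the threshold grid, and the integration of MOTA over recall alone are all assumptions made for the example.

```python
import numpy as np


def pr_mota(detections, ground_truth, run_tracker, compute_clear_mot,
            thresholds=np.linspace(0.0, 1.0, 10)):
    """Sketch of a PR-metric style sweep (assumed interfaces, not the official toolkit).

    detections: iterable of (frame_id, box, confidence) tuples.
    run_tracker(detections) -> tracks: the tracker under test (placeholder).
    compute_clear_mot(tracks, ground_truth) -> (precision, recall, mota):
        a CLEAR-MOT evaluator supplied by the caller (placeholder).
    """
    curve = []
    for t in thresholds:
        # Keep only detections at or above the current confidence threshold.
        kept = [d for d in detections if d[2] >= t]
        # Re-run the tracker on the thresholded detections.
        tracks = run_tracker(kept)
        # Score the resulting tracks against the ground truth.
        precision, recall, mota = compute_clear_mot(tracks, ground_truth)
        curve.append((t, precision, recall, mota))
    # Approximate the PR integral by integrating MOTA over recall; the
    # benchmark integrates along the full precision-recall curve, so this
    # one-dimensional version is a deliberate simplification.
    curve.sort(key=lambda p: p[2])  # order operating points by recall
    recalls = np.array([p[2] for p in curve])
    motas = np.array([p[3] for p in curve])
    return float(np.trapz(motas, recalls)), curve
```

The frame gaps d = 0, 1, 3, 7, 15, and 31 in the quoted passage appear to refer to how often the detector is run in the LBT experiments, which is orthogonal to the confidence sweep shown here.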
“…Existing vision-based multiple object tracking (MOT) datasets and benchmarks, such as KITTI [30], MOTChallenge [22], UA-DETRAC [107], and NuScenes [16], have been instrumental for advancing and monitoring the progress of MOT methods in well-controlled settings. Here, the central task is the detection and tracking of multiple objects from a predefined closed set of classes, such as cars and pedestrians.…”
Section: Introduction
confidence: 99%