2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW)
DOI: 10.1109/cvprw.2018.00022

Single-Camera and Inter-Camera Vehicle Tracking and 3D Speed Estimation Based on Fusion of Visual and Semantic Features

Cited by 132 publications (87 citation statements)
References 14 publications
“…Another state-of-the-art vehicle ReID method [43] is the winner of the vehicle ReID track in the AI City Challenge Workshop at CVPR 2018 [31], which is based on fusing visual and semantic features (FVS). This method extracts 1,024-dimension CNN features from a GoogLeNet [39] pre-trained on the CompCars benchmark [53].…”
Section: Image-based ReID (mentioning)
confidence: 99%
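The quoted method ranks gallery images by comparing 1,024-dimensional CNN feature vectors. A minimal sketch of that retrieval step, assuming the features have already been extracted (the function names and the toy 4-D vectors standing in for GoogLeNet features are illustrative, and the paper's full FVS approach additionally fuses semantic features):

```python
import math

def cosine_similarity(a, b):
    # dot(a, b) / (|a| * |b|)
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def rank_gallery(query_feat, gallery_feats):
    """Return gallery indices sorted by descending similarity to the query."""
    scores = [cosine_similarity(query_feat, g) for g in gallery_feats]
    return sorted(range(len(gallery_feats)), key=lambda i: -scores[i])

# Toy 4-D stand-ins for the 1,024-D CNN features.
query = [1.0, 0.0, 0.0, 0.0]
gallery = [[0.0, 1.0, 0.0, 0.0],   # different vehicle
           [0.9, 0.1, 0.0, 0.0]]   # near-duplicate of the query
print(rank_gallery(query, gallery))  # → [1, 0]: nearest gallery image first
```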
“…Tab 9 shows the results of various methods for spatio-temporal association, MTSC tracking, and image-based ReID on CityFlow. Note that PROVID [29] compares visual features first, then uses spatio-temporal information for re-ranking; whereas 2WGMMF [20] and FVS [43] first model the spatio-temporal transition based on online learning or manual measurements, and then perform image-based ReID only on the confident pairs. Note also that, since only trajectories spanning multiple cameras are included in the evaluation, different from MTSC tracking, false positives are considered in the calculation of MTMC tracking accuracy.…”
Section: MTMC Tracking (mentioning)
confidence: 99%
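The "model the spatio-temporal transition, then ReID only the confident pairs" strategy described above can be sketched as follows. This is a deliberately simplified single-Gaussian model of inter-camera travel time (2WGMMF uses Gaussian mixtures, and all names and numbers here are hypothetical):

```python
import math

def fit_travel_time(samples):
    """Estimate mean/std of camera A -> camera B travel times (seconds)
    from training trajectories."""
    mu = sum(samples) / len(samples)
    var = sum((t - mu) ** 2 for t in samples) / len(samples)
    return mu, math.sqrt(var)

def is_confident_pair(t_exit_a, t_enter_b, mu, sigma, k=3.0):
    """Gate a candidate pair: keep it for image-based ReID only if its
    observed transition time lies within k standard deviations of the mean."""
    return abs((t_enter_b - t_exit_a) - mu) <= k * sigma

mu, sigma = fit_travel_time([28.0, 30.0, 32.0, 31.0, 29.0])
print(is_confident_pair(100.0, 131.0, mu, sigma))  # → True: plausible transition
print(is_confident_pair(100.0, 300.0, mu, sigma))  # → False: implausible, skip ReID
```

Gating in this way prunes most cross-camera candidate pairs cheaply before the expensive appearance comparison runs.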
“…The tracking method of [69] has originally been applied for human tracking [61], which was shown to suffer from severe identity-switching issues. There are also traditional approaches based on computer vision techniques that are still popular [22,72,73]. In such approaches, discriminative methods using hand-crafted features, such as scale-invariant feature transforms (SIFT), speeded up robust features (SURF), region-based features or edge-based features, are applied for re-identifying vehicles [74][75][76][77][78].…”
Section: Related Work (mentioning)
confidence: 99%
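Re-identification with hand-crafted features like SIFT or SURF typically matches local descriptors between two vehicle images and accepts only unambiguous matches. A minimal sketch using Lowe's ratio test, with toy 2-D vectors standing in for 128-D SIFT descriptors (the function names are illustrative, not from any of the cited papers):

```python
def euclidean(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def ratio_test_matches(desc_a, desc_b, ratio=0.75):
    """Match descriptors from image A to image B, keeping a match only when
    the nearest neighbour is clearly better than the second nearest."""
    matches = []
    for i, da in enumerate(desc_a):
        dists = sorted((euclidean(da, db), j) for j, db in enumerate(desc_b))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((i, dists[0][1]))
    return matches

# Toy 2-D descriptors standing in for 128-D SIFT vectors.
a = [[0.0, 0.0], [5.0, 5.0]]
b = [[0.1, 0.0], [9.0, 9.0], [5.0, 5.1]]
print(ratio_test_matches(a, b))  # → [(0, 0), (1, 2)]
```

Two images would then be declared the same vehicle when the number of surviving matches exceeds a threshold.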
“…Deep learning vehicle detection can be split into two different model strategies: 1) a single shot object detector (SSD, YOLO, YOLOv2, and YOLOv3) and 2) a region-based object detector (R-CNN, Fast R-CNN, and Faster R-CNN). Recent papers such as Tang et al [1] and Sang et al [2] demonstrate the success that YOLOv2 has had on object detection in the 2018 AI City Challenge. In this paper, a PyTorch version of Redmon's [3] YOLOv3 model is applied to vehicle images from the Nexar Challenge 2 dataset, NEXET [4].…”
Section: Introduction (mentioning)
confidence: 99%
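Single-shot detectors like the YOLO variants mentioned above emit many overlapping candidate boxes per vehicle, which are pruned by greedy non-maximum suppression before tracking. A self-contained sketch of that standard post-processing step (toy boxes and thresholds, not the actual challenge configuration):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression: keep the highest-scoring boxes,
    discard any box overlapping an already-kept box above the threshold."""
    order = sorted(range(len(boxes)), key=lambda i: -scores[i])
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # → [0, 2]: the overlapping duplicate of box 0 is suppressed
```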