2013
DOI: 10.1007/978-3-642-40602-7_35
Framework for Generation of Synthetic Ground Truth Data for Driver Assistance Applications

Abstract: High precision ground truth data is a very important factor for the development and evaluation of computer vision algorithms, especially for advanced driver assistance systems. Unfortunately, some types of data, such as accurate optical flow and depth maps as well as pixel-wise semantic annotations, are very difficult to obtain. To address this problem, in this paper we present a new framework for the generation of high quality synthetic camera images, depth and optical flow maps, and pixel-wise se…
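The key advantage described in the abstract is that a synthetic renderer knows scene geometry exactly, so per-pixel ground truth falls out of the camera model rather than from manual annotation. As a minimal illustration (not the paper's framework; all names and parameters here are hypothetical), the sketch below computes an exact depth map for a flat road seen by a pinhole camera: a pixel in image row v below the horizon sees the road plane at depth z = f * h / (v - v_horizon).

```python
# Minimal sketch: exact per-pixel ground-truth depth for a flat ground
# plane under a pinhole camera. This is the kind of annotation a
# synthetic rendering framework can export directly; the function and
# its parameters are illustrative, not from the paper.

def road_depth_map(width, height, f, cam_height, horizon_v):
    """Return a height x width grid of depths along the optical axis.

    f          -- focal length in pixels
    cam_height -- camera height above the road plane (metres)
    horizon_v  -- image row of the horizon (principal point row for a
                  camera looking parallel to the road)

    Rows at or above the horizon see no road and are filled with None.
    """
    depth = []
    for v in range(height):
        if v <= horizon_v:
            depth.append([None] * width)          # sky / above horizon
        else:
            z = f * cam_height / (v - horizon_v)  # exact, no annotation noise
            depth.append([z] * width)
    return depth

# A camera 1.5 m above the road with f = 100 px and horizon at row 2:
depth = road_depth_map(width=8, height=6, f=100.0, cam_height=1.5, horizon_v=2)
```

Because the geometry is known analytically, the resulting map is exact to floating-point precision; real game-engine frameworks obtain the same effect by reading the renderer's depth buffer.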

Cited by 52 publications (33 citation statements)
References 27 publications (45 reference statements)
“…The UnrealStereo data set [ZQC*18], on the other hand, is designed for disparity estimation using non‐procedural and physically based modelled game scenes implemented and rendered in Unreal Engine 4 [UE4]. The majority of synthetic data generation frameworks for depth estimation rely on game/simulator engines for urban and traffic scenes [HUI13, ASS16, GWCV16] (Figure 10b), while Varol et al. [VRM*17] provide depth maps for human parts depth estimation.…”
Section: Image Synthesis Methods Overview
confidence: 99%
“…Image synthesis for this task has been one of the most active research areas over the past decade. Driving simulators and computer games with urban and traffic scenes revolutionized the way training data were generated by collecting images from already existing virtual worlds [HUI13, ASS16]. Numerous data generation approaches build upon extracting images and video sequences from the GTA‐V [GTA] commercial computer game, utilizing dedicated middleware game mods, with the main issue being the ground truth annotation process.…”
Section: Image Synthesis Methods Overview
confidence: 99%
“…From there, the use of synthetic visual data generated from virtual environments has kept growing. We found works using synthetic data for object detection/recognition [66][67][68][69], object viewpoint recognition [70], re-identification [71], and human pose estimation [72]; building synthetic cities for autonomous driving tasks such as semantic segmentation [44,73], place recognition [74], object tracking [45,75], object detection [76,77], stixel computation [78], and benchmarking different on-board computer vision tasks [47]; building indoor scenes for semantic segmentation [79], as well as normal and depth estimation [80]; generating ground truth for optical flow, scene flow, and disparity [81,82]; generating augmented reality images to support object detection [83]; simulating adverse atmospheric conditions such as rain or fog [84,85]; and even performing procedural generation of videos for human action recognition [86,87]. Moreover, since robotics and autonomous driving rely on sensorimotor models that must be trained and tested dynamically, in recent years the use of simulators has intensified beyond datasets [48,49,88,89].…”
Section: Related Work
confidence: 99%
“…Furthermore, the manual labeling of acquired in-vivo or in-vitro data may be prohibitively expensive. Labeled in-silico images for simulated critical traffic scenes can be automatically generated with appropriate frameworks [49] and driving simulators, and several large collections of synthetic ground truth data for benchmarking scene analysis solutions in the context of autonomous driving are available [50]. Deep learning based semantic segmentation with a domain-adapted VGG16 network over mixtures of labeled in-silico and unlabeled data can perform considerably better than purely using in-vivo data [51].…”
Section: Use Case 'Use of Synthetic Data for Simulated Autonomous Driving'
confidence: 99%