2020
DOI: 10.1109/access.2020.2965089

Neural Network Generalization: The Impact of Camera Parameters

Abstract: We quantify the generalization of a convolutional neural network (CNN) trained to identify cars. First, we perform a series of experiments to train the network using one image dataset - either synthetic or from a camera - and then test on a different image dataset. We show that generalization between images obtained with different cameras is roughly the same as generalization between images from a camera and ray-traced multispectral synthetic images. Second, we use ISETAuto, a soft prototyping tool that creates …
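
The cross-dataset protocol described in the abstract reduces to a simple loop: fit a detector on one source dataset (synthetic or camera images), evaluate it on a different target dataset, and compare the score against the within-dataset baseline. The sketch below only illustrates that protocol under assumed placeholders; the generic PyTorch classifier, loader names, and accuracy metric are not the authors' actual pipeline.

```python
# Illustrative sketch of the train-on-A / test-on-B generalization protocol
# described in the abstract. The model, data loaders, and accuracy metric are
# placeholders, not the pipeline used in the paper.
import torch
from torch import nn
from torch.utils.data import DataLoader


def train(model: nn.Module, loader: DataLoader, epochs: int = 10) -> nn.Module:
    """Fit a car/no-car classifier on the source dataset."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = nn.BCEWithLogitsLoss()
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss_fn(model(images), labels).backward()
            optimizer.step()
    return model


@torch.no_grad()
def evaluate(model: nn.Module, loader: DataLoader) -> float:
    """Score the trained model on a (possibly different) target dataset."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        preds = (torch.sigmoid(model(images)) > 0.5).float()
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total


# Generalization is then the drop relative to the within-dataset baseline, e.g.:
#   gap = evaluate(train(model, camera_train_loader), camera_test_loader) \
#       - evaluate(train(model, synthetic_train_loader), camera_test_loader)
```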

Cited by 38 publications (27 citation statements)
References 35 publications

“…Each scene is unique and the collection is designed to maximize the diversity of the images of daytime driving scenes. In previous work, we quantified how well training on the ISETAuto dataset generalizes to real-world datasets, including KITTI, CityScape, Baidu-Apollo, and Berkeley Deep Drive [25]. In this paper we add new comparisons with a Waymo dataset.…”
Section: Methods
confidence: 99%
“…First, we provide an open-source, freely available image systems simulation toolbox that models camera images and LiDAR images in relatively complex 3D automotive scenes. We use the image system simulation to sweep out a much larger range of system designs [26] and create datasets that generalize better than the widely-used KITTI data sets [25].…”
Section: Our Contributions Are
confidence: 99%
“…The apparent necessity of developing software simulations of the camera sensor has made camera design an active computer vision research area that utilizes data generation methods [BJS17]. This task has been lately popular within the autonomous driving community, where the image synthesis methods are well established, and recent studies show the impact of camera effects in the learning pipeline [CSVJR18, LLFW20]. The introduced data generation techniques rely both on non‐procedural and procedural physically based modelling and employ offline physically based rendering, which leverages the modern cloud‐scale job scheduling possibilities to improve the rendering times [BFL*18, LSZ*19, LLFW19].…”
Section: Image Synthesis Methods Overview
confidence: 99%
“…It should be noted that the impact of illumination on the effectiveness of the sensor system is obviously less than that on the conventional manned ship. However, based on the research of the previous literature on the impact of daylight on the sensor performance of the camera [32] and lidar [33], the illumination condition is still an important factor.…”
Section: Bayes Network Of Sensor Effectiveness
confidence: 99%