2019 IEEE 19th International Conference on Software Quality, Reliability and Security Companion (QRS-C)
DOI: 10.1109/qrs-c.2019.00114
Safe Traffic Sign Recognition through Data Augmentation for Autonomous Vehicles Software

Abstract: Since autonomous vehicles operate in an open context, their software components, including data-driven ones, have to reliably process inputs (e.g., obtained by cameras) in order to make safe decisions. A key challenge when providing reliable data-driven components is insufficient training data, which could lead to wrong interpretation of the environment, thereby causing accidents. Aim: The goal of our research is to extend available training data of data-driven components for safe autonomous vehicles using the…

Cited by 17 publications (3 citation statements) · References 9 publications
“…There are DL models whose datasets are augmented when there is not enough data. This can be done through techniques such as deriving new entries from those that already exist by applying transformations, including crops and rotations, among others [8,14,38]. Another way is using adversarial examples, which C. Szegedy discovered in 2013, when he noticed that several machine learning and DL models are vulnerable to inputs slightly different from those that are correctly classified, i.e.…”
Section: Adversarial Attack: Fast Gradient Sign Method (FGSM)
confidence: 99%
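The transformations mentioned in the statement above (deriving new training entries from existing samples via crops and rotations) can be sketched as follows. This is an illustrative NumPy-only example, not the pipeline used by the paper or its citers; the `augment` helper and its parameters are hypothetical:

```python
import numpy as np

def augment(image, seed=0):
    """Produce simple augmented variants of a single image (H x W array):
    three rotations plus one random crop, enlarging the dataset 4x."""
    rng = np.random.default_rng(seed)
    variants = []
    for k in (1, 2, 3):                      # 90/180/270-degree rotations
        variants.append(np.rot90(image, k))
    h, w = image.shape[:2]
    ch, cw = h // 2, w // 2                  # crop to half the original size
    top = rng.integers(0, h - ch + 1)        # random crop origin
    left = rng.integers(0, w - cw + 1)
    variants.append(image[top:top + ch, left:left + cw])
    return variants

img = np.arange(16).reshape(4, 4)            # toy 4x4 "image"
augmented = augment(img)
print(len(augmented))                        # 4 new samples from one original
```

In a real traffic-sign pipeline the labels are carried along with each variant; rotations in particular must be applied with care, since some signs are not rotation-invariant.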
“…According to the analyses and studies performed together with the existing models, the algorithms were improved and subsequently configured to integrate them into the simulated scenarios; the first simulations produced satisfactory results in accordance with the previously proposed hardware architecture. The erosion and analysis of input blocks were based on an architecture built on MobileNetV2 and YOLOv3, improved by increasing the analysis capacity of several blocks, and using Neural Intel Coral increased processing time [57,58]…”
Section: Description Pedestrian Setup and Practical Scenarios
confidence: 99%
“…There are DL models whose datasets are augmented when there is not enough data. This can be done through techniques such as deriving new entries from those that already exist by applying transformations, including crops and rotations, and synthetic data generation, among others (Tian et al., 2018; Feinman et al., 2017; Jöckel, Kläs & Martínez-Fernández, 2019). Another way is using adversarial inputs, which C. Szegedy discovered in 2013 when he noticed that several DL models are vulnerable to inputs slightly different from those that are correctly classified, i.e., the adversarial inputs (Szegedy et al., 2013)…”
Section: Introduction
confidence: 99%
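The adversarial inputs described above are commonly generated with the Fast Gradient Sign Method named in the citing papers' section titles: perturb the input one step in the direction of the sign of the loss gradient, x_adv = x + eps * sign(grad_x L(x, y)). A minimal sketch for a logistic-regression classifier, assuming NumPy only (the `fgsm_perturb` helper and its weights are hypothetical, not from any of the cited models):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps=0.5):
    """One FGSM step against a logistic-regression classifier.

    For cross-entropy loss, the gradient of the loss w.r.t. the input
    is (p - y) * w, so the attack is x + eps * sign((p - y) * w).
    """
    p = sigmoid(w @ x + b)          # model's predicted probability
    grad = (p - y) * w              # dL/dx for cross-entropy loss
    return x + eps * np.sign(grad)  # small signed step per feature

w = np.array([1.0, -2.0])
b = 0.0
x = np.array([2.0, 0.5])            # score w @ x + b = 1.0 > 0: class 1
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=0.5)
# the eps-sized perturbation flips the classifier's decision:
# score at x_adv is w @ x_adv + b = -0.5 < 0
```

Augmenting the training set with such adversarial inputs (adversarial training) is one way the cited works connect robustness to the data-augmentation theme of this paper.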