The 2013 International Joint Conference on Neural Networks (IJCNN)
DOI: 10.1109/ijcnn.2013.6706809

Traffic sign detection with VG-RAM weightless neural networks

Abstract: We present a biologically inspired approach to traffic sign detection based on Virtual Generalizing Random Access Memory Weightless Neural Networks (VG-RAM WNN). VG-RAM WNN are effective machine learning tools that offer simple implementation and fast training and test. Our VG-RAM WNN architecture models the saccadic eye movement system and the transformations suffered by the images captured by the eyes from the retina to the superior colliculus in the mammalian brain. We evaluated the performance of our VG-RA…
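The abstract does not spell out how a VG-RAM weightless neuron operates, but the general mechanism is well documented: training stores binary input patterns together with their labels, and testing answers with the label of the stored pattern nearest in Hamming distance. The sketch below illustrates that lookup; class, method, and function names are illustrative and not taken from the paper's code.

```python
from collections import Counter

class VGRAMNeuron:
    """Minimal sketch of a VG-RAM weightless neuron: a memory of
    (binary input, label) pairs queried by Hamming distance."""

    def __init__(self):
        self.memory = []  # list of (input_bits, label) pairs

    def train(self, bits, label):
        # Training is just storage; no weights are adjusted.
        self.memory.append((tuple(bits), label))

    def predict(self, bits):
        # Return the label of the stored pattern with the smallest
        # Hamming distance to the query.
        return min(self.memory,
                   key=lambda m: sum(a != b for a, b in zip(m[0], bits)))[1]

def vote(neurons, bits_per_neuron):
    """A layer's output: majority vote over its neurons' answers."""
    labels = [n.predict(b) for n, b in zip(neurons, bits_per_neuron)]
    return Counter(labels).most_common(1)[0][0]
```

In a detection setting, each neuron would see a different binary sampling of the input image, and the layer's majority vote would decide the output.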

Cited by 7 publications (5 citation statements) · References 18 publications (23 reference statements)
“…The saccadic system has inspired similar mechanisms in computer vision. For example, De Souza et al [7] implement a variant of the retinal log-polar transform for a neural network to detect traffic signs using a small dataset. Note that there are also computational models that implement aspects of natural visual attention accurately [8,9,19,21].…”
Section: Attention Mechanisms
confidence: 99%
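The retinal log-polar transform mentioned in this statement maps image coordinates around a fixation point to (log-radius, angle), so that resolution falls off with eccentricity, roughly as it does in the retina. A minimal sketch of the coordinate mapping follows; the function and parameter names are illustrative, not from the cited implementation.

```python
import math

def log_polar_coords(x, y, cx, cy):
    """Map image point (x, y) to (log-radius, angle) about a fixation
    point (cx, cy): a coarse analogue of the retina-to-cortex mapping."""
    dx, dy = x - cx, y - cy
    r = math.hypot(dx, dy)
    # log-radius is undefined at the fixation point itself
    rho = math.log(r) if r > 0 else float("-inf")
    return rho, math.atan2(dy, dx)
```

Sampling an image on a regular grid in (rho, theta) space yields dense pixels near the fixation point and sparse pixels in the periphery, which is what makes a saccade-style search over fixation points useful.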
“…Furthermore, some methods worked only with recognition and not with detection, perhaps because of the lack of data. Only after large databases were made available (such as the well-known German Traffic Sign Recognition (GTSRB) (Stallkamp et al., 2012) and Detection (GTSDB) (Houben et al., 2013) Benchmarks, with 51,839 and 900 frames, respectively) could learning-based approaches (Houben et al., 2013; Mathias et al., 2013) finally show their power, although some of them were able to cope with fewer examples (De Souza et al., 2013a). With the release of even larger databases (such as STSD (Larsson & Felsberg, 2011) with over 20,000 frames, LISA (Jensen et al., 2016b) with 6,610 frames, BTS (Mathias et al., 2013) with 25,634 frames for detection and 7,125 frames for classification, and Tsinghua-Tencent 100K (Zhu et al., 2016) with 100,000 frames), learning-based approaches improved and achieved far better results when compared to their model-based counterparts.…”
Section: Traffic Sign Detection and Recognition
confidence: 99%
“…The Traffic Sign Detector module (De Souza et al., 2013b; Torres et al., 2019) detects and recognizes (Berger et al., 2013) traffic signs along the path from images captured by the front camera (see video at https://youtu.be/SZ9w1XBWJqE). It is part of the TSD subsystem.…”
Section: Architecture of IARA's Software
confidence: 99%
“…Once an entire cluster is projected into an image, the bounding box of the projected points is computed, and the original image is cropped with a margin of 25% of the bounding box size, to deal with calibration errors and add some background to the detection (Figure 5d). This way, several images of the same sign can be obtained, and the recognition process can be carried out using RGB images with the following advantages over approaches that only use images (Soheilian et al., 2013; De Souza et al., 2013): 1) The detection problem is almost solved, as the location of the traffic signs is known beforehand. 2) Images of the same traffic sign can be stored and analysed together, hence a single classification result is expected for each set of input images (Figure 6).…”
Section: Traffic Sign Projection Onto 2D Images
confidence: 99%
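The cropping step described in this statement, taking the bounding box of the projected points and expanding it by 25% of the box size on each side, can be sketched as follows; the function name and signature are illustrative, not from the cited code.

```python
def crop_box_with_margin(points, margin=0.25):
    """Axis-aligned bounding box of 2-D points, expanded on every side
    by `margin` times the box size, to absorb calibration error and
    keep some background around the sign."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    x0, x1 = min(xs), max(xs)
    y0, y1 = min(ys), max(ys)
    mx, my = margin * (x1 - x0), margin * (y1 - y0)
    return (x0 - mx, y0 - my, x1 + mx, y1 + my)
```

The returned box would then be clipped to the image bounds before cropping, a detail omitted here for brevity.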