2009 IEEE Intelligent Vehicles Symposium
DOI: 10.1109/ivs.2009.5164251
Alerting the drivers about road signs with poor visual saliency

Abstract: This paper proposes an improvement of Advanced Driver Assistance Systems based on saliency estimation of road signs. After a road-sign detection stage, the sign's saliency is estimated using SVM learning. A model of visual saliency linking the size of an object and a size-independent saliency is proposed. An eye-tracking experiment in a context close to driving shows that this computational evaluation of saliency fits well with human perception, and demonstrates the applicability of the proposed estimator…
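The abstract's model factors a sign's measured saliency into a size term and a size-independent term. A minimal sketch of that decomposition, assuming (purely for illustration, not from the paper) that the size term grows with the square root of the sign's pixel area and that an alert is raised when the size-independent part falls below a threshold:

```python
import math

def size_factor(area_px):
    # Hypothetical size term: assumed proportional to the square root
    # of the sign's pixel area. Illustrative choice only; the paper's
    # actual size model is not reproduced here.
    return math.sqrt(area_px)

def size_independent_saliency(measured_saliency, area_px):
    # Factor measured saliency into size term x intrinsic term and
    # return the size-independent (intrinsic) part.
    return measured_saliency / size_factor(area_px)

def alert_needed(measured_saliency, area_px, threshold=0.5):
    # Alert the driver when the sign's intrinsic saliency is low,
    # i.e. the sign is poorly visible independently of its size.
    return size_independent_saliency(measured_saliency, area_px) < threshold

# Example: a large but visually bland sign (intrinsic saliency 20/100 = 0.2)
print(alert_needed(measured_saliency=20.0, area_px=10000))  # True
```

In the paper itself the saliency value comes from an SVM trained on detected signs rather than a closed-form factor, so the functions above stand in only for the structure of the size/intrinsic decomposition.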

Cited by 28 publications (22 citation statements); references 21 publications. Citing publications span 2011–2023.
“…The extracted information however is not directly usable for many applications as there are concerns that simply passing present signs to the driver merely leads to an increased cognitive load [15]. Some researchers try to tackle this problem directly using image based approaches like saliency estimation [11]. We argue that additional knowledge about the driver's attention, his goals and the traffic scene context (other relevant objects and relations to them) is needed to solve the issue satisfactorily.…”
Section: Related Work
confidence: 96%
“…Reference [10] is another large driver attention dataset, but only six coarse gaze regions were annotated and the exterior scene was not recorded. References [24] and [27] contain accurate driver attention maps made by averaging eye movements collected from human observers in-lab with simulated driving tasks. But the stimuli were static driving scene images and the sizes of their datasets are small (40 frames and 120 frames, respectively).…”
Section: Driver Attention Datasets
confidence: 99%
“…b) Appearance and size of a traffic sign: The appearance of a traffic sign affects its visibility, so considering its intensity and color would be effective for visibility estimation [2], [3]. In addition, a licensed driver has learned and memorized the representative appearances of traffic signs as templates, although there are several kinds of colors and shapes of traffic signs.…”
Section: Local Features
confidence: 99%
“…({kdoman,y-mekada}@sist.chukyo-u.ac.jp) 2 Graduate School of Information Science, Nagoya University, Japan ({ide,murase}@is.nagoya-u.ac.jp) 3 Information and Communications Headquarters, Nagoya University, Japan (ddeguchi@nagoya-u.jp) 4 Faculty of Economics and Information, Gifu Shotoku Gakuen University, Japan (ttakahashi@gifu.shotoku.ac.jp) 5 DENSO CORPORATION, Japan a traffic scene. As shown in Fig.…”
Section: Introduction
confidence: 99%