2019
DOI: 10.1109/tip.2019.2896952

VSSA-NET: Vertical Spatial Sequence Attention Network for Traffic Sign Detection

Abstract: Although traffic sign detection has been studied for years and great progress has been made with the rise of deep learning techniques, many problems still remain to be addressed. For complicated real-world traffic scenes, there are two main challenges. First, traffic signs are usually small-size objects, which makes them more difficult to detect than large ones; second, it is hard to distinguish false targets that resemble real traffic signs in complex street scenes without context information. T…

Cited by 155 publications (62 citation statements)
References 41 publications
“…However, it is hard to use image segmentation in practical applications on account of its high computational cost. Attention-based detection methods [11], [12] have been proposed to reduce false detections arising from objects that resemble real traffic signs. The Transformer with Multi-Head Attention [19] was designed for machine translation in natural language processing: the Transformer discards the recurrent unit, and Multi-Head Attention explores the relationship between input and output in parallel to increase computational speed.…”
Section: B. Motivation (mentioning)
confidence: 99%
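The Multi-Head Attention mechanism mentioned in the statement above can be summarized in a few lines. Below is a minimal, self-contained sketch of scaled dot-product multi-head attention in PyTorch; it only illustrates the general mechanism from [19], not the implementation used by VSSA-NET, and the embedding size and head count are arbitrary example values.

```python
# Minimal multi-head scaled dot-product attention (illustrative sketch only;
# NOT the VSSA-NET implementation; sizes are example values).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, embed_dim=256, num_heads=8):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        # Linear projections for queries, keys, values, and the output.
        self.q_proj = nn.Linear(embed_dim, embed_dim)
        self.k_proj = nn.Linear(embed_dim, embed_dim)
        self.v_proj = nn.Linear(embed_dim, embed_dim)
        self.out_proj = nn.Linear(embed_dim, embed_dim)

    def forward(self, query, key, value):
        # query/key/value: (batch, seq_len, embed_dim)
        b, n, d = query.shape
        def split(x):  # -> (batch, heads, seq_len, head_dim)
            return x.view(b, -1, self.num_heads, self.head_dim).transpose(1, 2)
        q, k, v = split(self.q_proj(query)), split(self.k_proj(key)), split(self.v_proj(value))
        # All heads attend in parallel: scaled dot-product attention.
        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, d)
        return self.out_proj(out)

# Example: 4 sequences of length 10 with 256-dim embeddings.
x = torch.randn(4, 10, 256)
print(MultiHeadAttention()(x, x, x).shape)  # torch.Size([4, 10, 256])
```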
“…The methods in [9], [10] are based on image segmentation to detect traffic signs in complex environments. Some attention-based detection methods [11], [12] obtain the ROI from the input image through an attention module to refine the features within a large, cluttered background. These two approaches enhance the performance of small traffic sign detection and reduce false detections.…”
Section: Introduction, A. Background (mentioning)
confidence: 99%
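As a rough illustration of the attention-based feature refinement described above ([11], [12]), the sketch below re-weights a backbone feature map with a predicted spatial saliency map so that sign-like regions are emphasized and background responses are suppressed. The module layout and channel sizes are assumptions made for the example, not the cited methods' exact design.

```python
# Hedged sketch of attention-based feature refinement: a 1x1 conv predicts a
# spatial saliency map that re-weights backbone features (illustrative only).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    def __init__(self, in_channels=256):
        super().__init__()
        # One attention score per spatial location.
        self.score = nn.Conv2d(in_channels, 1, kernel_size=1)

    def forward(self, feat):
        # feat: (batch, C, H, W) backbone feature map.
        attn = torch.sigmoid(self.score(feat))  # (batch, 1, H, W) in [0, 1]
        return feat * attn                      # suppress background responses

feat = torch.randn(2, 256, 64, 64)
print(SpatialAttention()(feat).shape)  # torch.Size([2, 256, 64, 64])
```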
“…Tian et al. [36] introduced an attention mechanism into the traffic sign detection task and improved detection accuracy by combining local context information. In [37], traffic signs were regarded as small targets in a specific mode, and traffic sign detection was treated as a region sequence classification and regression task. Using the attention mechanism, the local sequence of the region was modeled explicitly to obtain more context information and improve detection accuracy.…”
Section: Traffic Sign Detection (mentioning)
confidence: 99%
“…This algorithm can detect road regions more accurately than other traditional methods, and the use of a location prior promotes detection performance effectively. In 2019, Yuan et al. [20] presented an end-to-end deep learning method for traffic sign detection in complex environments. The algorithm not only utilizes densely connected deconvolution layers and skip connections but also proposes a vertical spatial sequence attention module to obtain more context information and achieve better detection performance.…”
Section: Introduction (mentioning)
confidence: 99%
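To make the "vertical spatial sequence" idea referenced in these statements concrete, the sketch below treats each column of a convolutional feature map as a top-to-bottom sequence and applies self-attention along that column to gather vertical context. It uses PyTorch's built-in MultiheadAttention purely for illustration; it is not the authors' implementation, and all shapes are made-up example values.

```python
# Rough illustration of column-wise ("vertical") sequence attention over a
# feature map. NOT the authors' module; shapes and sizes are example values.
import torch
import torch.nn as nn

class VerticalSequenceAttention(nn.Module):
    def __init__(self, channels=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)

    def forward(self, feat):
        # feat: (batch, C, H, W); fold width into the batch so each column
        # becomes an independent top-to-bottom sequence of length H.
        b, c, h, w = feat.shape
        cols = feat.permute(0, 3, 2, 1).reshape(b * w, h, c)  # (B*W, H, C)
        ctx, _ = self.attn(cols, cols, cols)                  # column-wise context
        ctx = ctx.reshape(b, w, h, c).permute(0, 3, 2, 1)     # back to (B, C, H, W)
        return feat + ctx                                     # residual fusion

feat = torch.randn(1, 256, 32, 32)
print(VerticalSequenceAttention()(feat).shape)  # torch.Size([1, 256, 32, 32])
```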