2017
DOI: 10.1007/s11042-017-5472-5
Real-time pedestrian crossing lights detection algorithm for the visually impaired

Cited by 34 publications (26 citation statements). References 20 publications.
“…Without even re-training our network, we tested our network on the China portion of the PTLR Dataset, which uses input images of size 1280 × 720 [7]. The only processing we did was to change all network predictions of class Table 7.…”
Section: Comparison With Other Methods
Confidence: 99%
“…"countdown green" and "countdown blank" to be a prediction of "none", because the images in the China section of the PTLR Dataset were only from three different classes: red, green, and none. After testing our network on the dataset, we compared our results to the results from [21] and [7]. As shown in Table 7, our network outperforms both methods in F1 Score.…”
Section: Comparison With Other Methods
Confidence: 99%
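The evaluation procedure quoted above, collapsing the countdown classes to "none" before scoring against the three-class China portion of the PTLR Dataset, can be sketched as follows. The label strings and the per-class F1 computation here are illustrative, not the citing authors' actual code.

```python
def remap(prediction):
    """Collapse countdown classes to 'none'; pass other classes through."""
    if prediction in ("countdown green", "countdown blank"):
        return "none"
    return prediction

def f1_score(y_true, y_pred, positive):
    """Binary F1 for a single class, treating `positive` as the target label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Predictions from a five-class network, remapped to the PTLR label set.
preds = ["red", "countdown green", "green", "countdown blank"]
mapped = [remap(p) for p in preds]  # ["red", "none", "green", "none"]
```

Remapping at evaluation time, rather than retraining, is what allows the network to be compared against [21] and [7] on their own label set.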
“…In (B), the model correctly predicted the class despite the symbol being underexposed by the camera. To prove the effectiveness of LYTNet, we retrained it using only red, green, and none class pictures from our own dataset and tested it on the PTLR dataset [5]. Due to the small size of the PTLR training dataset, we were unable to perform further training or fine-tuning using the dataset without significant overfitting.…”
Section: Methods
Confidence: 99%
“…Histogram of Oriented Gradients (HOG), as outlined in the study by Dalal and Triggs [1], is a feature descriptor that is commonly used for object detection. Its applications include: people detection in images and videos [1], pedestrian detection [2], palmprint recognition [3], sketch based image retrieval [4], scene text recognition [5], traffic sign detection [6], traffic light detection [7], and vehicle detection [8].…”
Section: Introduction
Confidence: 99%
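The core idea behind the HOG descriptor cited above can be illustrated with a minimal sketch: gradient orientations, weighted by gradient magnitude, are accumulated into an orientation histogram. Full HOG as described by Dalal and Triggs adds cells, overlapping blocks, and block normalization; this sketch covers only the histogram step and is not the cited implementation.

```python
import math

def orientation_histogram(image, bins=9):
    """Accumulate unsigned gradient orientations (0-180 degrees) into `bins`
    bins, weighted by gradient magnitude. `image` is a 2D list of floats."""
    h, w = len(image), len(image[0])
    hist = [0.0] * bins
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = image[y][x + 1] - image[y][x - 1]  # central differences
            gy = image[y + 1][x] - image[y - 1][x]
            mag = math.hypot(gx, gy)
            angle = math.degrees(math.atan2(gy, gx)) % 180.0  # unsigned
            b = int(angle / (180.0 / bins)) % bins
            hist[b] += mag
    return hist

# A vertical step edge yields purely horizontal gradients (gy == 0),
# so all the histogram mass lands in the 0-degree bin.
edge = [[0.0] * 4 + [1.0] * 4 for _ in range(8)]
hist = orientation_histogram(edge)
```

Binning by orientation rather than raw intensity is what makes the descriptor robust to the illumination changes that the detection applications listed above contend with.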