2020 IEEE 6th International Conference on Computer and Communications (ICCC) 2020
DOI: 10.1109/iccc51575.2020.9345244
Improved U-net for Zebra-crossing Image Segmentation

Cited by 4 publications (2 citation statements)
References 10 publications
“…In [57], the U-Net model was proposed for pedestrian crossing segmentation in images. A dataset of 150 training images and 30 validation images was created.…”
Section: Related Work
confidence: 99%
“…II. LITERATURE REVIEW. Zebra-crossings have been detected by looking for groups of concurrent lines [1]. Three methods for color detection and segmentation, including converting RGB images into the IHLS color space, have been tested on outdoor images [2]. Threshold-based image techniques such as the Gaussian filter, Canny edge detection, Contour, and Fit Ellipse [3][4] have been applied to traffic sign recognition together with the Kalman filter [5], as have the block-based Hough transform proposed by Yu-Quin Bao [6] and directional variance techniques [7]. A novel approach to detect and locate zebra-crossings was found to be feasible for use on public roads around the world [8]. One video-derived dataset provides 13,40 high-quality photo-realistic images spanning 13 classes of objects [9]. The design of a low-power, low-latency electronic mobility aid for blind persons showed that decision trees, random forests, and KNNs can all be used to recognise objects [10].…”
Section: Introduction
confidence: 99%
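Several of the cues surveyed above, notably the directional-variance technique [7], rest on the observation that a zebra crossing appears as alternating light and dark bands. A minimal, dependency-free sketch of that idea follows; all function names and thresholds are hypothetical, and a real detector would also rectify perspective and combine edge or Hough evidence as in [3]-[6]:

```python
def band_profile(gray):
    """Per-row mean intensity of a grayscale image (list of pixel rows),
    centered on zero so painted stripes and asphalt alternate in sign."""
    means = [sum(row) / len(row) for row in gray]
    mu = sum(means) / len(means)
    return [m - mu for m in means]

def count_bands(profile, thresh=10.0):
    """Number of alternating bands: one plus the sign changes among rows
    whose centered intensity clearly deviates from the image mean."""
    strong = [p for p in profile if abs(p) > thresh]
    changes = sum(1 for a, b in zip(strong, strong[1:]) if (a > 0) != (b > 0))
    return changes + 1

def looks_like_crossing(gray, min_bands=4):
    """Crude zebra-crossing test: enough alternating light/dark bands."""
    return count_bands(band_profile(gray)) >= min_bands
```

On a roughly frontal crop of a crossing, the row-mean profile flips sign at every stripe boundary, so `count_bands` returns the stripe count; on plain asphalt the profile stays near zero and the test fails.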