2019 IEEE/CVF International Conference on Computer Vision Workshop (ICCVW)
DOI: 10.1109/iccvw.2019.00312

Deep Learning Based Wearable Assistive System for Visually Impaired People

Abstract: In this paper, we propose a deep learning based assistive system to improve the environment perception experience of visually impaired (VI) people. The system is composed of a wearable terminal equipped with an RGBD camera and an earphone, a powerful processor mainly for deep learning inference, and a smartphone for touch-based interaction. A data-driven learning approach is proposed to predict safe and reliable walkable instructions using RGBD data and the established semantic map. This map is also used to help VI …

Cited by 55 publications (39 citation statements)
References 31 publications

“…DeepLabV3 is a semantic segmentation model used to define 15 obstacle classes, such as sidewalk, pole, and building [60]. FuseNet generated semantic images from RGB and RGB-D inputs to provide walkable instructions for the user [44]. Duh et al [39] and Yang et al [47] used semantic segmentation to recognize obstacles.…”
Section: Real-time Navigation
confidence: 99%
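
The snippet below is a minimal, illustrative sketch of how per-pixel semantic maps like those cited above can be produced with an off-the-shelf DeepLabV3 from torchvision. It uses torchvision's pretrained 21-class Pascal-VOC weights rather than the 15 obstacle classes of [60] or the FuseNet model of [44], and the input filename is a hypothetical placeholder.

```python
# Sketch: semantic segmentation with a pretrained DeepLabV3 (torchvision).
# NOTE: this is NOT the cited systems' model or label set; it only shows
# the general per-pixel classification step they build on.
import torch
from PIL import Image
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)

weights = DeepLabV3_ResNet50_Weights.DEFAULT
model = deeplabv3_resnet50(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("street_scene.jpg").convert("RGB")  # hypothetical input frame
batch = preprocess(image).unsqueeze(0)                 # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]     # (1, num_classes, H, W)
class_map = logits.argmax(dim=1)[0]  # per-pixel class indices, shape (H, W)

# A navigation layer could then compare class_map against a set of
# "walkable" labels (e.g., sidewalk-like classes) to derive traversable regions.
```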
“…The number of obstacles defines the number of covered objects in each dataset, whether or not they were applied to a BVIP use case. As shown in the table, no dataset defines all the obstacles needed from the BVIP's perspective [44, 60, 68]. Although Lin et al [44] built a dataset with 6000 obstacles for BVIP usage, this dataset contains only low-lying obstacles.…”
Section: Real-time Navigation
confidence: 99%
“…Lin et al [37] developed a wearable assistive system that generates collision-free instructions with touchscreen interaction, making full use of semantic segmentation maps. In [38, 39], instance-specific semantic segmentation was leveraged to help blind people recognize objects in their surroundings, using state-of-the-art instance segmentation models such as Mask R-CNN [40].…”
Section: Related Work
confidence: 99%
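
As a rough illustration of the instance-segmentation step described in [38, 39], the sketch below runs torchvision's pretrained Mask R-CNN [40] on a single frame. The score threshold and input filename are assumptions for illustration, not the cited systems' actual configuration.

```python
# Sketch: instance segmentation with a pretrained Mask R-CNN (torchvision),
# in the spirit of [38, 39] but not their exact models or label sets.
import torch
from PIL import Image
from torchvision.models.detection import (
    maskrcnn_resnet50_fpn,
    MaskRCNN_ResNet50_FPN_Weights,
)

weights = MaskRCNN_ResNet50_FPN_Weights.DEFAULT
model = maskrcnn_resnet50_fpn(weights=weights).eval()
preprocess = weights.transforms()

image = Image.open("indoor_scene.jpg").convert("RGB")  # hypothetical input frame
batch = [preprocess(image)]          # detection models take a list of images

with torch.no_grad():
    output = model(batch)[0]         # dict with boxes, labels, scores, masks

keep = output["scores"] > 0.7        # assumed confidence threshold
names = [weights.meta["categories"][int(i)] for i in output["labels"][keep]]

# Each entry of output["masks"][keep] is a (1, H, W) soft mask for one object;
# announcing `names` over audio would mirror the object-recognition use case.
```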