2017
DOI: 10.3233/ais-170459

IR stereo RealSense: Decreasing minimum range of navigational assistance for visually impaired individuals

Abstract: The introduction of RGB-D sensors is a revolutionary force that offers a portable, versatile and cost-effective solution for navigational assistance for the visually impaired. RGB-D sensors on the market such as the Microsoft Kinect, Asus Xtion and Intel RealSense are mature products, but all have a minimum detecting distance of about 800 mm. This results in the loss of depth information and the omission of short-range obstacles, posing a significant risk to navigation. This paper puts forward a simple and effective app…
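
The abstract is truncated before the method details, but the title indicates that the approach works on the RealSense's infrared stereo pair rather than its factory depth stream. As a rough, generic illustration of that idea (not the authors' pipeline), the sketch below computes disparity on a rectified IR image pair with OpenCV's semi-global matcher and converts it to depth via Z = f·B/d; the focal length, baseline and matcher parameters are placeholder assumptions.

```python
# Illustrative sketch only: estimate near-range depth from a rectified IR stereo
# pair with OpenCV block matching, instead of relying on the factory depth
# stream (which is typically cut off below roughly 800 mm). The focal length
# and baseline below are placeholders, not the calibration of any real device.
import cv2
import numpy as np

FX_PIXELS = 580.0    # assumed IR focal length in pixels (placeholder)
BASELINE_M = 0.07    # assumed IR stereo baseline in meters (placeholder)

def near_range_depth(ir_left: np.ndarray, ir_right: np.ndarray) -> np.ndarray:
    """Return a depth map in meters from a rectified 8-bit IR stereo pair."""
    matcher = cv2.StereoSGBM_create(
        minDisparity=0,
        numDisparities=128,   # wider disparity search resolves closer objects
        blockSize=7,
        P1=8 * 7 * 7,
        P2=32 * 7 * 7,
        uniquenessRatio=10,
        speckleWindowSize=100,
        speckleRange=2,
    )
    # OpenCV returns fixed-point disparities scaled by 16.
    disparity = matcher.compute(ir_left, ir_right).astype(np.float32) / 16.0
    depth = np.zeros_like(disparity)
    valid = disparity > 0
    depth[valid] = FX_PIXELS * BASELINE_M / disparity[valid]   # Z = f * B / d
    return depth
```

With these placeholder values, a 128-pixel disparity search corresponds to a minimum range of roughly 0.3 m, which illustrates how widening the disparity search trades computation for closer-range detection.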

Cited by 22 publications (15 citation statements); references 31 publications.
“…The development of CV has a close relationship with the stereo vision sensor. The RGB-D sensor has also received rising attention and been widely used because of its outstanding performance [5][6][7]. These sensors provide much more information than traditional assistive tools, as they acquire color information and perceive the environment in three dimensions at video frame rates.…”
Section: Introduction (mentioning)
confidence: 99%
“…These trends have accelerated the proliferation of monocular detectors and cost-effective RGB-Depth (RGB-D) sensors [5], providing essential prerequisites to aid perception and navigation in visually impaired individuals by leveraging robotic vision [7]. Along this line, a broad variety of navigational assistive technologies have been developed to accomplish specific goals, including avoiding obstacles [8,9,10,11,12,13,14,15,16,17], finding paths [18,19,20,21,22,23,24,25,26,27,28,29], locating sidewalks [30,31,32,33], ascending stairs [34,35,36,37,38] or descending steps [39,40], and negotiating water hazards [41].…”
Section: Introduction (mentioning)
confidence: 99%
“…Arguably, for navigation assistance, an even greater concern lies in the depth data from almost all commercial 3D sensors, which suffer from a limited depth range and cannot maintain robustness across various environments [22,26,29,37]. Inevitably, approaches based on a stereo camera or a light-coding RGB-D sensor generally perform range expansion [13,14] or depth enhancement [22], or depend on visual and depth information to complement each other [23]. Quite apart from the time these steps consume, underlying assumptions are frequently made, such as: the ground plane is the biggest area [9,10]; the area directly in front of the user is accessible [18,19]; and variants of the flat-world [24,36], Manhattan-world [23,27,35,38] or stixel-world assumptions [15,25,41].…”
Section: Introduction (mentioning)
confidence: 99%
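
The "ground plane is the biggest area" assumption mentioned in the statement above is commonly realized by fitting a dominant plane to the depth-derived point cloud. The following is a minimal, generic RANSAC sketch of that idea in NumPy; it is not drawn from any of the cited systems, and the iteration count and inlier threshold are illustrative values.

```python
# Generic sketch of the "ground plane is the biggest area" assumption: fit the
# dominant plane in a depth-derived point cloud with RANSAC and treat its
# inliers as candidate traversable ground. Thresholds are illustrative only.
import numpy as np

def fit_dominant_plane(points: np.ndarray, iterations: int = 200,
                       inlier_dist: float = 0.03, seed: int = 0):
    """points: (N, 3) array in meters. Returns ((a, b, c, d), inlier_mask)
    for the plane a*x + b*y + c*z + d = 0 with the most inliers."""
    rng = np.random.default_rng(seed)
    best_plane, best_mask = None, np.zeros(len(points), dtype=bool)
    for _ in range(iterations):
        sample = points[rng.choice(len(points), size=3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:          # degenerate (nearly collinear) sample
            continue
        normal /= norm
        d = -normal.dot(sample[0])
        distances = np.abs(points @ normal + d)   # point-to-plane distances
        mask = distances < inlier_dist
        if mask.sum() > best_mask.sum():
            best_plane, best_mask = (*normal, d), mask
    return best_plane, best_mask
```

Whether the largest plane really is the ground (rather than a wall or a table top) is exactly the kind of assumption the statement above flags as fragile, so practical systems typically add orientation and height checks before trusting it.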
“…The authors of [27] aimed at the development of a method, integrated into a wearable device, for efficient place recognition using multimodal data. In [28], a unifying terrain awareness framework was proposed, extending the basic vision system based on an IR RGB-D sensor proposed in [10] and aiming at efficient semantic understanding of the environment. The above approach, combined with a depth segmentation method, was integrated into a wearable navigation system.…”
(mentioning)
confidence: 99%