2020
DOI: 10.3390/e22090941

Efficient Multi-Object Detection and Smart Navigation Using Artificial Intelligence for Visually Impaired People

Abstract: Visually impaired people face numerous difficulties in their daily life, and technological interventions may assist them to meet these challenges. This paper proposes an artificial intelligence-based fully automatic assistive technology to recognize different objects, and auditory inputs are provided to the user in real time, which gives better understanding to the visually impaired person about their surroundings. A deep-learning model is trained with multiple images of objects that are highly relevant to the…

Cited by 63 publications (39 citation statements)
References 47 publications
“…This mini-network contains only seven convolutional layers and approximately six grouping layers. We can say that it is about five times slower in the sampling required to obtain ideal detection processes [67,68]. The structure of this network outlines through its annexes a mini-network that can later increase speed but loses accuracy.…”
Section: Discussion on the Architecture and Algorithms
Mentioning confidence: 99%
“…Duh et al [ 39 ] and Yang et al [ 47 ] used semantic segmentation to recognize obstacles. While Lin et al [ 61 ] switched between Faster R-CNN and YOLO on different modes, Joshi et al [ 68 ] used YOLO-v3. Chun et al [ 26 ] used laser (LiDAR) sensor measures to define the types of hazards (staircase, ramp, drainage, pothole, and step).…”
Section: Real-Time Navigation
Mentioning confidence: 99%
“…The number of obstacles defines the number of covered objects in each dataset whether they were applied for a BVIP use case or not. As shown in the table, there is no dataset that defines all needed obstacles from BVIP’s perspectives [ 44 , 60 , 68 ]. Although Lin et al [ 44 ] built a dataset with 6000 obstacles for BVIP’s usage, this dataset contains only low-lying obstacles.…”
Section: Real-Time Navigation
Mentioning confidence: 99%
“…The position coordinates and category probability of the target can be generated through the CNN to obtain the position coordinates and corresponding confidence of the target directly from the picture. Typical algorithms of the one-stage model include You Only Look Once (YOLO) [16] and SSD [17].…”
Section: Object Detection Algorithm
Mentioning confidence: 99%
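The statement above describes the defining property of one-stage detectors such as YOLO and SSD: box coordinates and class probabilities are read directly off the CNN output grid, with no separate region-proposal stage. A minimal sketch of that decoding step, assuming a hypothetical YOLO-style output tensor of shape (S, S, 5 + C) — per-cell box terms (x, y, w, h), an objectness score, and C class probabilities — not the specific model used in the cited papers:

```python
import numpy as np

def decode_one_stage(output, conf_threshold=0.5):
    """Decode a one-stage detector grid into detections.

    output: (S, S, 5 + C) array; each cell holds (x, y, w, h, objectness)
    followed by C class probabilities. Returns a list of
    ((x, y, w, h), confidence, class_id) tuples above the threshold.
    """
    S = output.shape[0]
    detections = []
    for i in range(S):
        for j in range(S):
            x, y, w, h, obj = output[i, j, :5]
            class_probs = output[i, j, 5:]
            cls = int(np.argmax(class_probs))
            # Final confidence = objectness * best class probability,
            # as in YOLO-style scoring.
            conf = float(obj * class_probs[cls])
            if conf >= conf_threshold:
                detections.append(((float(x), float(y), float(w), float(h)),
                                   conf, cls))
    return detections
```

In a real system the surviving boxes would additionally be filtered with non-maximum suppression; the two-stage Faster R-CNN model mentioned below instead generates region proposals first and classifies them separately, trading speed for accuracy.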
“…The YOLO series pursues detection speed at the expense of accuracy. The SSD algorithm strikes a balance between speed and accuracy, gaining a greater increase in speed at the expense of less decrease in accuracy [16,17]. However, in principle, the meter reading system in the substation does not allow false negatives, so this paper selects the Faster R-CNN model with higher detection accuracy and faster rate…”
Section: Algorithm Selection
Mentioning confidence: 99%