2018
DOI: 10.1007/978-3-319-78452-6_10
Intelligent Smart Glass for Visually Impaired Using Deep Learning Machine Vision Techniques and Robot Operating System (ROS)

Cited by 18 publications (14 citation statements)
References 5 publications
“…Embodied assistive technology includes mobility devices [3,4] (e.g., wheelchairs, prostheses, exoskeletons, or artificial limbs); specialized aids (e.g., hearing [5], vision [6][7][8], cognition [9], or communication [10]); and specific hardware, software, and peripherals that assist people with disabilities with accessing information technologies (e.g., computers and mobile devices). Although these systems provide valued help, they usually offer just one functionality and lack much intelligence (intelligence being understood as the ability to receive feedback from the environment and adapt their behavior).…”
Section: Introduction (mentioning)
confidence: 99%
“…In detail, the overall accuracy of the system based on the proposed method was estimated at 85.7%, whereas the methodology proposed in [38] produced an accuracy of 72.6% on the dataset described in Section 4.1. Additionally, in contrast to other methodologies such as [2,26,27,31,32], the proposed obstacle detection and recognition system is based solely on visual cues obtained from a single RGB-D sensor, minimizing the computational and energy resources required for the integration, fusion, and synchronization of multiple sensors.…”
Section: Discussion (mentioning)
confidence: 99%
“…In [3], the authors proposed joint object detection, tracking, and recognition in the context of the DEEP-SEE framework. Regarding wearable navigation aids for VCP, an intelligent smart glass system that exploits deep learning machine vision techniques and the Robot Operating System (ROS) was proposed in [2]. The system uses three CNN models, namely the Faster Region-Based CNN (Faster R-CNN) [33], the You Only Look Once (YOLO) model [34], and the Single Shot MultiBox Detector (SSD) [35].…”
Section: Obstacle Detection (mentioning)
confidence: 99%
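Faster R-CNN, YOLO, and SSD all share a final pruning step: overlapping candidate boxes are filtered by non-maximum suppression (NMS) before results are reported. The following is a minimal, self-contained sketch of that shared step in plain Python; the box format (x1, y1, x2, y2) and the 0.5 overlap threshold are illustrative assumptions, not details taken from the cited system.

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2); returns intersection-over-union in [0, 1].
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def nms(boxes, scores, thresh=0.5):
    # Greedily keep the highest-scoring box, then drop every remaining
    # box that overlaps it by more than `thresh`; repeat until done.
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    while order:
        i = order.pop(0)
        keep.append(i)
        order = [j for j in order if iou(boxes[i], boxes[j]) < thresh]
    return keep
```

For example, `nms([(0, 0, 10, 10), (1, 1, 11, 11), (20, 20, 30, 30)], [0.9, 0.8, 0.7])` keeps the first and third boxes: the second overlaps the first at IoU ≈ 0.68 and is suppressed.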
“…The task of visual servoing [29][30] (moving the camera to a desired orientation) is very similar to the visual odometry problem, which requires pose estimation for a different purpose [31][32][33]. These schemes are useful not only for navigation of rovers on the surfaces of other planets such as Mars [34] but also for tracking satellites that need to be repaired by a servicer [35]. Although these VO techniques have shown promising results for a variety of applications, they are sensitive to environmental changes such as lighting conditions, surrounding texture, and the presence of water, snow, etc.…”
Section: Introduction (mentioning)
confidence: 99%
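At the core of both visual servoing and visual odometry is estimating the camera's frame-to-frame pose from matched features. In the planar case the rigid transform between two matched point sets has a closed form (rotation from the cross/dot products of centered points, then translation from the centroids). A pure-Python sketch under the simplifying assumptions of noise-free, outlier-free 2D correspondences; real pipelines work with 2D-3D geometry and robust estimation:

```python
import math

def estimate_rigid_2d(src, dst):
    # Recover theta and (tx, ty) such that dst[i] = R(theta) @ src[i] + t,
    # from matched 2D point pairs (exact for noise-free correspondences).
    n = len(src)
    csx = sum(p[0] for p in src) / n
    csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n
    cdy = sum(p[1] for p in dst) / n
    # Accumulate cross- and dot-products of centered points:
    # the cross term gives sin(theta), the dot term cos(theta).
    num = den = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy
        bx, by = dx - cdx, dy - cdy
        num += ax * by - ay * bx
        den += ax * bx + ay * by
    theta = math.atan2(num, den)
    c, s = math.cos(theta), math.sin(theta)
    # Translation maps the rotated source centroid onto the target centroid.
    tx = cdx - (c * csx - s * csy)
    ty = cdy - (s * csx + c * csy)
    return theta, tx, ty
```

Chaining such per-frame estimates yields the accumulated trajectory, which is exactly where the sensitivity to lighting, texture, water, and snow noted above enters: it corrupts the feature matches that this estimator takes as input.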