2020
DOI: 10.1109/thms.2020.3027534
An AI-Based Visual Aid With Integrated Reading Assistant for the Completely Blind

Cited by 80 publications (31 citation statements) · References 39 publications
“…The aforementioned system is divided into two parts, each with a separate microcontroller, which increases its cost and complexity. Another AI-based visual aid system for the completely blind was proposed [31]. The system consists of a camera and sensors for obstacle avoidance, image-processing algorithms for object detection, and an integrated reading assistant that reads images and produces audio output.…”
Section: Related Work
confidence: 99%
“…Transient errors, while not permanently damaging hardware components, can cause transient bit flips (from 0 to 1 or from 1 to 0) that lead to execution failures or incorrect results. Researchers in academia and industry typically exploit the redundancy inherent in modern systems and the availability of redundant resources (such as extra cores, time, or threads) to provide effective solutions for tolerating soft errors [21]. Error detection, error containment, and error recovery are the usual building blocks of reliability against soft errors.…”
Section: Ind-var11
confidence: 99%
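The redundancy-based tolerance described above can be sketched with the classic triple modular redundancy (TMR) pattern: run the computation three times (e.g. on spare cores or threads) and mask a single transient bit flip by majority vote. This is a minimal illustrative sketch, not code from the cited paper; all names are hypothetical.

```python
from collections import Counter

def majority_vote(results):
    """Return the value produced by the majority of redundant runs.

    Masks a single corrupted result among three replicas; raises when
    no majority exists (error detected but not recoverable).
    """
    value, count = Counter(results).most_common(1)[0]
    if count <= len(results) // 2:
        raise RuntimeError("no majority: error detected but not recoverable")
    return value

# One replica suffers a transient bit flip (42 -> 43); the vote masks it.
replicas = [42, 43, 42]
print(majority_vote(replicas))  # 42
```

Voting covers error detection (disagreement) and recovery (masking) in one step; containment comes from keeping the replicas isolated so a flip in one cannot corrupt the others.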
“…The third class of devices includes voice synthesizers, Braille output terminals, and magnifiers that help the user read and access information. A smart wearable glass was developed using a Raspberry Pi (RPi) and an ultrasonic sensor by Khan et al. [1]. In their work, they developed modules for obstacle detection and a reading assistant.…”
Section: Related Work
confidence: 99%
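An ultrasonic sensor of the kind paired with a Raspberry Pi in such systems reports the round-trip time of an ultrasonic pulse; the obstacle distance is half that time multiplied by the speed of sound. The sketch below shows only this conversion step, with illustrative values (it does not use the cited authors' code or any GPIO API).

```python
# Speed of sound in dry air at roughly 20 degrees C.
SPEED_OF_SOUND_M_S = 343.0

def echo_time_to_distance_m(round_trip_s: float) -> float:
    """Convert a round-trip echo time (seconds) to a one-way distance (metres).

    The pulse travels to the obstacle and back, so the one-way distance
    is half the total path covered during the round trip.
    """
    return round_trip_s * SPEED_OF_SOUND_M_S / 2.0

# A ~5.83 ms round trip corresponds to an obstacle roughly 1 m away.
print(round(echo_time_to_distance_m(0.00583), 2))
```

In a real obstacle-detection module this distance would be compared against a threshold to trigger an audio warning.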
“…The Encoder is a Convolutional Neural Network (CNN) that generates a fixed-size feature vector for the input image. Instead of the VGG16 model [10] used by [9] or the ResNet-101 model used by [1], this paper uses the Inception-V3 model trained on the ImageNet dataset as the encoder. Inception-V3 [11] has been chosen because it acts as a multiple feature extractor by computing 1x1, 3x3, and 5x5 convolutions within the same module of the inception network.…”
Section: Encoder
confidence: 99%