2021
DOI: 10.3991/ijim.v15i19.24139

A Development of Multi-Language Interactive Device using Artificial Intelligence Technology for Visual Impairment Person

Abstract: The lack of braille reference books in most public buildings is a crucial issue, especially in public places such as libraries and museums. Visually impaired or blind people do not get access to this information the way people with normal vision do. Therefore, a multi-language reading device for the visually impaired is designed and built to overcome the shortage of braille reference books in public places. Research on currently available products was carried out to develop a better reading device. Thi…

Cited by 7 publications (6 citation statements)
References 4 publications (6 reference statements)
“…Teófilo et al (2018) combined AI techniques, including a speech-to-text algorithm for Portuguese, a sentence prediction algorithm for selecting the correct speech based on initial text, and a word correction algorithm to ensure the converted words are valid in the Portuguese language. Harum et al (2021) leveraged third party apps such as Google Cloud services, including the Cloud Vision API for image-to-text conversion, the Cloud Translation API for translating the converted text, and Google Cloud Text-to-Speech for converting the translated text into speech.…”
Section: Computer Vision (mentioning)
Confidence: 99%
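The pipeline attributed to Harum et al. (2021) above chains three Google Cloud services: Cloud Vision for image-to-text, Cloud Translation for translating the recognised text, and Cloud Text-to-Speech for producing the spoken output. A minimal sketch of that chain using the official Google Cloud Python clients is shown below; the target language (Malay, "ms"/"ms-MY"), voice settings, and file handling are illustrative assumptions, not details taken from the cited report.

```python
# Sketch of the image -> text -> translation -> speech chain described above,
# using the official Google Cloud Python client libraries.
# The target language and audio settings are assumptions for illustration only.
from google.cloud import vision
from google.cloud import translate_v2 as translate
from google.cloud import texttospeech


def read_page_aloud(image_path: str, target_language: str = "ms") -> bytes:
    # 1) Cloud Vision API: extract printed text from the captured page image.
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    ocr = vision.ImageAnnotatorClient().text_detection(image=image)
    text = ocr.text_annotations[0].description if ocr.text_annotations else ""

    # 2) Cloud Translation API: translate the recognised text.
    result = translate.Client().translate(text, target_language=target_language)
    translated_text = result["translatedText"]

    # 3) Cloud Text-to-Speech: synthesise the translated text as audio.
    tts = texttospeech.TextToSpeechClient()
    response = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=translated_text),
        voice=texttospeech.VoiceSelectionParams(language_code="ms-MY"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3
        ),
    )
    return response.audio_content  # MP3 bytes ready to play back to the user
```

The three calls map one-to-one onto the services named in the citation statement; authentication (a service-account credential in the environment) and audio playback on the device are left out of the sketch.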
“…AI, including fuzzy logic, is being used to develop applications and mobile devices that are useful for those with visual impairment, including the elderly. Harum et al., for instance, have used AI technology and smartphone application technology such as the Digital Daisy Book Reader to develop a multi-language interactive book reading device that can be used by the visually impaired to 'read' information in public places [24]. Conversely, mobile apps for smartphones are increasingly being developed to help users manage a huge range of human needs, from finding ways for dysarthric children (children with a neurological disorder that damages their ability to speak) to communicate using 'daily usable conversation terms' [25], to encouraging inveterate online texters to find more polite ways of communicating [26].…”
Section: Literature (mentioning)
Confidence: 99%
“…database that can directly keep the data in a server [6], [13], [14], [16], [17]. Open issue: limited display/dashboard, i.e., an LCD/tool without an interactive user interface; possible solution: user-friendly dashboard for monitoring [12], [14], [17], [19], [15]. Open issue: no health check monitoring embedded; possible solution: a module to capture user temperature/health condition [11]-[19]. RFID Attendance System with Contagious Disease Prevention Module Using Internet-of-Things Technology…”
Section: Open Issue / Possible Solution / References (mentioning)
Confidence: 99%