2020
DOI: 10.3390/electronics9122093

A Multimodal User Interface for an Assistive Robotic Shopping Cart

Abstract: This paper presents the research and development of a prototype of the assistive mobile information robot (AMIR). The main features of the presented prototype are voice- and gesture-based interfaces, with Russian speech and sign language recognition and synthesis techniques, and a high degree of robot autonomy. The AMIR prototype is intended to serve as a robotic cart for shopping in grocery stores and/or supermarkets. Among the main topics covered in this paper are the presentation of the interface (three modalities…
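As a rough illustration of the multimodal (voice plus gesture) command interface described in the abstract, the minimal Python sketch below shows one way independent speech and gesture recognizers could feed a single command layer. It is not the authors' implementation: the event structure, the command names, and the confidence-threshold fusion rule are assumptions made purely for illustration.

# Illustrative sketch only, not the AMIR implementation: two independent
# recognizers (speech and gesture) emit labeled events, and the most
# confident event above a threshold is mapped to a cart command.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class RecognitionEvent:
    modality: str      # hypothetical labels: "speech" or "gesture"
    label: str         # recognizer output, e.g. "follow_me"
    confidence: float  # recognizer confidence in [0, 1]

# Hypothetical mapping from recognizer labels to cart commands.
COMMANDS = {
    "follow_me": "FOLLOW_USER",
    "stop": "STOP",
    "go_to_section": "NAVIGATE_TO_SECTION",
}

def fuse(events: List[RecognitionEvent], threshold: float = 0.7) -> Optional[str]:
    """Pick the most confident usable event and map it to a command."""
    confident = [e for e in events if e.confidence >= threshold and e.label in COMMANDS]
    if not confident:
        return None  # neither modality produced a usable result
    best = max(confident, key=lambda e: e.confidence)
    return COMMANDS[best.label]

if __name__ == "__main__":
    # Example: the gesture recognizer is more confident than the speech recognizer.
    events = [
        RecognitionEvent("speech", "stop", 0.55),
        RecognitionEvent("gesture", "follow_me", 0.91),
    ]
    print(fuse(events))  # -> FOLLOW_USER

In a late-fusion design like this, each modality can be developed and evaluated separately, and either one alone is sufficient to issue a command, which suits users who rely on sign language rather than speech.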

Cited by 24 publications (14 citation statements). References 47 publications.
“…In further research, we plan to test the proposed solution on different robotic platforms [26,27] in order to assess its accuracy and operation speed on low-powered computational devices.…”
Section: Discussion (mentioning)
confidence: 99%
“…Multimodal interaction is also integrated into the assistive robotic shopping cart for people with hearing impairments presented in [53]. This work focuses on Russian sign language training through single-handed gesture recognition and a touch control screen.…”
Section: Navigation and Human-Robot Interaction (mentioning)
confidence: 99%
“…Finally, solving the dataset problem ultimately depends strongly on the sign language a researcher works with. The authors of the present article introduced their own dataset for Russian sign language in a three-dimensional format [53]; this dataset was subsequently used in a system for automatic recognition of Russian sign language gestures [54][55][56]. Introduction: Currently, the recognition of gestures and sign languages is one of the most intensively developing areas in computer vision and applied linguistics.…”
Section: Conclusions for the section (unclassified)