2019
DOI: 10.1007/978-3-030-21565-1_8

Exploring Temporal Dependencies in Multimodal Referring Expressions with Mixed Reality

Abstract: In collaborative tasks, people rely on both verbal and nonverbal cues simultaneously to communicate with each other. For human-robot interaction to run smoothly and naturally, a robot should be equipped with the ability to robustly disambiguate referring expressions. In this work, we propose a model that can disambiguate multimodal fetching requests using modalities such as head movements, hand gestures, and speech. We analysed the acquired data from mixed reality experiments and formulated a hypothesis that mo…
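The paper's own model is not reproduced in this record, but the core idea the abstract describes, combining head movement, gesture, and speech cues to pick out a referent, can be illustrated with a minimal late-fusion sketch. The sketch below is a hypothetical illustration, not the authors' implementation: the modality names, weights, and scores are all assumed for the example.

```python
# Hypothetical late-fusion sketch of multimodal referring-expression
# disambiguation; NOT the model from the paper. Modality names,
# weights, and scores are illustrative assumptions only.
from typing import Dict

# Per-modality scores for each candidate object, e.g. from a head-pose
# tracker, a gesture recognizer, and a speech/language grounding module.
ModalityScores = Dict[str, Dict[str, float]]


def disambiguate(scores: ModalityScores, weights: Dict[str, float]) -> str:
    """Return the candidate object with the highest weighted fused score."""
    candidates = next(iter(scores.values())).keys()
    fused = {
        obj: sum(weights[m] * scores[m][obj] for m in scores)
        for obj in candidates
    }
    return max(fused, key=fused.get)


# Example: three candidate objects, three modalities (assumed values).
scores = {
    "head":    {"cup": 0.6, "box": 0.3, "ball": 0.1},
    "gesture": {"cup": 0.5, "box": 0.4, "ball": 0.1},
    "speech":  {"cup": 0.2, "box": 0.7, "ball": 0.1},
}
weights = {"head": 0.3, "gesture": 0.3, "speech": 0.4}
print(disambiguate(scores, weights))  # -> "box"
```

Here the speech cue outweighs the slight head/gesture preference for "cup", so the fused decision is "box"; the paper's contribution concerns how such cues depend on each other over time, which a static weighted sum like this does not capture.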

Cited by 5 publications (1 citation statement)
References 26 publications

“…Increasing attention has been put on development of intuitive and seamless HRC systems where human intention is recognized to allow for adapting robot behavior in real-time. Thus, human body movement prediction [11], gaze and gesture recognition [12] have been put forward as more intuitive means for collaboration in comparison to other cues [1], e.g., auditory, force/pressure [13], bio-signals, etc. We review relevant work based on the three problems considered in this work in the scope of human-robot collaborative systems: human action recognition, dealing with and modeling the uncertainty, and robot action planning and execution.…”
Section: Related Work (mentioning)
confidence: 99%