Proceedings of the 2022 International Conference on Multimodal Interaction 2022
DOI: 10.1145/3536221.3557034
Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces

Abstract: With the recently increasing capabilities of modern vehicles, novel approaches for interaction emerged that go beyond traditional touch-based and voice command approaches. Therefore, hand gestures, head pose, eye gaze, and speech have been extensively investigated in automotive applications for object selection and referencing. Despite these significant advances, existing approaches mostly employ a one-model-fits-all approach unsuitable for varying user behavior and individual differences. Moreover, current re…

Cited by 3 publications (3 citation statements)
References 41 publications
“…Although these approaches apply to different domains, we focus on the automotive domain as an example of the rich work on driver personalization. More specifically, we demonstrate our suggestion on some of our previous work in the field of adaptive user interaction for the automotive domain (Gomaa et al., 2020; Gomaa, 2022; Gomaa et al., 2022; Meiser et al., 2022; Feld et al., 2019); however, the underlying learning techniques are valid for other domains as well.…”
Section: Introduction (supporting)
Confidence: 53%
“…The first stage of the proposed plan is to understand the variances in driver behavior when performing the multimodal referencing task as in (Gomaa et al., 2020; 2022; Rümelin et al., 2013). As an example, in the automotive domain, drivers perform different multimodal gestures to control the vehicle and query surrounding objects.…”
Section: Proposed Methodology (mentioning)
Confidence: 99%