2018
DOI: 10.3390/mti2040081
Semantic Fusion for Natural Multimodal Interfaces using Concurrent Augmented Transition Networks

Abstract: Semantic fusion is a central requirement of many multimodal interfaces. Procedural methods like finite-state transducers and augmented transition networks have proven to be beneficial to implement semantic fusion. They are compliant with rapid development cycles that are common for the development of user interfaces, in contrast to machine-learning approaches that require time-costly training and optimization. We identify seven fundamental requirements for the implementation of semantic fusion: Action derivati…
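To illustrate the procedural approach the abstract describes, the following is a minimal, hypothetical sketch of an augmented-transition-network (ATN) style fuser that combines a speech token stream with pointing-gesture events for a "put that there" command. All class and state names are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical ATN-style semantic fusion sketch (not the paper's code).
# States advance on modality-specific events; registers accumulate
# slot values, as in an augmented transition network.
from dataclasses import dataclass


@dataclass
class Event:
    modality: str  # "speech" or "gesture"
    value: str     # recognized token or referenced object id
    time: float    # timestamp in seconds


class ATNFuser:
    """Transitions fire on matching events; registers collect slots."""

    def __init__(self):
        self.state = "start"
        self.registers = {}
        # (state, modality, predicate, next_state, action)
        self.transitions = [
            ("start", "speech", lambda e: e.value == "put",
             "await_object", None),
            ("await_object", "speech", lambda e: e.value == "that",
             "await_deixis", None),
            ("await_deixis", "gesture", lambda e: True,
             "await_target",
             lambda e, r: r.__setitem__("object", e.value)),
            ("await_target", "speech", lambda e: e.value == "there",
             "await_deixis2", None),
            ("await_deixis2", "gesture", lambda e: True,
             "done",
             lambda e, r: r.__setitem__("target", e.value)),
        ]

    def feed(self, event):
        for state, modality, pred, nxt, action in self.transitions:
            if state == self.state and modality == event.modality and pred(event):
                if action:
                    action(event, self.registers)
                self.state = nxt
                return
        # No matching transition: ignore the event here.
        # A full ATN would support backtracking and timeouts.


fuser = ATNFuser()
for ev in [Event("speech", "put", 0.0),
           Event("speech", "that", 0.3),
           Event("gesture", "box_3", 0.35),
           Event("speech", "there", 0.8),
           Event("gesture", "table_1", 0.85)]:
    fuser.feed(ev)

print(fuser.state, fuser.registers)
# → done {'object': 'box_3', 'target': 'table_1'}
```

The appeal of the procedural formulation, per the abstract, is that such a network can be authored and revised within a normal interface-development cycle, without any training data.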

Cited by 5 publications
(1 citation statement)
References 55 publications
“…Figure 2: Examples of XR-AI integrations. From upper left to lower right: A user interacting with an intelligent virtual agent to solve a construction task (Latoschik, 2005) and interaction with an agent actor in Madame Bovary, an interactive intelligent storytelling piece (Cavazza et al., 2007), both in a CAVE (Cruz-Neira et al., 1992); virtual agents in Augmented Reality (AR) (Obaid et al., 2012) and in Mixed Reality (MR) (Kistler et al., 2012); speech and gesture interaction in a virtual construction scenario in front of a power wall (Latoschik and Wachsmuth, 1998) and in a CAVE (Latoschik, 2005); multimodal interactions in game-like scenarios, fully immersed using a Head-Mounted Display (HMD) (Zimmerer et al., 2018b) and at an MR tabletop (Zimmerer et al., 2018a).…”
mentioning
confidence: 99%