2005
DOI: 10.1007/1-4020-3933-6_14

MIAMM — A Multimodal Dialogue System Using Haptics

Abstract: In this chapter we describe the MIAMM project. Its objective is the development of new concepts and techniques for user interfaces employing graphics, haptics and speech to allow fast and easy navigation in large amounts of data. This goal poses challenges as to how the information and its structure can be characterized by means of visual and haptic features, how the architecture of such a system is to be defined, and how the interfaces between the modules of a multi-modal system can be standardized.

Cited by 19 publications (9 citation statements)
References 6 publications
“…Finally, as regards interactive systems that generate data visualizations more generally, the vast majority of those are not focused on natural, conversational interaction: (Gao et al., 2015) does not provide two-way communication; the number of supported query types is limited in both (Cox et al., 2001) and (Reithinger et al., 2005), while (Sun et al., 2013) uses simple NLP methods that limit the extent of natural language understanding possible. EVIZA (Setlur et al., 2016), perhaps the closest project to our own, does provide a dialogue interface for users to explore visualizations; however, EVIZA focuses on supporting a user interacting with one existing visualization, and doesn't cover creating a new visualization, modifying the existing one, or interacting with more than one visualization at a time.…”
Section: Related Work
confidence: 99%
“…Systems like AutoBrief (Green et al., 2004) focus on producing graphics accompanied by text, or on finding the appropriate graphics to accompany existing text (Li et al., 2013). (Cox et al., 2001; Reithinger et al., 2005) were among the first to integrate a dialogue interface into an existing information visualization system, but they support only a small range of questions. Our own Articulate maps NL queries to statistical visualizations by using very simple NLP methods.…”
Section: Related Work
confidence: 99%
“…In earlier projects [27, 17] we integrated different subcomponents into multimodal interaction systems. In that work, hub-and-spoke dialogue frameworks played a major role [18].…”
Section: Related Work
confidence: 99%
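
The hub-and-spoke arrangement mentioned in the last statement routes all inter-module traffic through a central hub instead of wiring each module to every other one. As a rough sketch only (the Hub class, the speech and graphics module names, and the message fields below are hypothetical illustrations, not taken from MIAMM or the framework cited as [18]):

    # Hypothetical hub-and-spoke sketch: modules ("spokes") communicate
    # only through a central hub that routes typed messages.
    from collections import defaultdict
    from typing import Callable, Dict, List

    Message = Dict[str, object]          # e.g. {"type": "speech.input", "text": "..."}
    Handler = Callable[[Message], None]

    class Hub:
        """Central router: the single component every module connects to."""
        def __init__(self) -> None:
            self._subs: Dict[str, List[Handler]] = defaultdict(list)

        def subscribe(self, msg_type: str, handler: Handler) -> None:
            self._subs[msg_type].append(handler)

        def publish(self, message: Message) -> None:
            # Deliver the message to every module registered for its type.
            for handler in list(self._subs[str(message["type"])]):
                handler(message)

    hub = Hub()
    # The speech and graphics modules never reference each other directly;
    # each knows only the hub and the message types it handles.
    hub.subscribe("speech.input",
                  lambda m: hub.publish({"type": "graphics.highlight",
                                         "target": m["text"]}))
    hub.subscribe("graphics.highlight",
                  lambda m: print("highlighting:", m["target"]))

    hub.publish({"type": "speech.input", "text": "show jazz albums"})

The appeal of this pattern in a multimodal setting is that adding a new modality, say haptics, means registering one more spoke with the hub rather than rewiring the existing speech and graphics components.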