Proceedings of the 2009 International Conference on Multimodal Interfaces (ICMI 2009)
DOI: 10.1145/1647314.1647344
A fusion framework for multimodal interactive applications

Cited by 5 publications (3 citation statements)
References 10 publications
“…SKEMMI implements computer vision techniques for mass detection, feature extraction and clustering, and incorporates augmented reality, multimodal high-level fusion [10], and multiple video projections. Mass clustering decomposes the input signals of the mass to form clusters (e.g., corresponding to a portion of the mass) to interpret their signals more accurately.…”
Section: SKEMMI: A Development Environment for Mass-Computer Interaction
confidence: 99%
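The statement above describes clustering per-user input signals into groups before interpreting them. The cited works do not show their implementation; the sketch below is only a hedged illustration, using k-means over hypothetical per-user feature vectors (the features and the choice of k-means are assumptions, not taken from SKEMMI).

```python
# Hypothetical sketch: cluster per-user feature vectors (e.g. position and
# motion energy) into groups before interpreting their combined signal.
# Neither the features nor k-means are taken from the cited work.
import numpy as np

def kmeans(features: np.ndarray, k: int, iters: int = 50, seed: int = 0):
    """Plain k-means: returns (centroids, cluster label per user)."""
    rng = np.random.default_rng(seed)
    centroids = features[rng.choice(len(features), size=k, replace=False)]
    for _ in range(iters):
        # Assign each user to the nearest centroid.
        dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid; keep the old one if a cluster went empty.
        for c in range(k):
            if np.any(labels == c):
                centroids[c] = features[labels == c].mean(axis=0)
    return centroids, labels

if __name__ == "__main__":
    # 200 simulated users, each described by (x, y, motion energy).
    rng = np.random.default_rng(1)
    users = np.vstack([rng.normal(loc=c, scale=0.3, size=(50, 3))
                       for c in ([0, 0, 1], [3, 0, 2], [0, 3, 1], [3, 3, 3])])
    _, labels = kmeans(users, k=4)
    print("cluster sizes:", np.bincount(labels))
```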
“…Users produce individual signals that need to be fused by mass clustering, while the output should be subjected to multi-level fission [10] at the individual, cluster, and mass levels of granularity. SKEMMI displays individual signals via a pulsing aura and shows scores at the level of a user, a cluster of users (e.g., a region), a group (e.g., a team in an auditorium), and the mass, which promotes social recognition and stimulation in competition.…”
Section: Mass-Computer Interaction Characteristics
confidence: 99%
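The passage above refers to multi-level fission of output at individual, cluster, and mass granularity. The field names and aggregation rules below are illustrative assumptions rather than SKEMMI's actual pipeline; the sketch only shows how one per-user score stream could be rendered at the three levels.

```python
# Hypothetical sketch of multi-level output fission: the same per-user scores
# are presented at individual, cluster, and mass granularity. Names and
# aggregation rules are assumptions for illustration only.
from collections import defaultdict
from statistics import mean

def fission(scores: dict[str, float], cluster_of: dict[str, str]) -> dict:
    """Split per-user scores into feedback for each level of granularity."""
    per_cluster = defaultdict(list)
    for user, score in scores.items():
        per_cluster[cluster_of[user]].append(score)

    return {
        # Individual level: e.g. drive each user's "pulsing aura".
        "individual": scores,
        # Cluster level: e.g. a score shown per region of the audience.
        "cluster": {c: mean(v) for c, v in per_cluster.items()},
        # Mass level: one aggregate value for the whole crowd.
        "mass": mean(scores.values()),
    }

if __name__ == "__main__":
    scores = {"u1": 0.9, "u2": 0.4, "u3": 0.7, "u4": 0.2}
    cluster_of = {"u1": "left", "u2": "left", "u3": "right", "u4": "right"}
    print(fission(scores, cluster_of))
```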
“…The key reasoning behind its use is that human interaction is multimodal in the real world [74,75], including interactions between an individual and trainer [76]. The order and pattern by which multimodal information should be integrated has long been the subject of study in the field of human-computer interactions [77][78][79][80][81]. To discuss the various ways by which multimodal feedback can be presented, the 2×2 classification of multimodal interfaces by Nigay and Coutaz [82] is used as a basis (Table 1).…”
Section: Multimodal Integration
confidence: 99%
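Nigay and Coutaz's 2×2 design space is commonly summarized by crossing use of modalities (sequential vs. parallel) with fusion (independent vs. combined). The lookup below encodes that commonly cited reading as a rough aid; the exact labels should be checked against the original paper and Table 1 of the citing article.

```python
# Commonly cited reading of Nigay & Coutaz's 2x2 classification of multimodal
# interfaces: use of modalities (sequential/parallel) x fusion
# (independent/combined). Verify the labels against the original paper.
CLASSIFICATION = {
    ("sequential", "independent"): "exclusive",
    ("sequential", "combined"):    "alternate",
    ("parallel",   "independent"): "concurrent",
    ("parallel",   "combined"):    "synergistic",
}

def classify(use_of_modalities: str, fusion: str) -> str:
    return CLASSIFICATION[(use_of_modalities, fusion)]

if __name__ == "__main__":
    # A system that accepts speech and gesture at the same time and fuses them:
    print(classify("parallel", "combined"))  # -> synergistic
```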