Proceedings of the 5th International Conference on Multimodal Interfaces (ICMI '03), 2003
DOI: 10.1145/958436.958438
Mutual disambiguation of 3D multimodal interaction in augmented and virtual reality

Abstract: We describe an approach to 3D multimodal interaction in immersive augmented and virtual reality environments that accounts for the uncertain nature of the information sources. The resulting multimodal system fuses symbolic and statistical information from a set of 3D gesture, spoken language, and referential agents. The referential agents employ visible or invisible volumes that can be attached to 3D trackers in the environment, and which use a time-stamped history of the objects that intersect them to derive …

Cited by 30 publications (34 citation statements)
References 16 publications
“…The multimodal interaction architecture [14], used to develop a system that flips a dual monitor in VR (Flip the monitor) and changes the color of a chair in AR, integrates voice recognition, gesture interaction, and eye tracking for interaction with a 3D agent.…”
Section: Precedents and Context (mentioning)
Confidence: 99%
“…A naturally occurring pointing gesture, made by someone sitting across a table toward a single small element on a board sketch, say a milestone represented as a diamond-shaped figure a few centimeters wide, would take a long time to make if it were to be unambiguously precise. Human beings overcome this difficulty by moving closer to their targets, or by adding complementary information [17] via speech, disambiguating quick gestures toward an approximate region of focus by naming or describing the specific objects within this region [10].…”
Section: Distributed Collaboration Support (mentioning)
Confidence: 99%
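The disambiguation strategy this excerpt describes, a coarse pointing region narrowing the candidates and a spoken description selecting among them, can be illustrated with a minimal sketch. All names, the circular-region model, and exact-string name matching below are hypothetical simplifications, not the method of the cited papers.

```python
# Hypothetical sketch: a quick gesture yields an approximate circular
# region of focus on the board; the spoken word selects among the
# candidates inside it. Either cue alone may be ambiguous.
from dataclasses import dataclass
from math import hypot

@dataclass
class BoardObject:
    name: str   # label that speech can refer to, e.g. "milestone"
    x: float    # position in board coordinates
    y: float

def disambiguate(objects, focus_x, focus_y, focus_radius, spoken_name):
    """Return objects inside the region of focus that also match
    the spoken description."""
    in_region = [o for o in objects
                 if hypot(o.x - focus_x, o.y - focus_y) <= focus_radius]
    return [o for o in in_region if o.name == spoken_name]

objs = [BoardObject("milestone", 0.20, 0.30),
        BoardObject("task",      0.25, 0.28),
        BoardObject("milestone", 0.90, 0.90)]

# A vague gesture near (0.22, 0.30) covers two objects, but adding the
# word "milestone" leaves exactly one candidate.
result = disambiguate(objs, 0.22, 0.30, 0.1, "milestone")
print(result)
```

The point of the sketch is that neither channel resolves the reference alone: the region contains two objects, and there are two milestones on the board, but their intersection is unique.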
“…Following MAVEN [10], the region of focus is expressed as the intersection of a cone originating at the tip of the pointing arm with the surface of the interactive board. The angle of the cone is set based on an empirical evaluation (see Section 5) to reflect the pointing precision achievable from a sitting position at a meeting table some distance from the interactive board.…”
Section: Distributed Collaboration Support (mentioning)
Confidence: 99%
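The cone-based region of focus described above reduces, for any single target point, to a point-in-cone test: a point lies in the region if the angle between the pointing axis and the vector from the arm tip to the point is within the cone's half-angle. The sketch below assumes such a half-angle formulation and 3D Cartesian coordinates; the function name and the example geometry are illustrative, not taken from the cited papers.

```python
# Hypothetical point-in-cone test for a pointing gesture: the cone's
# apex is the tip of the pointing arm, its axis is the pointing
# direction, and its half-angle encodes the gesture's precision.
from math import sqrt, cos, radians

def in_pointing_cone(apex, direction, half_angle_deg, target):
    """True if `target` lies inside the cone with vertex `apex`,
    axis `direction`, and the given half-angle in degrees."""
    dx, dy, dz = direction
    dn = sqrt(dx * dx + dy * dy + dz * dz)
    vx, vy, vz = (target[i] - apex[i] for i in range(3))
    vn = sqrt(vx * vx + vy * vy + vz * vz)
    if vn == 0:
        return True  # the apex itself is trivially inside
    # Compare the angle between axis and target vector via cosines.
    cos_angle = (vx * dx + vy * dy + vz * dz) / (vn * dn)
    return cos_angle >= cos(radians(half_angle_deg))

# Arm tip at the origin, pointing along +z toward a board at z = 2.
apex, axis = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
near = in_pointing_cone(apex, axis, 10.0, (0.1, 0.0, 2.0))  # close to the axis
far = in_pointing_cone(apex, axis, 10.0, (1.5, 0.0, 2.0))   # well off the axis
print(near, far)
```

Sweeping this test over the board plane carves out the (approximately elliptical) intersection region; widening the half-angle trades precision for tolerance of quick, coarse gestures, which is what the empirical calibration in the excerpt tunes.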