2000
DOI: 10.1207/s15327051hci1504_1
Designing the User Interface for Multimodal Speech and Pen-Based Gesture Applications: State-of-the-Art Systems and Future Research Directions

Cited by 275 publications (150 citation statements)
References 72 publications
“…These sensors include, for example: pen-based interaction (S.L. Oviatt, 2000), the computer mouse and keyboard, joysticks, motion-tracking sensors, and tactile, pressure, olfactory, and gustatory sensors (Fakhreddine Karray, 2008).…”
Section: Modalita HCI interakce (HCI interaction modality)
Citation type: unclassified
“…For example, a large amount of training data is required to build this type of system, and multimodal training corpora are not readily available [39].…”
Section: Error Reduction By Design
Citation type: mentioning
confidence: 99%
“…The other fundamental type of multimodal architecture is called a "semantic-level architecture" and is better suited to less tightly coupled modalities such as speech and pen [39]. In this type of architecture, information is typically integrated at the semantic or pragmatic level [50].…”
Section: Automatic Detection
Citation type: mentioning
confidence: 99%
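
The citing passage above describes semantic-level (late) fusion: each modality is recognized independently, and the resulting partial interpretations are unified afterwards at the semantic level. The sketch below illustrates that idea. It is a minimal illustration only, not code from the cited paper or the systems it surveys: the `Frame` fields, the `interpret_speech`/`interpret_pen` stand-ins, the time window, and the merging rule are all assumptions made for the example.

```python
# Minimal sketch of semantic-level ("late") fusion for speech + pen input.
# All names, values, and the merging rule are hypothetical illustrations.
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class Frame:
    """Partial semantic interpretation produced by one recognizer."""
    action: Optional[str] = None                    # usually from speech
    obj: Optional[str] = None                       # referent of "this unit"
    location: Optional[Tuple[float, float]] = None  # usually from pen
    timestamp: float = 0.0


def interpret_speech(utterance: str, t: float) -> Frame:
    # Hypothetical stand-in for a speech recognizer plus parser;
    # resolving "this unit" to a concrete object is assumed here.
    if "move" in utterance:
        return Frame(action="move", obj="unit_7", timestamp=t)
    return Frame(timestamp=t)


def interpret_pen(x: float, y: float, t: float) -> Frame:
    # Hypothetical stand-in for a pen-gesture recognizer (a map tap).
    return Frame(location=(x, y), timestamp=t)


def fuse(a: Frame, b: Frame, window: float = 2.0) -> Optional[Frame]:
    """Unify two partial frames if they fall within one time window."""
    if abs(a.timestamp - b.timestamp) > window:
        return None  # too far apart in time to form one command
    merged = Frame(timestamp=min(a.timestamp, b.timestamp))
    for name in ("action", "obj", "location"):
        va, vb = getattr(a, name), getattr(b, name)
        if va is not None and vb is not None and va != vb:
            return None  # conflicting values: reject this pairing
        setattr(merged, name, va if va is not None else vb)
    return merged


# "Move this unit" plus a pen tap yields one complete command:
speech = interpret_speech("move this unit there", t=10.1)
pen = interpret_pen(42.0, 17.5, t=10.6)
print(fuse(speech, pen))
# Frame(action='move', obj='unit_7', location=(42.0, 17.5), timestamp=10.1)
```

Real systems of this kind align modality streams by time and score competing joint interpretations rather than accepting the first consistent pairing; the time-window check in `fuse` is the simplest stand-in for that step.
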
“…Multimodal applications are now being built in different application domains, including medicine [15], the military [5], and telecommunications [12]. For example, many mobile phones offer two input modalities for interaction: a keyboard and a voice recognizer.…”
Section: Introduction
Citation type: mentioning
confidence: 99%