Real World Speech Processing 2004
DOI: 10.1007/978-1-4757-6363-8_8

Speech and Language Processing for Multimodal Human-Computer Interaction

Cited by 3 publications (5 citation statements)
References 4 publications
“…For example, it was reported that a multimodal interface enhanced the overall system and user performance by judiciously adopting multiple modalities, as introduced in the studies (Oviatt et al., 2004; Deng et al., 2004). Thus, in addition to speech, we incorporated a GUI and a "push-to-talk" modality into our system design.…”
Section: Distribution of Meaning Transfer Rate in Successful or Unsuc… (mentioning)
confidence: 99%
“…Notable user modeling work includes the design and evaluation of multimodal interfaces (Oviatt et al., 2004; Dybkjaer et al., 2004; Deng et al., 2004), analysis of user behaviors (Oviatt et al., 2004; Shin et al., 2002), probabilistic user models (Eckert et al., 1997; Zukerman and Albrecht, 2001), utility-based models (Horvitz and Paek, 2001), knowledge-based models (Komatani et al., 2003), and user simulation (Levin et al., 2000; Eckert et al., 1997; Scheffler and Young, 2002). It should be noted that mediated interpersonal communication systems (e.g., S2S translation systems) have been used in a very limited way in this context.…”
Section: Introduction (mentioning)
confidence: 99%
“…1. the recogniser is provided with the speech input to be transcribed,
2. the entire input is transcribed (probably with errors) and presented to the user,
3. the user starts making manual corrections from the beginning, producing transcripts that will be (substantially) correct,
4. corrections to words that are missing from the system's vocabulary are dynamically added to the vocabulary,
5. corrections of mistakes other than out-of-vocabulary words are used for training the speech recogniser's speaker model,
6. the system re-transcribes the speech related to the parts that are not yet corrected by the user,
7. steps 3-6 are repeated until no more corrections are needed.…”
Section: Propagating User Corrections (mentioning)
confidence: 99%
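
The quoted passage describes an iterative correction-propagation loop. As a rough illustration only, here is a minimal Python sketch of that loop under stated assumptions: the Recogniser interface (transcribe, in_vocabulary, add_to_vocabulary, adapt_speaker_model), the segment-level Transcript structure, and the get_user_correction callback are hypothetical names introduced for this sketch, not APIs of the cited system.

```python
# Minimal sketch of the correction-propagation loop described in the quoted passage.
# All interfaces here are hypothetical stand-ins for a real ASR engine and UI.

from dataclasses import dataclass
from typing import Callable, List, Protocol


class Recogniser(Protocol):
    """Hypothetical recogniser interface assumed for this sketch."""
    def transcribe(self, audio_segments: List[bytes]) -> List[str]: ...
    def in_vocabulary(self, word: str) -> bool: ...
    def add_to_vocabulary(self, word: str) -> None: ...
    def adapt_speaker_model(self, audio: bytes, transcript: str) -> None: ...


@dataclass
class Transcript:
    segments: List[str]        # current hypothesis, one entry per audio segment
    corrected_upto: int = 0    # segments 0..corrected_upto-1 are user-verified


def propagate_corrections(recogniser: Recogniser,
                          audio_segments: List[bytes],
                          get_user_correction: Callable[[int, str], str]) -> Transcript:
    """Iteratively refine a transcript using user corrections.

    get_user_correction(index, hypothesis) is a hypothetical callback that returns
    the corrected text for segment `index`, or the hypothesis itself if it is correct.
    """
    # Steps 1-2: transcribe the entire input (possibly with errors).
    transcript = Transcript(segments=recogniser.transcribe(audio_segments))

    while transcript.corrected_upto < len(audio_segments):
        i = transcript.corrected_upto
        hypothesis = transcript.segments[i]

        # Step 3: the user corrects the transcript from the beginning.
        corrected = get_user_correction(i, hypothesis)
        transcript.segments[i] = corrected
        transcript.corrected_upto += 1

        if corrected != hypothesis:
            # Step 4: add out-of-vocabulary words found in the correction.
            for word in corrected.split():
                if not recogniser.in_vocabulary(word):
                    recogniser.add_to_vocabulary(word)

            # Step 5: use the correction to adapt the speaker model.
            recogniser.adapt_speaker_model(audio_segments[i], corrected)

            # Step 6: re-transcribe the parts not yet corrected by the user.
            remaining = audio_segments[transcript.corrected_upto:]
            transcript.segments[transcript.corrected_upto:] = recogniser.transcribe(remaining)

        # Step 7: the loop repeats until every segment has been confirmed.

    return transcript
```

The design point mirrored from the quote is that each accepted correction immediately improves the recogniser (vocabulary growth and speaker adaptation) before the remaining, uncorrected audio is re-transcribed, so later hypotheses benefit from earlier corrections.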
“…Transcription systems are wide and varied in their uses, which include dictation [23] as well as audio [8, 7], multimedia [6, 25, 18] and meeting indexing applications [24, 22]. However, since even the most sophisticated automatic speech recognition systems are far from perfect, transcriptions produced by their application often contain errors.…”
Section: Introduction (mentioning)
confidence: 99%