2002
DOI: 10.1613/jair.971

Automatically Training a Problematic Dialogue Predictor for a Spoken Dialogue System

Abstract: Spoken dialogue systems promise efficient and natural access to a large variety of information sources and services from any phone. However, current spoken dialogue systems are deficient in their strategies for preventing, identifying and repairing problems that arise in the conversation. This paper reports results on automatically training a Problematic Dialogue Predictor to predict problematic human-computer dialogues using a corpus of 4692 dialogues collected with the 'How May I Help You' (HMIHY) system…
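The approach the abstract describes, training a classifier to flag problematic dialogues from features observable in the interaction, can be sketched roughly as follows. The feature set, data, and logistic-regression learner here are illustrative assumptions, not the paper's actual features, corpus, or model:

```python
import math
import random

# Hypothetical per-dialogue features (invented for illustration, not the
# paper's feature set): [mean ASR confidence, reprompt count, help requests].
# Label 1 = problematic dialogue, 0 = successful dialogue.
random.seed(0)

def make_dialogue(problematic):
    """Generate one synthetic labeled dialogue feature vector."""
    if problematic:
        return [random.uniform(0.2, 0.6), random.randint(2, 5), random.randint(1, 3)], 1
    return [random.uniform(0.6, 0.95), random.randint(0, 1), random.randint(0, 1)], 0

data = [make_dialogue(i % 2 == 0) for i in range(200)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Train a logistic-regression predictor with plain stochastic gradient descent.
w = [0.0, 0.0, 0.0]
b = 0.0
lr = 0.1
for _ in range(500):
    for x, y in data:
        p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
        g = p - y  # gradient of log-loss w.r.t. the logit
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g

def predict(x):
    """Flag a dialogue as problematic when P(problematic) >= 0.5."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) >= 0.5

accuracy = sum(predict(x) == bool(y) for x, y in data) / len(data)
print(accuracy)
```

Because the synthetic classes are well separated (frequent reprompts and low ASR confidence mark problematic dialogues), the learned weights recover that separation; the real task in the paper works from a hand-labeled corpus rather than synthetic data.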


Cited by 54 publications (31 citation statements)
References 21 publications
“…The WOz setup used in this experiment, which replaced ASR and NLU components with a human wizard, also eliminated the majority of error recovery and clarification dialogues which characterise end-to-end systems (Walker et al 2002;McTear et al 2005;Litman et al 2006). Therefore, we need to ask whether the interactions collected in this corpus are realistic.…”
Section: Discussion
confidence: 99%
“…Hastie et al. (2002) predicted problematic dialogues from a series of DARPA Communicator dialogues according to user satisfaction rates, task completion predictors and some interaction-based features. Walker et al. (2002) presented their prediction model on the basis of information the system collected early in the dialogue and in real time. Oulasvirta et al. (2006) reported relations between users' satisfaction rates among the goal-level, concept-level, task-level and command-level, and captured a number of qualified user features.…”
Section: Dialog-level vs. Utterance-level
confidence: 99%
“…In [29], as in many other investigations, a subset of AT&T's How-May-I-Help-You (HMIHY) database [11] is used. This study differs from other work in that it addresses not only the classification of single utterances but also the annotation of entire dialogs.…”
Section: Classification of Emotion
confidence: 99%