2015 IEEE Workshop on Automatic Speech Recognition and Understanding (ASRU)
DOI: 10.1109/asru.2015.7404865
Multi-domain dialogue success classifiers for policy training

Abstract: We propose a method for constructing dialogue success classifiers that are capable of making accurate predictions in domains unseen during training. Pooling and adaptation are also investigated for constructing multi-domain models when data is available in the new domain. This is achieved by reformulating the features input to the recurrent neural network models introduced in [1]. Importantly, on our task of main interest, this enables policy training in a new domain without the dialogue success classifier (wh…

Cited by 15 publications (24 citation statements)
References 30 publications
“…It should be noted here that [20] followed a different approach, utilising a simulated user and allowing their model access to turn-by-turn returns; this type of feedback is not possible in our case since we are using un-annotated spoken dialogue data. Regardless of this, [20] propose three feature sets containing information about the user's dialogue act, the system's dialogue act, current turn number and belief state information. The major difference among the three sets lies in the belief state information: a) F is defined as the full belief, b) F28 contains no belief state information, and c) F74 contains the entropy of each slot in the belief state.…”
Section: Input Features
confidence: 99%
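The F74 feature set described above replaces the full belief state with the entropy of each slot's belief distribution. A minimal sketch of that idea, assuming each slot's belief is a probability distribution over its candidate values (the function and variable names here are illustrative, not from the paper):

```python
import math

def slot_entropy(belief):
    """Shannon entropy (in nats) of one slot's belief distribution."""
    return -sum(p * math.log(p) for p in belief if p > 0.0)

def entropy_features(belief_state):
    """One entropy value per slot, in a fixed (sorted) slot order,
    so the feature vector has a stable layout across turns."""
    return [slot_entropy(belief_state[slot]) for slot in sorted(belief_state)]

# Hypothetical belief state: the system is fairly sure about 'food'
# but maximally uncertain about 'area'.
belief_state = {
    "food": [0.7, 0.2, 0.1],
    "area": [0.25, 0.25, 0.25, 0.25],
}
feats = entropy_features(belief_state)
```

A high-entropy slot signals that the tracker is still uncertain about the user's constraint, which is a compact, domain-independent summary compared with feeding in the full belief (the F set).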
“…We here briefly describe the features proposed in [20] that we used as a benchmark for our system. It should be noted here that [20] followed a different approach, utilising a simulated user and allowing their model access to turn-by-turn returns; this type of feedback is not possible in our case since we are using un-annotated spoken dialogue data.…”
Section: Input Features
confidence: 99%
“…The SDS used to collect this corpus was produced by VocalIQ (Mrkšić et al., 2015). Users were again recruited via AMT, but interacted with this SDS via microphone using the Chrome browser.…”
Section: Corpora Creation
confidence: 99%