Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics 2020
DOI: 10.18653/v1/2020.acl-main.638

Multi-Domain Dialogue Acts and Response Co-Generation

Abstract: Generating fluent and informative responses is of critical importance for task-oriented dialogue systems. Existing pipeline approaches generally predict multiple dialogue acts first and use them to assist response generation. There are at least two shortcomings with such approaches. First, the inherent structures of multi-domain dialogue acts are neglected. Second, the semantic associations between acts and responses are not taken into account for response generation. To address these issues, we propose a neur…

Cited by 46 publications (38 citation statements). References 23 publications (31 reference statements).
“…Using this method, they train a domain-aware multi-decoder (DAMD) network to predict belief state, action, and response jointly. Since each agent response may cover multiple domains, acts, or slots at the same time, MarCo (Wang et al., 2020) learns to generate the response by attending over the predicted dialog act sequence at every step of decoding. SimpleTOD (Hosseini-Asl et al., 2020) and SOLOIST (Peng et al., 2020a) are both based on the GPT-2 (Radford et al., 2019) architecture.…”
Section: Experimental Framework
confidence: 99%
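The act-attentive decoding described in the statement above can be sketched as a single dot-product attention step over the embeddings of the predicted dialog acts. This is a minimal illustrative simplification, not MarCo's actual architecture; all names, dimensions, and the example act labels are invented for the sketch.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attend_over_acts(decoder_state, act_embeddings):
    """One decoding step: attend over the predicted dialog-act sequence.

    decoder_state:  (d,)          current decoder hidden state
    act_embeddings: (num_acts, d) embeddings of predicted act tokens
    Returns the act context vector to mix into this step's state,
    plus the attention weights for inspection."""
    scores = act_embeddings @ decoder_state      # (num_acts,) dot-product scores
    weights = softmax(scores)                    # attention distribution over acts
    context = weights @ act_embeddings           # (d,) weighted sum of act embeddings
    return context, weights

rng = np.random.default_rng(0)
d = 8
# hypothetical predicted acts, e.g. [hotel-inform, hotel-request, general-greet]
acts = rng.normal(size=(3, d))
state = rng.normal(size=(d,))
ctx, w = attend_over_acts(state, acts)
```

Because the attention is recomputed at every decoding step, different parts of the response can focus on different acts, which is the point the citing paper draws from MarCo.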
“…Past research and user studies have also shown that hierarchy is an important aspect of human conversation (Jurafsky, 2000). But most previous transformer-based works have focused on training models either as language models (Budzianowski and Vulić, 2019) or as standard (non-hierarchical) Seq2Seq models (Zhang et al., 2020a; Wang et al., 2020) with certain task-specific extensions. Although the self-attention mechanism might arguably learn such a scheme automatically during training, our empirical results show that forcing this inductive bias by manual design as proposed here leads to better-performing models.…”
Section: Introduction
confidence: 99%
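The hierarchical inductive bias mentioned above, encoding tokens within each utterance first and only then modeling interactions across utterances, can be sketched with two levels of pooling and self-attention. This is a generic illustration of the idea, not the citing paper's model; all shapes and names are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def hierarchical_encode(dialogue):
    """dialogue: list of utterances, each a (num_tokens, d) array of token vectors.

    Level 1 (within-utterance): pool each utterance to a single vector.
    Level 2 (across-utterances): scaled dot-product self-attention over
    the utterance vectors, so context flows only at the dialogue level."""
    utt_vecs = np.stack([u.mean(axis=0) for u in dialogue])        # (T, d)
    d = utt_vecs.shape[1]
    scores = (utt_vecs @ utt_vecs.T) / np.sqrt(d)                  # (T, T)
    return softmax(scores, axis=-1) @ utt_vecs                     # (T, d)

rng = np.random.default_rng(0)
d = 8
# three utterances of different lengths
dialogue = [rng.normal(size=(n, d)) for n in (5, 3, 7)]
enc = hierarchical_encode(dialogue)
```

A flat (non-hierarchical) model would instead concatenate all tokens and let self-attention discover utterance boundaries on its own, which is exactly the design choice the citing paper argues against.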
“…• MarCo (Wang et al., 2020b) extends the idea of HDSA and considers hierarchical dialog acts; the difference is that it co-generates the dialog act sequence and the response jointly.…”
Section: Baseline and State-of-the-art Models
confidence: 99%
“…In this method, reinforcement learning is performed only on those discrete latent variables; thus, policy optimization is achieved without affecting the language generation. However, LaRL relies on a single vector from the beginning to the end of response generation, even though a response often contains more than one dialog act and content (Wang et al., 2020b). As a result, a static, global vector tends to become an entangled representation of multiple dialog acts, sentence structure, and content.…”
Section: Introduction
confidence: 99%
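The limitation described above, a single discrete latent action whose embedding is reused at every decoding step, can be made concrete with a short sketch. This is a schematic illustration of the LaRL-style setup as characterized by the citing paper, not the actual implementation; the codebook, sizes, and names are all invented.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def sample_latent_action(logits, rng):
    """Sample one discrete latent action from a categorical policy.
    RL would optimize these logits; generation is conditioned on the sample."""
    probs = softmax(logits)
    z = rng.choice(len(probs), p=probs)
    return z, probs

rng = np.random.default_rng(1)
K, d, steps = 4, 8, 5                      # latent actions, embedding dim, decode steps
codebook = rng.normal(size=(K, d))         # one embedding per discrete latent action
z, probs = sample_latent_action(rng.normal(size=K), rng)

# The same global vector conditions every decoding step, start to end --
# it must therefore encode acts, structure, and content all at once.
global_vec = codebook[z]
step_inputs = [global_vec for _ in range(steps)]
```

Contrast this with per-step attention over a predicted act sequence, where the conditioning signal can change from step to step instead of being a single entangled vector.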