Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), 2017
DOI: 10.18653/v1/P17-2083

A Generative Attentional Neural Network Model for Dialogue Act Classification

Abstract: We propose a novel generative neural network architecture for Dialogue Act classification. Building upon the Recurrent Neural Network framework, our model incorporates a new attentional technique and a label-to-label connection for sequence learning, akin to Hidden Markov Models. Our experiments show that both of these innovations enable our model to outperform strong baselines for dialogue-act classification on the MapTask and Switchboard corpora. In addition, we analyse empirically the effectiveness of each …
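
The abstract names three ingredients: a recurrent utterance encoder, an attentional pooling step, and an HMM-like label-to-label connection that conditions each prediction on the previous dialogue-act label. The following is a minimal PyTorch sketch of how those pieces can fit together; it is a discriminative simplification, not the authors' generative architecture, and every name and dimension here (AttentionalDAClassifier, the hidden sizes, the 42-label output) is an illustrative assumption.

import torch
import torch.nn as nn

class AttentionalDAClassifier(nn.Module):
    """Illustrative only: RNN encoder + attention + previous-label input."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden_dim=128, n_labels=42):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.rnn = nn.GRU(emb_dim, hidden_dim, batch_first=True)
        self.attn = nn.Linear(hidden_dim, 1)                   # one score per token
        self.label_embed = nn.Embedding(n_labels, hidden_dim)  # label-to-label connection
        self.out = nn.Linear(2 * hidden_dim, n_labels)

    def forward(self, tokens, prev_label):
        # tokens: (batch, seq_len) word ids; prev_label: (batch,) previous DA tag ids
        h, _ = self.rnn(self.embed(tokens))                        # (batch, seq_len, hidden)
        weights = torch.softmax(self.attn(h).squeeze(-1), dim=1)   # attention over tokens
        context = (weights.unsqueeze(-1) * h).sum(dim=1)           # attention-pooled utterance
        prev = self.label_embed(prev_label)                        # condition on previous label
        return self.out(torch.cat([context, prev], dim=-1))        # logits over DA tags

model = AttentionalDAClassifier()
logits = model(torch.randint(0, 5000, (2, 12)), torch.tensor([3, 7]))
print(logits.shape)  # torch.Size([2, 42])

Feeding the previous label into the current prediction is what makes the model "akin to Hidden Markov Models": it learns label-transition regularities alongside the lexical evidence.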

Cited by 21 publications (13 citation statements). References 10 publications (13 reference statements).

“…• ADC [47] and PDI [45] methods are acoustic and discourse classification approaches based on HMM and SVM. • The GAN [48] method is a generative neural network that incorporates an attention technique and a label-to-label connection. Table III and Table IV…”
Section: Results and Analysis
confidence: 99%
“…Certainly, both paradigms are motivated by the sound reasoning that the constituent words, and their order within the sentence, are both key to interpreting its meaning, and hence both have been extensively explored within the literature. For example, Ahmadvand et al. (2019), Liu and Lane (2017), Ortega and Vu (2017), Rojas-Barahona et al. (2016), and Kalchbrenner and Blunsom (2013), all use variations of convolutional models as sentence encoders, while Li et al. (2018), Papalampidi et al. (2017), Tran et al. (2017a) and Cerisara et al. (2017), all employed recurrent architectures. Lee and Dernoncourt (2016) experimented with both convolutional and recurrent sentence encoders on several different corpora and found that neither approach was superior in all cases.…”
Section: Supervised Encoders
confidence: 99%
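
The statement above contrasts convolutional and recurrent sentence encoders. A hedged sketch of the two families follows, assuming illustrative PyTorch modules and dimensions (none of these names or sizes come from the cited papers): both map a token-id sequence to a fixed-size sentence vector, differing only in how they aggregate over time.

import torch
import torch.nn as nn

class ConvEncoder(nn.Module):
    def __init__(self, vocab=5000, emb=100, channels=128, kernel=3):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.conv = nn.Conv1d(emb, channels, kernel, padding=1)

    def forward(self, tokens):                  # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)  # (batch, emb, seq_len) for Conv1d
        return torch.relu(self.conv(x)).max(dim=2).values  # max-pool over time

class RecurrentEncoder(nn.Module):
    def __init__(self, vocab=5000, emb=100, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.rnn = nn.LSTM(emb, hidden, batch_first=True)

    def forward(self, tokens):
        _, (h, _) = self.rnn(self.embed(tokens))
        return h[-1]                            # final hidden state as sentence vector

tokens = torch.randint(0, 5000, (4, 20))
print(ConvEncoder()(tokens).shape, RecurrentEncoder()(tokens).shape)

The convolutional encoder pools local n-gram features with a max over time, while the recurrent encoder summarises the whole sequence in its final hidden state; this is the trade-off behind the citing authors' observation that neither family wins in all cases.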
“…The authors do not define any training and test data split for the Maptask corpus; we randomly split the 128 dialogues into 3 parts. The training set comprises 80% of the dialogues (102), and the test and validation sets 10% each (13), which is similar to proportions used in previous studies (Tran et al. 2017a; Tran et al. 2017b).…”
Section: Maptask
confidence: 99%
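
A small Python sketch of the dialogue-level split described above: 128 Maptask dialogues divided 80/10/10 into 102 train, 13 validation, and 13 test. The dialogue identifiers and the seed are placeholders; the citing authors do not publish their exact split.

import random

dialogues = [f"dialogue_{i:03d}" for i in range(128)]  # placeholder ids
random.seed(0)             # fix the seed so the split is reproducible
random.shuffle(dialogues)

train, val, test = dialogues[:102], dialogues[102:115], dialogues[115:]
print(len(train), len(val), len(test))  # 102 13 13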
“…Lai et al. (2019) introduce the Gated Self-Attention Memory Network (GSAMN). It combines gated attention (Dhingra et al., 2017; Tran et al., 2017), memory networks (Sukhbaatar et al., 2015) and self-attention (Vaswani et al., 2017) in one model. The authors use transfer learning with their Stack Exchange QA dataset.…”
Section: Related Work
confidence: 99%
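
For context on the gated-attention ingredient mentioned above, here is an illustrative sketch in the spirit of Dhingra et al. (2017): each document token attends over the query tokens and is then gated by an element-wise product with its query summary. This is an assumption-laden simplification, not the GSAMN formulation of Lai et al. (2019).

import torch

def gated_attention(doc, query):
    # doc: (batch, doc_len, dim); query: (batch, query_len, dim)
    scores = torch.einsum("bid,bjd->bij", doc, query)       # pairwise dot-product scores
    weights = torch.softmax(scores, dim=2)                  # each doc token attends over query
    q_tilde = torch.einsum("bij,bjd->bid", weights, query)  # per-token query summary
    return doc * q_tilde                                    # element-wise (Hadamard) gate

gated = gated_attention(torch.randn(2, 5, 8), torch.randn(2, 4, 8))
print(gated.shape)  # torch.Size([2, 5, 8])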