2021
DOI: 10.1017/s1351324921000310

Sentence encoding for Dialogue Act classification

Abstract: In this study, we investigate the process of generating single-sentence representations for the purpose of Dialogue Act (DA) classification, including several aspects of text pre-processing and input representation which are often overlooked or underreported within the literature, for example, the number of words to keep in the vocabulary or input sequences. We assess each of these with respect to two DA-labelled corpora, using a range of supervised models, which represent those most frequently applied to the …
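The abstract singles out vocabulary size and input-sequence length as pre-processing choices that are often underreported. The sketch below illustrates what those two choices look like in practice; the cap values and helper names are hypothetical and this is not the authors' code.

```python
# Illustrative sketch (not the paper's implementation): capping the vocabulary and
# fixing the input sequence length before feeding utterances to a DA classifier.
from collections import Counter

VOCAB_SIZE = 10000   # hypothetical cap on the number of words kept
MAX_LEN = 25         # hypothetical cap on input sequence length
PAD, UNK = 0, 1      # reserved indices for padding and out-of-vocabulary words

def build_vocab(utterances, vocab_size=VOCAB_SIZE):
    """Keep only the vocab_size most frequent words; everything else maps to UNK."""
    counts = Counter(w for u in utterances for w in u.lower().split())
    most_common = [w for w, _ in counts.most_common(vocab_size - 2)]
    return {w: i + 2 for i, w in enumerate(most_common)}

def encode(utterance, vocab, max_len=MAX_LEN):
    """Map words to indices, then truncate or right-pad to a fixed length."""
    ids = [vocab.get(w, UNK) for w in utterance.lower().split()][:max_len]
    return ids + [PAD] * (max_len - len(ids))

corpus = ["uh-huh", "okay so what do you think", "i think that is right"]
vocab = build_vocab(corpus)
x = [encode(u, vocab) for u in corpus]   # fixed-size integer inputs for a DA classifier
```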

Cited by 7 publications (7 citation statements)
References 61 publications (230 reference statements)
“…utilize attention mechanisms and time-series LSTM for act recognition. Duran et al. [14], during the encoding phase, integrate act and sentence encoding using BERT, systematically comparing and validating various encoding mechanisms' performance differences in act classification. Malhotra et al. [15]…”
Section: Based On Sequence Labeling (mentioning)
confidence: 99%
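As a hedged illustration of the sentence-encoding step mentioned in the statement above, the following sketch extracts a single-utterance representation from a pre-trained BERT model through the Hugging Face transformers API. It is an assumed minimal setup, not the code of Duran et al. or Malhotra et al.

```python
# Minimal sketch, assuming the Hugging Face `transformers` package is installed;
# illustrates single-utterance BERT encoding, not the cited papers' implementation.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")

def encode_utterance(text: str) -> torch.Tensor:
    """Return the [CLS] vector as a fixed-size sentence representation."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=128)
    with torch.no_grad():
        outputs = encoder(**inputs)
    return outputs.last_hidden_state[:, 0, :]   # shape: (1, hidden_size)

sentence_vec = encode_utterance("okay so what do you think")
# A dialogue-act classifier head (e.g., a linear layer over sentence_vec) would follow.
```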
“…Context is either modeled through a condensed vector of contextual cues (e.g., Chen et al., 2018; Colombo et al., 2020; Kumar et al., 2018; Raheja & Tetreault, 2019) or directly in the neural network structure with a node for each utterance (e.g., Cerisara et al., 2018; Kalchbrenner & Blunsom, 2013; Ortega & Vu, 2017; Ribeiro et al., 2019b). Some studies that used deep learning for dialog act classification did not include contextual cues (e.g., Duran & Battle, 2018; Duran et al., 2023; Khanpour et al., 2016; Ribeiro et al., 2019a), but most encoded context through a condensed surface encoding of the previous utterances (e.g., Chen et al., 2018; Kumar et al., 2018; Yano et al., 2021; Zhao & Kawahara, 2019). Most deep learning models have regarded dialog act classification as classifying a sequence of dialog acts, without paying attention to the speaker or the structure of utterances into turns.…”
Section: Approaches To Dialog Act Classification (mentioning)
confidence: 99%
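As an illustration of the first strategy described above (a condensed vector of contextual cues), the hypothetical sketch below mean-pools the encodings of the previous utterances and concatenates the result with the current utterance's encoding. The dimensions, pooling choice, and class names are assumptions, not drawn from the cited studies.

```python
# Illustrative sketch (assumed, not from any cited study): conditioning a dialogue-act
# classifier on a condensed vector built from the previous utterances' encodings.
import torch
import torch.nn as nn

class ContextualDAClassifier(nn.Module):
    def __init__(self, sent_dim: int, num_acts: int):
        super().__init__()
        # Current utterance vector concatenated with a mean-pooled context vector.
        self.classifier = nn.Linear(sent_dim * 2, num_acts)

    def forward(self, current: torch.Tensor, history: torch.Tensor) -> torch.Tensor:
        # current: (batch, sent_dim); history: (batch, n_prev_utts, sent_dim)
        context = history.mean(dim=1)            # condensed vector of contextual cues
        return self.classifier(torch.cat([current, context], dim=-1))

model = ContextualDAClassifier(sent_dim=768, num_acts=42)
logits = model(torch.randn(2, 768), torch.randn(2, 3, 768))   # (2, 42)
```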
“…Similarly, many different contextual aspects, ranging from predicted speaker intentions of previous utterances to cues on who is or was the speaker of the current or previous utterances, have been successfully used. But while all studies make use of one or more surface cues, not all studies make use of contextual cues (e.g., Ang et al., 2005; Duran & Battle, 2018; Duran et al., 2023; Novielli & Strapparava, 2009).…”
Section: Identifying Cues In Existing Dialog Act Classification Studies (mentioning)
confidence: 99%
“…Most existing studies on dialogue classification focus on sentence-level or utterance-level intent recognition of user statements [10]. These studies commonly employ hierarchical neural networks to model the sequential and structural information within words, characters, and utterances [11].…”
Section: Dialogue Classification (mentioning)
confidence: 99%
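A hypothetical sketch of the hierarchical architecture described above, with a word-level encoder producing utterance vectors that an utterance-level encoder then models in sequence; all layer sizes and names are illustrative assumptions, not taken from the cited work.

```python
# Sketch of a hierarchical encoder of the kind the statement describes (illustrative only):
# a word-level GRU yields utterance vectors, and an utterance-level GRU models their sequence.
import torch
import torch.nn as nn

class HierarchicalDAModel(nn.Module):
    def __init__(self, vocab_size: int, emb_dim: int, hid_dim: int, num_acts: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.word_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)   # within an utterance
        self.utt_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)    # across utterances
        self.out = nn.Linear(hid_dim, num_acts)

    def forward(self, dialogue: torch.Tensor) -> torch.Tensor:
        # dialogue: (batch, num_utts, num_words) of word indices
        b, u, w = dialogue.shape
        words = self.embed(dialogue.view(b * u, w))       # (b*u, w, emb_dim)
        _, utt_vecs = self.word_rnn(words)                # (1, b*u, hid_dim)
        utt_vecs = utt_vecs.squeeze(0).view(b, u, -1)     # (b, u, hid_dim)
        states, _ = self.utt_rnn(utt_vecs)                # (b, u, hid_dim)
        return self.out(states)                           # per-utterance act logits

model = HierarchicalDAModel(vocab_size=10000, emb_dim=64, hid_dim=128, num_acts=42)
logits = model(torch.randint(0, 10000, (2, 5, 25)))       # (2, 5, 42)
```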