2021 IEEE Spoken Language Technology Workshop (SLT)
DOI: 10.1109/slt48900.2021.9383584
Large-Context Conversational Representation Learning: Self-Supervised Learning For Conversational Documents

Cited by 2 publications (8 citation statements). References 26 publications.
“…Utterance-level dialogue sequence labeling is being used for topic segmentation [7][8][9], dialogue act estimation [10][11][12][13][14][15], and call scene segmentation [16][17][18]. Hierarchically structured models consisting of utterance-level and dialogue-level neural networks are often used to efficiently capture contexts within an utterance and between utterances, and an effective self-supervised pretraining method has been proposed [18]. If a hierarchical model is used for dialogue sequence labeling, a large number of parameters are needed to train a model that offers high accuracy.…”
Section: Utterance-level Dialogue Sequence Labeling (mentioning)
confidence: 99%
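The hierarchical structure this statement describes can be illustrated with a minimal sketch, assuming BiLSTM encoders at both levels; the class name, dimensions, and mean-pooling choice below are illustrative assumptions, not the cited papers' exact architecture:

```python
import torch
import torch.nn as nn

class HierarchicalDialogueLabeler(nn.Module):
    """Hypothetical sketch of a hierarchical dialogue sequence labeler:
    an utterance-level encoder captures context within each utterance,
    and a dialogue-level encoder captures context between utterances
    before a per-utterance label (e.g. a call scene) is predicted."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, num_labels=5):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # Utterance-level BiLSTM: context within an utterance.
        self.utt_enc = nn.LSTM(embed_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        # Dialogue-level BiLSTM: context between utterances.
        self.dlg_enc = nn.LSTM(2 * hidden_dim, hidden_dim,
                               batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_labels)

    def forward(self, token_ids):
        # token_ids: (num_utterances, max_tokens) for one dialogue.
        emb = self.embed(token_ids)
        utt_out, _ = self.utt_enc(emb)
        # Mean-pool token states into one vector per utterance (an
        # assumed pooling choice; attention pooling is also common).
        utt_vecs = utt_out.mean(dim=1)                    # (num_utts, 2H)
        dlg_out, _ = self.dlg_enc(utt_vecs.unsqueeze(0))  # (1, num_utts, 2H)
        return self.classifier(dlg_out.squeeze(0))        # (num_utts, num_labels)
```

The two stacked encoders also explain the parameter-count concern raised above: each level carries its own recurrent weights, so accuracy gains come at the cost of a larger model.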
“…When self-supervised pretraining [18] is utilized, parameters {θ_w, θ_r, θ_s, θ_u} are initialized by pretraining using unlabeled data, and then parameters Θ are optimized with L_HT in the same way as above.…”
Section: Utterance-level Dialogue Sequence Labeling (mentioning)
confidence: 99%
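A minimal sketch of the initialization-then-optimization flow this statement describes, assuming a PyTorch model such as the one above; the function and loader names are hypothetical, and standard cross-entropy stands in for the paper's L_HT loss:

```python
import torch

def finetune(model, pretrained_state, labeled_loader, epochs=3):
    # Copy the pretrained encoder parameters (the theta_w, theta_r,
    # theta_s, theta_u groups) into the model; strict=False leaves the
    # task-specific classifier head randomly initialized.
    model.load_state_dict(pretrained_state, strict=False)
    # Then optimize the full parameter set Theta on labeled dialogues.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    loss_fn = torch.nn.CrossEntropyLoss()  # stands in for L_HT
    for _ in range(epochs):
        for token_ids, labels in labeled_loader:
            optimizer.zero_grad()
            logits = model(token_ids)
            loss = loss_fn(logits, labels)
            loss.backward()
            optimizer.step()
```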