2021 · Preprint
DOI: 10.48550/arxiv.2112.05787
Representation Learning for Conversational Data using Discourse Mutual Information Maximization

Abstract: (not captured in this excerpt; the line lists author contact addresses: bsantraigi@gmail.com, gmanish@microsoft.com, pawang@cse.iitkgp.ac.in)

Cited by 1 publication (3 citation statements) · References 21 publications
“…Such a measure is learned by retrieval models such as those of Chen and Wang [2019], Henderson et al. [2020], and Santra et al. [2021]. The main idea of the CORAL loss is to optimize a measure of the compatibility between the context and a candidate response. For our implementation, we have used a response retrieval model to measure this compatibility.…”
Section: The CORAL Loss Function
confidence: 99%
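The statement above describes the core of CORAL: score a candidate response by its compatibility with the context (as judged by a trained retrieval model) and optimize against that score. A minimal sketch of that idea follows; `retrieval_score` is a hypothetical stand-in for the actual retrieval model (ESIM, BERT, or DMI in the cited work), replaced here by a toy token-overlap measure, and `coral_style_loss` uses a simple REINFORCE-style weighting rather than the paper's exact formulation.

```python
def retrieval_score(context: str, response: str) -> float:
    """Hypothetical stand-in for a trained context-response retrieval model.

    The cited work trains a real model (ESIM / BERT / DMI) for this; here we
    use a toy token-overlap compatibility in [0, 1] purely for illustration.
    """
    ctx_tokens = set(context.lower().split())
    resp_tokens = set(response.lower().split())
    return len(ctx_tokens & resp_tokens) / max(len(resp_tokens), 1)


def coral_style_loss(context: str, candidate: str, log_prob: float) -> float:
    """REINFORCE-style sketch of a compatibility-weighted objective.

    `log_prob` is the generator's log-likelihood of `candidate`; scaling its
    negative by the retrieval score rewards high-compatibility responses.
    """
    return -retrieval_score(context, candidate) * log_prob
```

For example, a candidate sharing one of four tokens with the context gets score 0.25, so its negative log-likelihood is down-weighted accordingly; a fully on-topic candidate keeps its full gradient signal.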
“…• Retrieval model: there is also the choice of the retrieval model used for measuring R³. Here we have chosen ESIM [Chen and Wang, 2019], BERT [Devlin et al., 2019], and DMI [Santra et al., 2021] as the base models for training the response retrieval (i.e., R³ reward) models.…”
Section: Hyperparameters of CORAL
confidence: 99%