2015
DOI: 10.48550/arxiv.1506.06714
Preprint
A Neural Network Approach to Context-Sensitive Generation of Conversational Responses

Cited by 88 publications (135 citation statements)
References 0 publications
“…Our work is also closely related to recent successes in applying language models to dialog modeling (e.g., [25,26,17,18]), which built on earlier research in neural dialog modeling (e.g., [14,15,16,27,28]). One of our fine-tuning stages requires training on dialog-only data, which is related to Wolf et al [29], Dinan et al [25] and Zhang et al [30].…”
Section: Related Work
confidence: 82%
“…Our approach is inspired by Adiwardana et al [17], who argued for human-like metrics, such as sensibleness and specificity. Many automated metrics for dialog models have been studied, including perplexity [16,17], F1, Hits@1/N [25], USR [46], or BLEU/ROUGE [47,15,27]. However, such automated metrics may not correlate well with human judgment [48].…”
Section: Related Work
confidence: 99%
“…The OpenSubtitle dataset is a large-scale dataset containing a total of 3.35G sentence fragments extracted from the OpenSubtitle website, while the Cornell Movie-Dialogs Corpus contains a collection of movie conversations extracted from raw movie scripts. For simulating social conversation, there are PersonaChat (Zhang et al, 2018) and the Twitter Triple Corpus (Sordoni et al, 2015). The Twitter Triple Corpus consists of 4,232 Twitter conversation triples selected from 33K candidate triples by human raters.…”
Section: Introduction
confidence: 99%