2020
DOI: 10.1016/j.csl.2020.101072

Sequential neural networks for noetic end-to-end response selection

Abstract: The noetic end-to-end response selection challenge, one track in the 7th Dialog System Technology Challenges (DSTC7), aims to push the state of the art of utterance classification for real-world goal-oriented dialog systems: participants need to select the correct next utterances from a set of candidates given a multi-turn context. This paper presents our systems, which ranked first on both datasets under this challenge, one focused and small (Advising) and the other more diverse and large (Ubuntu…

Cited by 20 publications (22 citation statements)
References 26 publications (65 reference statements)
“…ally have three modules: encoding, matching and aggregation (Lowe et al., 2015; Zhou et al., 2016; Wu et al., 2017; Zhou et al., 2018b; Zhang et al., 2018b; Chen and Wang, 2019; Feng et al., 2019; Yuan et al., 2019). The encoding module encodes text into vector representations using encoders such as LSTM, Transformer, or BERT.…”
Section: Persona-based Conversational Models
confidence: 99%
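The statement above describes the common encode–match–aggregate design for response selection. Below is a minimal PyTorch sketch of that three-module pipeline, assuming a shared bi-LSTM encoder and dot-product cross-attention for matching; the class name ResponseSelector and all hyperparameters are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the encode / match / aggregate pipeline for
# response selection; names and sizes are illustrative, not from the paper.
import torch
import torch.nn as nn

class ResponseSelector(nn.Module):
    def __init__(self, vocab_size, emb_dim=300, hidden=300):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Encoding module: a bi-LSTM stands in for LSTM/Transformer/BERT.
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True,
                               bidirectional=True)
        # Aggregation module: score the fused matching features.
        self.scorer = nn.Linear(4 * 2 * hidden, 1)

    def forward(self, context_ids, response_ids):
        c, _ = self.encoder(self.embed(context_ids))     # (B, Tc, 2H)
        r, _ = self.encoder(self.embed(response_ids))    # (B, Tr, 2H)
        # Matching module: cross-attention from context to response.
        att = torch.softmax(c @ r.transpose(1, 2), dim=-1)  # (B, Tc, Tr)
        c_hat = att @ r                                      # aligned response
        # Aggregate: enhance, then max-pool over time to a fixed vector.
        feats = torch.cat([c, c_hat, c - c_hat, c * c_hat], dim=-1)
        pooled = feats.max(dim=1).values                     # (B, 8H)
        return self.scorer(pooled).squeeze(-1)               # matching score
```

At inference, each candidate response is scored against the context and the highest-scoring candidate is selected.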
“…Advising-3 DailyDialog Train Data MRR R@1 R@10 MAP R@1 R@10 MAP R@1 R@10 Oracle ESIM (Chen and Wang, 2019) 0…”
Section: Datasets
confidence: 99%
“…Besides interaction representation for utterances, we consider the response as a part of the context, and then encode the response together with the utterances. We first concatenate all utterances as the context [20], i.e., C = [U_1, …”
Section: Matching Aggregation
confidence: 99%
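As a concrete illustration of the concatenation step quoted above, here is a minimal sketch, assuming each utterance and the response are already embedded as (T_i, D) tensors; build_context is a hypothetical helper, not the implementation from [20].

```python
# Hypothetical sketch: treat the response as part of the context by
# concatenating all utterances and the response along the time axis,
# C = [U_1, ..., U_n, R]. Shapes and names are illustrative.
import torch

def build_context(utterances, response):
    """utterances: list of (T_i, D) tensors; response: (T_r, D) tensor."""
    return torch.cat(utterances + [response], dim=0)  # (sum_i T_i + T_r, D)

# Example: three utterances of lengths 5, 7, 4 and a response of length 6.
us = [torch.randn(t, 300) for t in (5, 7, 4)]
r = torch.randn(6, 300)
context = build_context(us, r)  # shape (22, 300)
```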
“…After generating the interaction representations U_k and R, we further enhance the matching information by computing the difference (subtraction) [21], [22] and the element-wise multiplication [20], [23], [24] with the self-aggregated representations U_k and R. The various matching signals from the utterances and the response are fused into the final aggregated representation using the matching aggregation mechanism, which is formulated as:…”
Section: Matching Aggregation
confidence: 99%
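The enhancement step quoted above (difference and element-wise product against self-aggregated representations, then fusion) could look like the following sketch; fuse_matching, the tanh nonlinearity, and the linear fusion layer are assumptions for illustration, not the cited formulation.

```python
# Hypothetical sketch of the matching-aggregation step: enhance an
# interaction representation with its difference and element-wise product
# against the self-aggregated representation, then fuse with a linear layer.
import torch
import torch.nn as nn

def fuse_matching(u_inter, u_self, fuse: nn.Linear):
    feats = torch.cat([u_inter, u_self,
                       u_inter - u_self,   # difference (subtraction)
                       u_inter * u_self],  # element-wise multiplication
                      dim=-1)
    return torch.tanh(fuse(feats))         # final aggregated representation

# Example: hidden size 256, so the fused input is 4 * 256 wide.
fuse = nn.Linear(4 * 256, 256)
u_inter = torch.randn(8, 256)
u_self = torch.randn(8, 256)
m = fuse_matching(u_inter, u_self, fuse)   # (8, 256)
```

The same fusion would be applied to the response representation R and its self-aggregated counterpart before the final scoring step.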