Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)
DOI: 10.18653/v1/2020.emnlp-main.538
Continuity of Topic, Interaction, and Query: Learning to Quote in Online Conversations

Abstract: Quotations are crucial for successful explanations and persuasions in interpersonal communications. However, finding what to quote in a conversation is challenging for both humans and machines. This work studies automatic quotation generation in an online conversation and explores how language consistency affects whether a quotation fits the given context. Here, we capture the contextual consistency of a quotation in terms of latent topics, interactions with the dialogue history, and coherence to the query tur…

Cited by 10 publications
(11 citation statements)
References 28 publications
“…Datasets. We conduct experiments based on datasets from two different platforms, Weibo and Reddit, released by Wang et al. (2020). To make our experimental results comparable to Wang et al. (2020), we utilize their preprocessed data directly.…”
Section: Methods
confidence: 99%
“…4) CTIQ. The SOTA model (Wang et al., 2020), which employs an encoder-decoder framework enhanced by a Neural Topic Model to continue the context with a quotation via language generation. 5) BERT.…”
Section: Models
confidence: 99%
“…Figure 1 shows two examples of narratives for a proverb from our dataset, along with corresponding alignment annotations. We diverge from related extant resources (Wang et al., 2020; Tan et al., 2015, 2016) on using proverbs in terms of quality of narratives, direct supervision and having fine-grained alignment annotations. 2 We explore three tasks: (1) proverb retrieval (§ 5.1) and alignment prediction, (2) narrative generation for a given proverb and a set of keywords specifying a topic (§ 5.2), and (3) discovering narratives with similar motifs (§ 5.3).…”
Section: Narrative (N2) Narrative (N1) Proverb (P)
confidence: 95%
“…Prior work has explored recommending Chinese idioms as context-based recommendation (Liu et al., 2019b) or as cloze-style reading comprehension tasks (Zheng et al., 2019). Learning to quote has been explored based on fiction (Tan et al., 2015, 2016) and noisy social media conversations from Twitter, Reddit or Weibo (Lee et al., 2016; Wang et al., 2020). In the most related prior work, authors explore a quote retrieval task borrowing inspiration from context-based recommendation systems (Huang et al., 2012; He et al., 2010).…”
Section: Related Work
confidence: 99%