Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), 2017
DOI: 10.18653/v1/p17-1122

Data-Driven Broad-Coverage Grammars for Opinionated Natural Language Generation (ONLG)

Abstract: Opinionated natural language generation (ONLG) is a new, challenging NLG task in which we aim to automatically generate human-like, subjective responses to opinionated articles online. We present a data-driven architecture for ONLG that generates subjective responses triggered by users' agendas, based on automatically acquired wide-coverage generative grammars. We compare three types of grammatical representations that we design for ONLG. The grammars interleave different layers of linguistic information, an…

Cited by 6 publications (5 citation statements) | References 22 publications
“…For example, for a sad story, someone may respond with sympathy (as a friend), someone may feel angry (as an irritable stranger), yet someone else may be happy (as an enemy). Flexible emotion interactions between a post and a response are an important difference from the previous studies (Hu et al. 2017; Ghosh et al. 2017; Cagan, Frank, and Tsarfaty 2017), which use the same emotion or sentiment for the response as that in the input post.…”
Section: Task Definition and Overview
confidence: 85%
“…The Affect Language Model was proposed by Ghosh et al. (2017) to generate text conditioned on context words and affect categories. Cagan, Frank, and Tsarfaty (2017) incorporated grammar information to generate comments for a document using sentiment and topics. Our work differs in two main aspects: 1) prior studies depend heavily on linguistic tools or customized parameters for text generation, while our model is fully data-driven without any manual adjustment; 2) prior studies are unable to model multiple emotion interactions between the input post and the response; instead, the generated text simply continues the emotion of the leading context.…”
Section: Related Work
confidence: 99%
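
To make the conditioning idea in the statement above concrete, here is a minimal, hypothetical Python sketch of control-token conditioning, in the spirit of affect- or sentiment-conditioned generation; the token format, category set, and function name are assumptions for illustration, not the implementation of any cited paper.

```python
# Minimal sketch (hypothetical): condition a text generator on an affect
# category by prepending a control token to the decoder input, so every
# generation step can attend to the target affect. Illustrative only;
# not the method of Ghosh et al. (2017) or Cagan, Frank, and Tsarfaty (2017).

AFFECT_CATEGORIES = {"joy", "sadness", "anger", "neutral"}  # assumed label set

def build_conditioned_input(affect, context_tokens):
    """Prepend a control token encoding the target affect category."""
    if affect not in AFFECT_CATEGORIES:
        raise ValueError(f"unknown affect category: {affect}")
    return [f"<affect={affect}>"] + list(context_tokens)

# Usage: the same context yields different decoder inputs per target affect.
print(build_conditioned_input("joy", ["what", "a", "day"]))
print(build_conditioned_input("anger", ["what", "a", "day"]))
```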
“…For psycholinguistics, Barr et al. (2013) demonstrate how the generalizability of results is negatively impacted by ignoring grouping factors in the analysis. Mixed-effects models have found use in NLP before (Green et al., 2014; Cagan et al., 2017; Karimova et al., 2018; Kreutzer et al., 2020), but to the best of our knowledge they have not been used in summary evaluation.…”
Section: Related Work
confidence: 99%
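
As a concrete illustration of the mixed-effects analysis mentioned in the statement above, here is a minimal sketch assuming the statsmodels library; the data, column names, and group sizes are hypothetical, and a real evaluation would use far more observations.

```python
# Minimal sketch (hypothetical data): a linear mixed-effects model with a
# random intercept per document, so the grouping factor is not ignored --
# the concern raised by Barr et al. (2013).
import pandas as pd
import statsmodels.formula.api as smf

data = pd.DataFrame({
    "score":  [0.70, 0.72, 0.65, 0.68, 0.80, 0.78, 0.61, 0.66],
    "system": ["A", "B", "A", "B", "A", "B", "A", "B"],
    "doc":    ["d1", "d1", "d2", "d2", "d3", "d3", "d4", "d4"],
})

# Fixed effect: system; random effect: a per-document intercept.
model = smf.mixedlm("score ~ system", data, groups=data["doc"])
result = model.fit()
print(result.summary())
```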
“…Another closely related line of research is sentiment-controllable text generation (Hu et al. 2017; Cagan, Frank, and Tsarfaty 2017; Wang and Wan 2018; Li et al. 2020). Similar to those studies, our work also aims to generate text with controllable sentiments.…”
Section: Related Work
confidence: 95%