Proceedings of the 11th International Conference on Natural Language Generation 2018
DOI: 10.18653/v1/W18-6552
Cyclegen: Cyclic consistency based product review generator from attributes

Abstract: In this paper we present an automatic review generator system which can generate personalized reviews based on the user identity, product identity, and the designated rating the user wishes to allot to the review. We combine this with a sentiment analysis system which performs the complementary task of assigning ratings to reviews based purely on the textual content of the review. We introduce an additional loss term to ensure cyclic consistency of the sentiment rating of the generated review with the conditioning …
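The cyclic-consistency idea in the abstract can be sketched as follows: a generator produces a review conditioned on a target rating, a sentiment classifier maps the generated text back to a rating, and an extra loss penalizes disagreement between the predicted and conditioning ratings. This is a minimal illustrative sketch, not the paper's implementation; all module names, shapes, and the soft-token trick for keeping gradients are assumptions.

```python
import torch
import torch.nn as nn

NUM_RATINGS = 5          # e.g. 1-5 star ratings, zero-indexed here
VOCAB, EMB = 1000, 32    # toy vocabulary and embedding sizes

class ToyGenerator(nn.Module):
    """Stand-in decoder: emits token logits conditioned on a rating embedding."""
    def __init__(self):
        super().__init__()
        self.rating_emb = nn.Embedding(NUM_RATINGS, EMB)
        self.proj = nn.Linear(EMB, VOCAB)

    def forward(self, rating, seq_len=8):
        h = self.rating_emb(rating)             # (B, EMB)
        logits = self.proj(h).unsqueeze(1)      # (B, 1, VOCAB)
        return logits.expand(-1, seq_len, -1)   # (B, T, VOCAB)

class ToySentimentClassifier(nn.Module):
    """Predicts a rating distribution from (soft) token distributions."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Linear(VOCAB, EMB)        # soft-embedding lookup
        self.out = nn.Linear(EMB, NUM_RATINGS)

    def forward(self, token_probs):
        h = self.emb(token_probs).mean(dim=1)   # pool over the sequence
        return self.out(h)                      # (B, NUM_RATINGS)

def cyclic_consistency_loss(gen, clf, rating):
    # Soft token distributions keep the pipeline differentiable end to end.
    token_probs = torch.softmax(gen(rating), dim=-1)
    pred = clf(token_probs)                     # rating logits from the review
    return nn.functional.cross_entropy(pred, rating)

gen, clf = ToyGenerator(), ToySentimentClassifier()
rating = torch.tensor([0, 4])                   # batch of conditioning ratings
loss = cyclic_consistency_loss(gen, clf, rating)
loss.backward()                                 # gradients reach the generator
```

In training this term would be added to the usual language-modeling loss, nudging the generator toward reviews whose sentiment matches the requested rating.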

Cited by 6 publications (5 citation statements) · References 6 publications
“…Models As baselines, we report performance of Attr2Seq (Dong et al., 2017) and Cyclegen (Sharma et al., 2018), which use embedding vectors to encode given attributes. For our models, we report performance of GEN using retrieved references by Coarse- and Fine-Grained REFLECT trained using SL and RL, denoted by GEN-C-F (SL) and GEN-C-F (RL) respectively².…”
Section: Evaluation Results
Confidence: 99%
“…While review generation is essentially a subtask of D2T, it is relatively understudied compared to other D2T tasks. Previous models include an encoder-decoder model with attention (Dong et al., 2017), improved by including an objective function for rating accuracy (Sharma et al., 2018; Li and Tuzhilin, 2019), by introducing a hierarchical decoder (Zang and Wan, 2017), by decomposing the decoding stage in a coarse-to-fine manner, and by using additional inputs such as a user-given summary (Ni and McAuley, 2018) or product description (Li and Tuzhilin, 2019). In this paper, we make performance improvements by proposing the concept of leveraging references, and the extensions proposed in the aforementioned literature are orthogonal and thus applicable to improve our models further.…”
Section: Related Work
Confidence: 99%
“…They also introduced an attention mechanism to synthesize comments and align words with input attributes. Sharma et al. [51] used a model similar to [10] and added loss terms to generate more compliant comments. Ni et al. [41] designed a review generation model that could make use of user and product information as well as auxiliary text input and aspect-aware knowledge.…”
Section: B. Personalized Review Generation
Confidence: 99%
“…The output of the encoder is decoded by stacked multi-layer RNNs to generate reviews. Cyclegen [200] improves Att2Seq with an additional loss that forces it to generate reviews that better adhere to the sentiment rating by constraining the generation space.…”
Section: Generation-based Methods
Confidence: 99%