Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing 2022
DOI: 10.18653/v1/2022.emnlp-main.538

Improving Aspect Sentiment Quad Prediction via Template-Order Data Augmentation

Cited by 10 publications (3 citation statements) · References 0 publications
“…We compare the proposed method with several classification-based aspect-based sentiment analysis models, including DP (Qiu et al., 2011), JET, TAS-BERT (Wan et al., 2020) and Extract-Classify (Cai et al., 2021). In addition, generative models are also compared, such as BARTABSA (Yan et al., 2021), GAS (Zhang et al., 2021b), Paraphrase (Zhang et al., 2021a), TODA (Hu et al., 2022), Seq2Path (Mao et al., 2022) and OTG (Bao et al., 2022).…”
Section: Results
confidence: 99%
“…Paraphrase [30]: Another generative approach that paraphrases the original sentence to better exploit the semantic knowledge in pre-trained language models. DLO [31]: Building on [30], this method finds that combining multiple template orders improves ASQP performance through data augmentation.…”
Section: Generative Methods
confidence: 99%
“…They transformed the quadruple prediction task into a text generation problem by combining annotated sentiment elements with pre-established templates and using the resulting natural language sentences as target sequences, addressing it through a Seq2Seq modeling paradigm. Following this, Hu et al. [31] built on Zhang et al. [30], observing that the order of sentiment elements in the template affects quadruple extraction performance, and proposed combining multiple templates to improve the ASQP task through data augmentation. Differing from previous studies, we propose a tree-structure information-aware prompt tuning method, aiming to jointly detect all sentiment elements in a given review sentence's tree.…”
Section: Related Work
confidence: 99%
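The template-order data augmentation idea discussed in these citation contexts can be illustrated with a minimal sketch. The marker tokens ([AT], [AC], [OT], [SP]), the example quad, and the `linearize` helper below are illustrative assumptions rather than the exact templates of Hu et al. (2022) or Zhang et al. (2021a): each annotated quad is rendered as a target sequence under several element orders, and the additional orderings serve as augmented training targets for a Seq2Seq model.

```python
# Minimal sketch (not the authors' code): linearize a sentiment quad into
# templated target sequences under different element orders, which is the
# core idea behind template-order data augmentation for ASQP.
from itertools import permutations

# One annotated quad: aspect term, aspect category, opinion term, sentiment polarity.
quad = {
    "AT": "pizza",          # aspect term
    "AC": "food quality",   # aspect category
    "OT": "delicious",      # opinion term
    "SP": "great",          # sentiment polarity rendered as a word
}

def linearize(quad, order):
    """Render one quad as a marker-style target sequence in the given element order."""
    return " ".join(f"[{key}] {quad[key]}" for key in order)

# Canonical order plus a few permutations used as augmented target sequences.
orders = list(permutations(["AT", "AC", "OT", "SP"]))[:4]

for order in orders:
    print(linearize(quad, order))
# e.g. "[AT] pizza [AC] food quality [OT] delicious [SP] great"
```

In practice, each augmented ordering is paired with the same input sentence, so the generation model sees several target renderings of one annotation; at inference time the predictions from different orders can be aggregated.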