Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, 2021
DOI: 10.18653/v1/2021.acl-long.367
Learning Span-Level Interactions for Aspect Sentiment Triplet Extraction

Abstract: Aspect Sentiment Triplet Extraction (ASTE) is the most recent subtask of ABSA, which outputs triplets of an aspect target, its associated sentiment, and the corresponding opinion term. Recent models perform the triplet extraction in an end-to-end manner but rely heavily on the interactions between each target word and opinion word. As a result, they cannot perform well on targets and opinions that contain multiple words. Our proposed span-level approach explicitly considers the interaction between the whole spans o…

Cited by 79 publications (50 citation statements)
References 41 publications
“…To further explore this task, (Mao et al., 2021; Chen et al., 2021a) transformed ASTE into a machine reading comprehension problem and utilized a shared BERT encoder to obtain the triplets after multiple decoding stages. Another line of research focuses on designing a new tagging scheme that enables the model to extract the triplets in an end-to-end fashion (Wu et al., 2020a; Xu et al., 2021; Yan et al., 2021). For instance, a position-aware tagging scheme was proposed, which addresses the limitations of existing works by enriching the expressiveness of labels.…”
Section: Related Work
confidence: 99%
“…Aspect Sentiment Triplet Extraction (ASTE): TwoStage [12], JET [91], GTS [75], OTE-MTL [92], BMRC [93], Dual-MRC [94], GAS [95], Gen-ABSA [96], Span-ASTE [97], NAG-ASTE [98], PASTE [99]. Aspect-Category-Sentiment Detection (ACSD): TAS-BERT [46], MEJD [100], GAS [95], Paraphrase [7]. Quad Extraction — Aspect Sentiment Quad Prediction (ASQP): Extract-Classify-ACOS [101], Paraphrase [7]. (Fig. 2.)…”
Section: Aspect Sentiment Triplet Extraction (ASTE)
confidence: 99%
“…Since those methods rely on interactions between word pairs, they may not perform well when the aspect terms or opinion terms are multi-word expressions. Motivated by this observation, Xu et al. [97] propose a span-level interaction model that explicitly considers the interactions between whole aspect spans and whole opinion spans to improve performance.…”
Section: Aspect Sentiment Triplet Extraction (ASTE)
confidence: 99%
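The span-level pairing idea described in this statement can be sketched as follows. This is a minimal illustration under assumed names, not the authors' implementation: `enumerate_spans`, `pair_spans`, and `score_fn` are hypothetical stand-ins, with `score_fn` playing the role of a learned classifier over (aspect span, opinion span) pairs.

```python
def enumerate_spans(tokens, max_len=4):
    """Enumerate all contiguous spans up to max_len tokens.

    Spans are (start, end) index pairs, inclusive on both ends.
    Hypothetical helper, not from the original paper.
    """
    spans = []
    for start in range(len(tokens)):
        for end in range(start, min(start + max_len, len(tokens))):
            spans.append((start, end))
    return spans


def pair_spans(tokens, aspect_spans, opinion_spans, score_fn, threshold=0.5):
    """Score every (aspect span, opinion span) pair and keep confident triplets.

    score_fn(tokens, aspect, opinion) -> (sentiment_label, confidence) is a
    stand-in for a learned pair classifier; the threshold is an assumption.
    """
    triplets = []
    for a in aspect_spans:
        for o in opinion_spans:
            sentiment, conf = score_fn(tokens, a, o)
            if conf >= threshold:
                triplets.append((a, o, sentiment))
    return triplets
```

Because both spans are scored as wholes, a multi-word target such as "battery life" is matched to an opinion span in a single decision rather than word by word, which is the advantage the citing authors highlight.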
“…Yet another line of work (Wu et al., 2020a) solves the triplet task by labeling the relations of word pairs. Xu, Chia, and Bing (2021) go further by labeling span pairs with a pruning method. Moreover, Mao et al. (2021) and Yan et al. (2021) propose combining a set of subtasks through unified formulations.…”
Section: Aspect-Based Sentiment Analysis
confidence: 99%
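The pruning method mentioned here is needed because enumerating all span pairs is quartic in sentence length; ranking candidate spans by a mention score and keeping only a budgeted fraction makes pairing tractable. A small sketch under assumed names, with `mention_score` standing in for a learned mention classifier and the proportional budget being an illustrative assumption:

```python
def prune_spans(spans, mention_score, keep_ratio=0.5):
    """Keep only the top-scoring fraction of candidate spans before pairing.

    mention_score(span) -> float is a stand-in for a learned scorer of how
    likely a span is a valid aspect or opinion mention; keep_ratio plays the
    role of the pruning budget (hypothetical parameterization).
    """
    k = max(1, int(len(spans) * keep_ratio))
    ranked = sorted(spans, key=mention_score, reverse=True)
    return ranked[:k]
```

Pruning each candidate set from n spans to roughly n * keep_ratio spans shrinks the number of pairs to score by a factor of about keep_ratio squared, at the cost of possibly discarding a gold span the scorer ranks low.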
“…Separating the task into multiple stages potentially breaks the relation modeling within the triplets and brings about non-negligible error propagation. Recently, several neural-network-based models (Wu et al., 2020a; Xu et al., 2020; Xu, Chia, and Bing, 2021) have been proposed to develop end-to-end frameworks with sequence tagging. In these approaches, aspects and opinions are jointly extracted and polarities are also jointly considered.…”
Section: Introduction
confidence: 99%