Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP 2020)
DOI: 10.18653/v1/2020.emnlp-main.451

Inducing Target-Specific Latent Structures for Aspect Sentiment Classification

Abstract: Aspect-level sentiment analysis aims to recognize the sentiment polarity of an aspect or a target in a comment. Recently, graph convolutional networks based on linguistic dependency trees have been studied for this task. However, the dependency parsing accuracy of commercial product comments or tweets might be unsatisfactory. To tackle this problem, we associate linguistic dependency trees with automatically induced aspect-specific graphs. We propose gating mechanisms to dynamically combine information from wor…
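The abstract describes gating mechanisms that dynamically combine a dependency-tree view of the sentence with an automatically induced aspect-specific graph. A minimal NumPy sketch of one plausible form of such a gate is below; the function name `gated_combine`, the sigmoid gate, and the concatenation-based gate input are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def gated_combine(H_dep, H_latent, W_g):
    """Blend two node-representation matrices with a learned gate.

    H_dep    : (n, d) token features from the dependency-tree graph.
    H_latent : (n, d) token features from the induced aspect-specific graph.
    W_g      : (2d, d) gate projection (a hypothetical learned parameter).
    """
    # Sigmoid gate computed from both views; each dimension of each token
    # decides how much to take from the dependency view vs. the latent view.
    gate_input = np.concatenate([H_dep, H_latent], axis=-1) @ W_g
    g = 1.0 / (1.0 + np.exp(-gate_input))
    return g * H_dep + (1.0 - g) * H_latent
```

Because the gate is elementwise in (0, 1), every output entry is a convex combination of the two input views, so the combined representation always lies between them.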

Cited by 95 publications (61 citation statements)
References 37 publications
“…For the ALSC task, we evaluate our model on five datasets 1 , whose statistics are listed in Table 1, including (i) Laptop14 (SemEval-2014T4) (Pontiki et al., 2014) with laptop reviews, (ii) Rest14 (SemEval-2014T4), Rest15 (SemEval-2015T12) and Rest16 (SemEval-2016T5) (Pontiki et al., 2014, 2015, 2016) with restaurant reviews, and (iii) Twitter (Mitchell et al., 2013) with tweets. Table 4: ALSC results on the five datasets. †Numbers are from Dai et al. (2021), and others are from the original papers, i.e., ASGCN (Zhang et al., 2019a), CDT (Sun et al., 2019b), BiGCN (Zhang and Qian, 2020), DGEDT (Tang et al., 2020), RGAT, kumaGCN (Chen et al., 2020), PWCN-FT-RoBERTa.…”
Section: Methods
confidence: 99%
“…Despite their effectiveness, they lack interpretability for the sentiment prediction. For further improvement, many methods (Sun et al., 2019a; Chen et al., 2020; Mao et al., 2021; Dai et al., 2021) introduce pre-trained Transformers, which also bring a little interpretability due to the highlighted connections between aspect and opinion terms derived from latent syntactic knowledge (Wu et al., 2020; Dai et al., 2021). However, the derived interpretability is far from human-level.…”
Section: Related Work
confidence: 99%
“…Most recently, several works explore the idea of combining different types of graphs for the ABSA task. For instance, Chen et al. (2020) combined a dependency graph and a latent graph to generate the aspect representation. Zhang and Qian (2020) observed the characteristics of word co-occurrence in linguistics and designed hierarchical syntactic and lexical graphs.…”
Section: Related Work
confidence: 99%
“…More recent efforts (Zhang et al., 2019; Sun et al., 2019b; Huang and Carley, 2019; Zhang and Qian, 2020; Chen et al., 2020; Liang et al., 2020; Wang et al., 2020; Tang et al., 2020) have been devoted to graph convolutional networks (GCNs) and graph attention networks (GATs) over dependency trees, which explicitly exploit the syntactic structure of a sentence. Consider the dependency tree in Figure 1; the syntactic dependency can establish connections between the words in a sentence.…”
Section: Introduction
confidence: 99%
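The statement above describes GCNs applied over dependency trees, where parse edges connect syntactically related words. A minimal NumPy sketch of that idea follows; the head-index encoding, `dep_adjacency`, and `gcn_layer` are illustrative assumptions rather than any cited paper's implementation.

```python
import numpy as np

def dep_adjacency(heads):
    # Build a symmetric adjacency matrix from a dependency parse:
    # heads[i] is the 0-based head of token i, and -1 marks the root.
    n = len(heads)
    A = np.zeros((n, n))
    for i, h in enumerate(heads):
        if h >= 0:
            A[i, h] = A[h, i] = 1.0
    return A

def gcn_layer(A, H, W):
    # One graph-convolution step: add self-loops, row-normalize so each
    # token averages over its syntactic neighbors, project, apply ReLU.
    A_hat = A + np.eye(A.shape[0])
    A_hat = A_hat / A_hat.sum(axis=1, keepdims=True)
    return np.maximum(A_hat @ H @ W, 0.0)

# Usage: "the food is great" with hypothetical heads
# (the -> food, food -> great, is -> great, great = root).
A = dep_adjacency([1, 3, 3, -1])
```

Stacking such layers lets sentiment cues at opinion words (e.g. "great") propagate along parse edges toward the aspect term ("food") in a small number of hops.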