Proceedings of the 23rd Conference on Computational Natural Language Learning (CoNLL) 2019
DOI: 10.18653/v1/k19-1091

Learning to Detect Opinion Snippet for Aspect-Based Sentiment Analysis

Abstract: Aspect-based sentiment analysis (ABSA) aims to predict the sentiment polarity towards a particular aspect in a sentence. Recently, this task has been widely addressed with the neural attention mechanism, which computes attention weights to softly select words for generating aspect-specific sentence representations. The attention is expected to concentrate on opinion words for accurate sentiment prediction. However, attention is prone to be distracted by noisy or misleading words, or by opinion words from other aspects…
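To make the mechanism the abstract describes concrete, here is a minimal sketch of aspect-conditioned soft attention in PyTorch. It is a generic illustration, not the paper's actual model; the function name `aspect_attention`, the dot-product scoring, and the toy dimensions are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def aspect_attention(word_states, aspect_vec):
    """Softly select words conditioned on an aspect vector.

    word_states: (seq_len, hidden) contextual word representations
    aspect_vec:  (hidden,) representation of the target aspect
    Returns the aspect-specific sentence vector and the attention weights.
    """
    # Score each word by its affinity to the aspect (dot-product scoring
    # is one common choice; additive scoring is another).
    scores = word_states @ aspect_vec        # (seq_len,)
    weights = F.softmax(scores, dim=0)       # attention distribution over words
    sentence_vec = weights @ word_states     # weighted sum, (hidden,)
    return sentence_vec, weights

# Toy usage: 5 words, hidden size 8
h = torch.randn(5, 8)
a = torch.randn(8)
vec, attn = aspect_attention(h, a)
```

The softmax weights are exactly the "soft selection" the abstract refers to; the paper's observation is that nothing constrains them to land on the correct aspect's opinion words, which motivates detecting the opinion snippet instead.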

Cited by 15 publications (5 citation statements) · References 24 publications
“…Thus, introducing a context-aware word embedding layer pre-trained on large-scale datasets with deep LSTMs (McCann et al., 2017; Peters et al., 2018; Howard and Ruder, 2018) or Transformers (Radford et al., 2018, 2019; Devlin et al., 2019; Lample and Conneau, 2019; Yang et al., 2019; Dong et al., 2019), and fine-tuning a lightweight task-specific network using the labeled data, has good potential for further enhancing the performance. Hu et al. (2019a) have conducted some initial attempts to couple the deep contextualized word embedding layer with downstream neural models for the original ABSA task and establish new state-of-the-art results. This encourages us to explore the potential of using such contextualized embeddings for the more difficult but practical task, i.e.…”
Section: Introduction (mentioning, confidence: 99%)
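As an illustration of the pattern this quote describes (a pre-trained contextualized encoder feeding a lightweight task-specific network), here is a hedged sketch using the HuggingFace transformers API. The checkpoint name, the 3-way label space, and the frozen-encoder choice are assumptions for the example, not details from the cited work.

```python
import torch
from transformers import AutoModel, AutoTokenizer

# A frozen pre-trained encoder plus a lightweight classification head.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
encoder = AutoModel.from_pretrained("bert-base-uncased")
head = torch.nn.Linear(encoder.config.hidden_size, 3)  # neg / neu / pos

inputs = tokenizer("The pizza was great but the service was slow.",
                   return_tensors="pt")
with torch.no_grad():                                # keep the encoder frozen
    hidden = encoder(**inputs).last_hidden_state     # (1, seq_len, hidden)
logits = head(hidden[:, 0])                          # classify from [CLS]
```

Only the linear head is trained here; fully fine-tuning the encoder, as several of the cited papers do, simply means dropping the `no_grad` context and backpropagating through `encoder` as well.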
“…Researchers have also tried to integrate the attention mechanism with LSTMs to predict sentiment for target-aspect pairs (Ma, Peng, and Cambria 2018). With the recent success of BERT-based models, various papers have used BERT to generate contextualized embeddings for input sentences, which are then used to classify sentiment for target-aspect pairs (Huang and Carley 2019; Hu et al. 2019). More recent papers have fine-tuned BERT for TABSA either by (i) constructing auxiliary sentences with different pairs of targets and aspects or (ii) modifying the top-most classification layer to also take in targets and aspects (Rietzler et al. 2020; Sun, Huang, and Qiu 2019).…”
Section: Targeted Aspect-Based Sentiment Analysis (mentioning, confidence: 99%)
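The auxiliary-sentence idea in option (i) above is easy to sketch. The template below follows the question-style construction of Sun, Huang, and Qiu (2019) in spirit; the exact wording, the function name, and the LOC1 placeholder are illustrative assumptions rather than the published template.

```python
def build_auxiliary_sentence(target: str, aspect: str) -> str:
    """Turn a (target, aspect) pair into a question so that TABSA becomes
    BERT-style sentence-pair classification. The template wording is an
    illustrative assumption, not the exact published one."""
    return f"what do you think of the {aspect} of {target}?"

review = "LOC1 is expensive but the food is fantastic."
aux = build_auxiliary_sentence("LOC1", "price")
# BERT then consumes the pair as: [CLS] review [SEP] aux [SEP]
# and the classifier over [CLS] predicts the sentiment for that pair.
print(aux)  # -> "what do you think of the price of LOC1?"
```

One auxiliary sentence is generated per (target, aspect) combination, so a single review yields several sentence-pair classification instances.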
“…Most recently, authors have used transfer-learning-based models. BERT has been used in various papers [27,28] to produce contextualized embeddings for input sentences, which were subsequently used to identify the sentiment for target-aspect pairs. The authors in [29,30] used BERT as the embedding layer, while the authors in [31] used a fine-tuning approach for BERT, with an additional layer acting as the classification layer.…”
Section: Related Work (mentioning, confidence: 99%)