2021
DOI: 10.3390/electronics10182195

Explainable Sentiment Analysis: A Hierarchical Transformer-Based Extractive Summarization Approach

Abstract: In recent years, the explainable artificial intelligence (XAI) paradigm has gained wide research interest. The natural language processing (NLP) community is also approaching this paradigm shift: building models that provide an explanation of the decision on some main task without degrading performance. This is no easy job, especially when poorly interpretable models are involved, such as the transformers that have become almost ubiquitous in recent NLP literature. Here…

Cited by 16 publications (12 citation statements)
References 35 publications
“…In 2021, [11] conducted an aspect-based SA study on consumer product review data. They proposed two BERT models for aspect extraction and sentiment classification, using parallel and hierarchical aggregation methods based on a hierarchical transformer model [12]. The following Fig.…”
Section: Related Work
confidence: 99%
“…BERT-BASE and BERT-LARGE are the two original models. The base model consists of 12 encoders with bidirectional self-attention, while the large model consists of 24 encoders and 16 bidirectional attention heads. The BERT model is pre-trained on 800 million words of unlabeled text from BooksCorpus and 2.5 billion words from English Wikipedia.…”
Section: Introduction
confidence: 99%
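The architectural figures quoted above (12 vs. 24 encoder layers) translate directly into parameter counts. The following is a minimal sketch, not an official formula: it approximates the encoder-stack size of each variant from the published hyperparameters (hidden size 768/1024, feed-forward size 3072/4096), ignoring the embedding layers. The function name `transformer_layer_params` is our own illustration; note that the number of attention heads splits the same projection matrices and does not change the count.

```python
# Hedged sketch: approximate per-layer parameter count of a BERT-style
# encoder from its published hyperparameters (embeddings excluded).

def transformer_layer_params(hidden, ffn):
    # Self-attention: Q, K, V and output projections (hidden x hidden + bias each).
    attn = 4 * (hidden * hidden + hidden)
    # Feed-forward block: two linear maps, hidden -> ffn -> hidden.
    ffn_params = (hidden * ffn + ffn) + (ffn * hidden + hidden)
    # Two LayerNorms (scale + shift vectors).
    norms = 2 * 2 * hidden
    return attn + ffn_params + norms

configs = {
    "BERT-Base":  {"layers": 12, "hidden": 768,  "ffn": 3072},  # 12 heads
    "BERT-Large": {"layers": 24, "hidden": 1024, "ffn": 4096},  # 16 heads
}

for name, c in configs.items():
    total = c["layers"] * transformer_layer_params(c["hidden"], c["ffn"])
    print(f"{name}: ~{total / 1e6:.0f}M encoder parameters")
```

Adding the token/position embeddings brings these totals close to the commonly cited ~110M and ~340M figures for the two models.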
“…Some output explanations can be seen in Table 7. Detailed transcripts from GPT-3 can be found here 4 . Generally, GPT-3 provided poor explanations.…”
Section: Results
confidence: 99%
“…Although these explanations provide insight into the decisions behind the model, they fail to provide enough semantic meaning with respect to the creative process. In [4], an attempt is made to provide deeper explanations in the task of sentiment classification. This also involved an analysis of the attention weights; however, instead of focusing the explanations on this feature, they provided a summary of the most relevant sentences (according to the weights) as the explanation.…”
Section: Transformer-Based Approaches in Explainable AI
confidence: 99%
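The mechanism described in that excerpt — ranking sentences by attention weight and returning the top ones as the explanation — can be sketched in a few lines. This is our own illustrative reconstruction of the idea, not the authors' code: the function `attention_summary`, the sample review, and the weights are all hypothetical.

```python
# Hedged sketch of attention-based extractive explanation: score each sentence
# by an attention-like weight, keep the top-k, and return them in document
# order as the "explanation" for a sentiment decision.

def attention_summary(sentences, weights, k=2):
    ranked = sorted(range(len(sentences)), key=lambda i: weights[i], reverse=True)
    keep = sorted(ranked[:k])  # restore original order for readability
    return [sentences[i] for i in keep]

review = [
    "The packaging was fine.",
    "Battery life is outstanding.",
    "Shipping took a while.",
    "The screen is bright and sharp.",
]
weights = [0.05, 0.45, 0.10, 0.40]  # illustrative attention mass per sentence

print(attention_summary(review, weights))
# → ['Battery life is outstanding.', 'The screen is bright and sharp.']
```

The design choice noted in the excerpt is that the summary itself, rather than the raw weight vector, is surfaced to the user, which trades numeric fidelity for readability.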