Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics 2019
DOI: 10.18653/v1/p19-1243
Entity-Centric Contextual Affective Analysis

Abstract: While contextualized word representations have improved state-of-the-art benchmarks in many NLP tasks, their potential usefulness for social-oriented tasks remains largely unexplored. We show how contextualized word embeddings can be used to capture affect dimensions in portrayals of people. We evaluate our methodology quantitatively, on held-out affect lexicons, and qualitatively, through case examples. We find that contextualized word representations do encode meaningful affect information, but they are heavily biased towards their training data.
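As a rough illustration of the kind of analysis the abstract describes, the sketch below probes a contextual encoder for one affect dimension of an entity mention. The model name, the tiny "power" lexicon, the template sentence, and the ridge probe are all illustrative assumptions, not the authors' actual method or data.

```python
# Hedged sketch: probing contextualized embeddings for an affect dimension
# of an entity mention. Model, lexicon values, and probe are assumptions.
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import Ridge

MODEL_NAME = "bert-base-uncased"  # assumption: any contextual encoder could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModel.from_pretrained(MODEL_NAME)
model.eval()

def entity_embedding(sentence: str, entity: str) -> torch.Tensor:
    """Mean-pool the contextual vectors of the entity's word-piece tokens."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, dim)
    ent_ids = tokenizer(entity, add_special_tokens=False)["input_ids"]
    ids = enc["input_ids"][0].tolist()
    # locate the entity's token span inside the sentence (first match)
    for i in range(len(ids) - len(ent_ids) + 1):
        if ids[i:i + len(ent_ids)] == ent_ids:
            return hidden[i:i + len(ent_ids)].mean(dim=0)
    return hidden[1:-1].mean(dim=0)  # fallback: pool over the whole sentence

# Toy affect "lexicon": words paired with a made-up power score in [0, 1].
lexicon = {"leader": 0.9, "hero": 0.8, "victim": 0.2, "servant": 0.1}
template = "The {} spoke at the meeting."

X = torch.stack([entity_embedding(template.format(w), w) for w in lexicon])
y = list(lexicon.values())

# Fit a linear probe from embeddings to the affect dimension, then score
# a new entity mention in context.
probe = Ridge(alpha=1.0).fit(X.numpy(), y)
vec = entity_embedding("The senator dominated the debate.", "senator")
print("estimated power score:", probe.predict(vec.numpy().reshape(1, -1))[0])
```

Holding out part of the affect lexicon and checking how well the probe recovers its scores gives the quantitative evaluation the abstract mentions; inspecting scores for specific people in context gives the qualitative case examples.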

Cited by 18 publications (17 citation statements) · References 24 publications
“…The pioneering work of Bolukbasi et al. (2016) demonstrated that word embeddings (even when trained on formal corpora) exhibit gender stereotypes to a disturbing extent. On top of that, several studies have been proposed to measure and mitigate bias in word embeddings (Chaloner and Maldonado, 2019; Zhou et al., 2019) and more recently in pre-trained contextualized embedding models (Kurita et al., 2019; May et al., 2019; Field and Tsvetkov, 2019; Sheng et al., 2019; Nangia et al., 2020; Vig et al., 2020).…”
Section: Related Work (mentioning)
confidence: 99%
“…We also study the correlation between entity mentions and moral foundation usage by different groups, which helps pave the way to analyze partisan sentiment towards entities using MFT. In that sense, our work is broadly related to entity-centric affective analysis (Deng and Wiebe, 2015; Field and Tsvetkov, 2019; Park et al., 2020).…”
Section: Related Work (mentioning)
confidence: 99%
“…To the best of our knowledge, this is the first work that presents a multi-pronged investigation of brands and subjective knowledge such as affect attributes represented in contextual representations. Field and Tsvetkov (2019) is the most relevant prior work in terms of affect analysis. They present an entity-centric affective analysis using contextual representations, where they find that meaningful affect information is captured in contextualized word representations, but these representations are heavily biased towards their training data.…”
Section: Related Work (mentioning)
confidence: 99%