2022
DOI: 10.1145/3512925

Attitudes and Folk Theories of Data Subjects on Transparency and Accuracy in Emotion Recognition

Abstract: The growth of technologies promising to infer emotions raises political and ethical concerns, including concerns regarding their accuracy and transparency. A marginalized perspective in these conversations is that of data subjects potentially affected by emotion recognition. Taking social media as one emotion recognition deployment context, we conducted interviews with data subjects (i.e., social media users) to investigate their notions about accuracy and transparency in emotion recognition and interrogate st…

Cited by 26 publications (11 citation statements)
References 84 publications
“…To ensure fair treatment of workers, emotion AI technologies should be used fairly and ethically [143]. Fair and ethical use of emotion AI may include commitments by actors deploying emotion AI systems that it is meaningfully consented to [23]; that its (potentially biased, unreliable, and inaccurate [14,38,110,120]) information is transparent and contestable [31,66,143]; and that its use does not widen power asymmetries, such as those already present between workers and their employers [4,31]. Yet, so far, the use of emotion AI in workplaces remains largely unconstrained and unregulated, and in the modern US workplace, the growing adoption of emotion AI-enabled workplace surveillance is predicted to become the new norm [157].…”
Section: Adverse Consequences of Emotion AI (mentioning)
confidence: 99%
“…For example, emotion recognition algorithms are highly contested: emotions are situated, personal, relational, and complex, and therefore cannot simply be measured as neatly observable categories (Grill and Andalibi 2022; Boehner et al. 2007), yet some companies and researchers still claim that, through algorithmic classification of facial expressions, emotions can be automatically and universally recognized (Stark 2019a). Since folk theories suggest that facial expressions in images often correspond to assumed universal emotions, some advocates of the technology were able to convince others to refer to facial expression classification as emotion recognition.…”
Section: Conflating Spurious or Partial Proxies With a Construct Make... (mentioning)
confidence: 99%
“…Other important examples include the term "ground truth," which implies that (usually human-labeled) data represent some absolute truth about the world (Jaton 2017); "data-driven," which suggests that human judgment does not matter; or the term "prediction," which insinuates that ML algorithms are able to reliably forecast the future (Chun 2021). This issue also extends to how basic research problems in the field are named; e.g., "emotion recognition" wrongly implies that actual emotions are recognized by algorithms (Grill and Andalibi 2022). Thus, there is a need for careful renaming of concepts and terms to improve descriptions of the capabilities and limits of algorithms for different audiences.…”
Section: Rethinking Accuracy (mentioning)
confidence: 99%