2021
DOI: 10.48550/arxiv.2103.11436
Preprint

Responsible AI: Gender bias assessment in emotion recognition

Abstract: The rapid development of artificial intelligence (AI) systems amplifies many concerns in society. These AI algorithms inherit various biases from humans, and their opaque operational flow makes such biases hard to detect, which can make their use harmful. As a result, researchers have begun to address the issue by investigating the directions of Responsible and Explainable AI. Among the many applications of AI, facial expression recognition might not be the most important one, yet it is considered a valuable pa…

Cited by 9 publications (11 citation statements)
References 84 publications
“…With the advancement of AI, many are concerned that AI development could lead to significant and irreversible repercussions, given the substantial developments in affective computing and the considerable amount of work on facial expression recognition and deep learning. In [19], the authors provided an overview of FER biases in the context of responsible AI. They focused on gender bias and stated that race is another bias that needs to be addressed.…”
Section: Related Work
confidence: 99%
“…These subdomains can cover different sources of possible bias, be it gender, race, age, or others. For example, in [2], the authors show which models are gender-biased, which are not, and how the gender of the subject affects emotion recognition. They also quantify the extent of this bias by measuring the accuracy gap in emotion recognition between male and female test sets and by observing which types of emotions are better classified for men and for women.…”
Section: Related Work
confidence: 99%
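The gap measurement described in the statement above can be sketched as follows. This is a minimal illustration, not the authors' actual evaluation code: the function names, the grouping by a `gender` field, and the toy labels are all assumptions made here for clarity.

```python
# Hypothetical sketch: per-gender accuracy and the gap between them,
# as described for the gender bias assessment in [2].

def accuracy(pairs):
    """Fraction of (true, predicted) emotion pairs that match."""
    return sum(t == p for t, p in pairs) / len(pairs)

def gender_accuracy_gap(samples):
    """samples: iterable of (gender, true_emotion, predicted_emotion).
    Returns per-group accuracy and the absolute male/female gap."""
    by_group = {}
    for gender, y_true, y_pred in samples:
        by_group.setdefault(gender, []).append((y_true, y_pred))
    acc = {g: accuracy(pairs) for g, pairs in by_group.items()}
    return acc, abs(acc["male"] - acc["female"])

# Toy data, purely illustrative: 4 samples per group.
samples = [
    ("male", "happy", "happy"), ("male", "sad", "sad"),
    ("male", "angry", "sad"),   ("male", "happy", "happy"),
    ("female", "happy", "sad"), ("female", "sad", "sad"),
    ("female", "angry", "angry"), ("female", "happy", "sad"),
]
acc, gap = gender_accuracy_gap(samples)
print(acc, gap)  # male: 0.75, female: 0.5, gap: 0.25
```

A per-emotion breakdown (the "which types of emotions are better classified" part) would follow the same pattern, grouping by `(gender, true_emotion)` instead of by gender alone.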
“…In recent times, research in artificial intelligence (AI) and machine learning (ML) techniques has led to significant improvements in computer vision, speech processing, and language technologies, among others. Consequently, these advances have brought increased attention to the ethics of such ML models [1][2][3].…”
Section: Introduction
confidence: 99%
“…Social biases in downstream tasks expose users with multiple disadvantaged sensitive attributes to unknown but potentially harmful outcomes, especially when models trained on downstream tasks are used in real-world decision making, such as for screening résumés or predicting recidivism in criminal proceedings (Bolukbasi et al., 2016; Angwin et al., 1999). In this work, we choose emotion regression as a downstream task because social biases are often realized through emotion recognition (Elfenbein and Ambady, 2002) and machine learning models have been shown to reflect gender bias in emotion recognition tasks (Domnich and Anbarjafari, 2021). For example, sentiment analysis and emotion regression may be used by companies to measure product engagement for different social groups.…”
Section: Introduction
confidence: 99%