2022
DOI: 10.3233/ip-211529
Emotional AI: Legal and ethical challenges

Abstract: The European Commission has presented a draft for an Artificial Intelligence Act (AIA). This article deals with legal and ethical questions of the datafication of human emotions. In particular, it raises the question of how emotions are to be legally classified, and addresses the concept of "emotion recognition systems" in the sense of the draft Artificial Intelligence Act (AIA) published by the European Commission. As it turns out, the fundamental right to freedom of thought as well as the quest…

Cited by 10 publications (6 citation statements)
References 9 publications
"…For example, a pulled-up eyebrow is a feature that can appear in angry, disgusted, and fearful faces. These challenges place great limitations on a model's predictive capability [8], [11]. Some works in the literature [3], [7], [23] addressed this problem by reducing the number of classes or by combining them into a single class.…"
Section: B. Model Training and Testing
mentioning
confidence: 99%
"…In the Anglophone literature, such systems are being labeled Emotional AI, emotion AI, emotion recognition, and artificial emotional intelligence,13 and there have been some attempts at definition.14 In this regard,9 on the experimentation with these systems in areas other than migration, see Gremsl and Hödl (2022). On this point, Bard (2021, p. 5): "This article contends that for all the reasons that there is an urgent need to regulate AI in general, the growing influence of Emotion AI presents issues of even more concern".…"
Section: The Incipient Advance of Emotional Artificial Intelligence
unclassified
"…27 With respect to the risks in the field of fundamental rights, before the violation of any other right, what is called into question is the very value, principle, and right of human dignity. A system such as the one proposed, which would detect lies and classify the subject by assigning a risk level, entails the degradation of the human being to a mere object (Gremsl and Hödl, 2022), insofar as it strips the person of autonomy and self-determination, that is, the capacity to shape one's own will (such as the will to cross a border for certain reasons), and entrusts the determination of that will to a system (the ADDS) responsible for quantifying the probability that the individual is lying, on the basis of facial micro-expressions and other data. Confining ourselves to the EU, the case law of the Court of Justice of the European Union could be invoked to argue the illegality of a system such as iBorderCtrl, since it entails the reification of the human being;28 and it does not seem trivial to recall that the recent guidelines of the High-Level Expert Group on Artificial Intelligence (2019, p. 13) state that, in the field of AI systems, "respect for human dignity implies that all persons" may not be treated "as mere objects that can be filtered [and] sorted […]".…"
Section: The iBorderCtrl Project: General Features
unclassified
"…In light of the advancements achieved in the field of EAI over recent years, along with the vast potential applications of emotionally capable AI systems and the promising opportunities they offer, a number of studies have emerged offering ethical perspectives on EAI (McStay, 2018, 2020b; Greene, 2020; Gremsl and Hödl, 2022; Ghotbi, 2023; Gossett, 2023). These investigations have highlighted the potential benefits of emotionally capable AI systems, while also drawing attention to the associated risks, with key concerns including issues related to privacy, the potential for manipulation, and the threat of exacerbating socio-economic disparities.…"
mentioning
confidence: 99%
"…I will explore and ask whether it would be advisable from an ethical perspective to equip these AI systems with emotional capacities. Despite the existence of a significant corpus of research that provides ethical perspectives on AI-DSS in general or their use in specific contexts (Braun et al., 2020; Lara and Deckers, 2020; Stefan and Carutasu, 2020; Cartolovni et al., 2022; Nikola et al., 2022), alongside a comprehensive body of literature addressing the ethics of EAI (McStay, 2018, 2020b; Greene, 2020; Gremsl and Hödl, 2022; Ghotbi, 2023; Gossett, 2023), so far there has been no research that intersects these two domains. Specifically, there is a lack of investigation into the ethics of emotionally capable AI-DSS.…"
mentioning
confidence: 99%