2021
DOI: 10.1007/s00146-021-01330-w

Caring in the in-between: a proposal to introduce responsible AI and robotics to healthcare

Abstract: In a scenario of growing polarization between the promises and dangers surrounding artificial intelligence (AI), how can responsible AI and robotics be introduced to healthcare? In this paper, we develop an ethical–political approach that introduces democratic mechanisms to technological development, which we call "Caring in the In-Between". Focusing on the multiple possibilities for action that emerge in the realm of uncertainty, we propose an ethical and responsible framework focused on care actions in between fears and h…

Cited by 18 publications (9 citation statements). References 59 publications (29 reference statements).
“…The purpose of such machines is to help sufferers maintain their feelings of autonomy and worth. Robots are also being developed to assist people with private tasks such as accessing the bathroom and grooming. There is a fear that robotic care could be innately inferior to human carers, since current robots are unable to demonstrate true caring and empathy: the machine performs either physiologically helpful activities or familial/emotionally helpful activities for an individual [7].…”
Section: Literature Review
confidence: 99%
“…From a holistic view of the human person, the greater danger seems to be that human beings believe this and then treat each other as if they were reducible to such data and to the statistical analyses, profiles, and predictions drawn from them; this brackets out the fact that treating human beings in such a way can be both extremely effective and dehumanizing at the same time. Thus, an ethical assessment of the use of DL, for example in profiling and predicting behavior, which already finds practical application in law, insurance, loan giving, and health care (see, e.g., [182, 331, 387–393]), would focus on the insight that such predictions and profiling can never do justice to human beings, their dignity, and their freedom as persons and citizens of our societies. This would be an anthropological analysis, backing the ethical objection to the abusive instrumentalization of DL, rather than just an ethical objection that such abuse should not happen.…”
Section: Human Beings As (Morally) Responsible Agents
confidence: 99%
“…AI systems are not ethically neutral; more and more, we are all dependent on AI for our decisions (Fry 2018). In the information society, AI is at the core of high-risk services such as healthcare (Watson et al. 2019; Zetterholm et al. 2021; Vallès-Peris and Domènech 2021), financial services (Kostka 2019; Townson 2020; Lee and Floridi 2020; Aggarwal 2020; Anshari et al. 2021), justice and security (Poitras 2014; Hauge et al. 2016; Merler et al. 2019; Green et al. 2019), and even the military (de Vynck 2021). AI is also an integral part of marketing, predicting users’ interests through big data that contain each person’s personal digital profile, in what has been called “surveillance capitalism” (Zuboff 2019).…”
Section: Introduction
confidence: 99%
“…It is widely documented that Artificial Intelligence (AI) reproduces and often amplifies biases against historically disempowered groups (Bolukbasi et al. 2016; Garga et al. 2018; Manzini et al. 2019; Nadeem et al. 2020). This constitutes a risk of exacerbating those biases offline and eventually increasing discrimination (Vinuesa et al. 2020). AI systems are not ethically neutral but, more and more, we are all dependent on AI for our decisions (Fry 2018).…”
Section: Introduction
confidence: 99%