2016
DOI: 10.1007/s11948-016-9783-0
Can Artificial Intelligences Suffer from Mental Illness? A Philosophical Matter to Consider

Abstract: The potential for artificial intelligences and robotics in achieving the capacity of consciousness, sentience and rationality offers the prospect that these agents have minds. If so, then there may be a potential for these minds to become dysfunctional, or for artificial intelligences and robots to suffer from mental illness. The existence of artificially intelligent psychopathology can be interpreted through the philosophical perspectives of mental illness. This offers new insights into what it means to have …

Cited by 15 publications (17 citation statements)
References 13 publications
“…Professional groups may also lobby against the use of AI and autonomous robotic surgical devices (as seen in the case of J&J's FDA approved automated sedation system; this was intended to replace anaesthesiologists). Furthermore, it may potentially be too early to discuss whether robots can have psychiatric problems, as currently that question does not apply to anything on the horizon for surgical robots.…”
Section: Results
“…But we have already noted that synthetic biology methods might be used to generate post-embryonic human cerebral and neural tissue organoids that likewise present complete and active pain pathways, possibly even in childhood or adult forms. Beyond sentience and pain, ethical concerns have been raised in both biological (Greely et al., 2007) and non-biological (Ashrafian, 2016; Coeckelbergh, 2010) contexts with the possibility of generating entities with consciousness and self-awareness, and these concerns could potentially also be triggered by very sophisticated brain organoids. Moral concerns could also arise with other types of human entities, as illustrated by our ESATE experience above.…”
Section: Interfaces with Other Ethical Issues and Ethics Processes
“…Yet AI has no interests beyond the completion of these actions, and so nothing in this process can be said to be better for the AI itself than anything else from an imagined AI point of view. We pass over the question of whether AI could exhibit something like irrationality or mental illness, though perhaps reflection on this idea could aid in understanding HI and the potential disorders of humans in general (Ashrafian 2017).…”
Section: Action, Reason and Norms: Comparing HI and AI