2017
DOI: 10.1007/s10506-017-9207-8

On the legal responsibility of autonomous machines

Abstract: The paper concerns the problem of the legal responsibility of autonomous machines. In our opinion it boils down to the question of whether such machines can be seen as real agents through the prism of folk-psychology. We argue that autonomous machines cannot be granted the status of legal agents. Although this is quite possible from a purely technical point of view, since the law is a conventional tool of regulating social interactions and as such can accommodate various legislative constructs, including legal r…

Cited by 46 publications (20 citation statements)
References 24 publications (20 reference statements)
“…In support of this thesis we cite some authors who describe an autonomous machine as "reactive (it responds in a timely fashion to changes in the environment), self-controlling (i.e. it exercises control over its own actions and is not directly controlled by any other agent), goal-oriented (it does not simply act in response to the environment), and temporally continuous (it is a continuously running process)" (Brożek, 2017). Thus, in theory, this implies that the degree of a person's culpability, when an artificial intelligence system is the tool of a crime, may be affected by the cognitive and volitional characteristics of the AI.…”
Section: Culpability In Criminal Law
confidence: 85%
“…However, certain commentators have argued that, even if it were possible to do so, legal rights should not be given to robots [35,63]. Brożek and Jakubiec have argued that legal responsibility should not be ascribed to autonomous machines [64]. The authors observe that any such law could only be "law in books" and could not be used in real life (i.e., could not be "law in action").…”
Section: A Ban On Public Violence Against Robots
confidence: 99%
“…73 Although such agents are "capable of multiple and autonomous intervention [sic] in the legal world," 74 such technology is very different from the androids we see in much science fiction 75 and, it is argued, means that current robots and AI should not be seen as legal agents. 76 The point is made by Bartosz Brożek and Marek Jakubiec, who introduce a character known as Mr Y. Mr Y is an android programmed to replicate human behavior, and the authors conclude that he is undeserving of moral concern because he is nothing more than any other piece of technology. 77 They argue that the mere replication of human behavior lacks an element of "folk-psychology": an internalization by a population that the autonomous machine is deserving of recognition as legally responsible, based on characteristics the society deems relevant regarding beliefs, goals, intentions, and desires.…”
Section: Applying the Gewirthian Framework
confidence: 99%