2013 46th Hawaii International Conference on System Sciences
DOI: 10.1109/hicss.2013.149

Corporate and Artificial Moral Agency

Abstract: The paper considers the implications of the Corporate Moral Agency debate for the notion of artificial moral agency and the general intelligence project. A distinction is drawn between meta-arguments and object-level arguments, and the implications of the arguments within each category are indicated. The "metaphor", "mutuality", and "political" arguments are then discussed further.

Cited by 3 publications (2 citation statements)
References 42 publications
“…Finally, whatever arguments are wielded in the above discussions, the normative approach to AMA opens up a "demarcation problem", akin to the one well-known from debates on the moral status of non-human animals and other natural entities (see Samuelsson 2010; Singer 2011; Warren 1997). The problem regarding AMA is that any normative criterion we formulate to exclude artificial entities from practices and interactions assumed to imply moral agency, may also exclude some humans which should not be so excluded.…”
Section: The AMA Debate: A Diagnosis and a Remedy
Confidence: 99%
“…Several authors have suggested that responsibility gaps can be handled by distributing responsibility for the acts of an AMA across all those human moral agents involved who are also capable of moral responsibility, such as designers, users, investors and other contributors (Adams 2001; Champagne and Tonkens 2013; Singer 2013). Champagne and Tonkens claim that this solution would depend on a human moral agent agreeing to take on this responsibility; an idea developed further is the notion of such voluntarily undertaken responsibility as continuously negotiable between the involved human parties (Lokhorst and van den Hoven 2012; Champagne and Tonkens 2013; Noorman 2014; Schulzke 2013).…”
Confidence: 99%