2020
DOI: 10.1007/s11948-020-00244-y
Computational Goals, Values and Decision-Making

Abstract: Considering the popular framing of an artificial intelligence as a rational agent that always seeks to maximise its expected utility, referred to as its goal, one of the features attributed to such rational agents is that they will never select an action which will change their goal. Therefore, if such an agent is to be friendly towards humanity, one argument goes, we must understand how to specify this friendliness in terms of a utility function. Wolfhart Totschnig (Fully Autonomous AI, Science and Engineerin…
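The expected-utility framing the abstract describes can be illustrated with a minimal sketch (not from the paper; all names and the toy numbers are illustrative): an agent scores each action by the probability-weighted utility of its outcomes and picks the argmax. Under a fixed utility function, an action that undermines the current goal simply scores worse, which is the intuition behind the claim that such an agent never selects a goal-changing action.

```python
# Minimal sketch (illustrative, not from the paper) of an
# expected-utility-maximising agent with a fixed utility function.

def expected_utility(action, outcomes, utility):
    """Sum of utility(outcome) weighted by its probability under `action`."""
    return sum(p * utility(o) for o, p in outcomes[action].items())

def choose(actions, outcomes, utility):
    """Return the action with the highest expected utility."""
    return max(actions, key=lambda a: expected_utility(a, outcomes, utility))

# Toy outcome distributions per action (hypothetical numbers).
outcomes = {
    "keep_goal":   {"goal_achieved": 0.9, "goal_failed": 0.1},
    "change_goal": {"goal_achieved": 0.1, "goal_failed": 0.9},
}
utility = lambda o: 1.0 if o == "goal_achieved" else 0.0

best = choose(outcomes.keys(), outcomes, utility)
# Scored under the *current* utility function, "change_goal" is dominated,
# so the maximiser selects "keep_goal".
```

The sketch makes the assumption doing the work explicit: goal stability follows only because the candidate action is evaluated by the current utility function, which is the framing the cited debate is about.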

Cited by 6 publications (7 citation statements)
References 11 publications
“…This paper gathers some preliminary ideas, and aims to serve as a "call-to-arms" to the community to examine edge cases in utility-based reasoning and ensure that they do not lead to paradoxical or undesirable behaviour. Significant avenues of future work remain open, including the integration of uncertainty into such utility-based systems, and we believe that philosophical work dealing with decision theory [16], as well as work on computational ethics [17], can serve to provide additional ideas to deal with the problem highlighted in this paper.…”
Section: Discussion (mentioning)
confidence: 99%
“…Equivalents to utility functions 11 used in AI systems include value functions, objective functions, loss functions, reward functions (especially in Reinforcement Learning), and preference orderings (Eckersley, 2019). The concepts of utility function and goal are often used interchangeably in the AI literature (Dennis, 2020). Where loss functions (gradients) are used, which is the case when some objective function is minimized (for e.g.…”
Section: Examples In The Field Of AI (mentioning)
confidence: 99%
“…Bounded rationality is not only applicable to humans, but also to AI agents, even though they may have vastly better computational abilities (Dennis, 2020). As Wagner (2020, p.114) points out, "whilst the new species of 'machina economicus' [...] behaves more economic than man, it too is faced with bounded rationality".…”
Section: Bounded Rationality (mentioning)
confidence: 99%