Artificial Intelligence Safety and Security 2018
DOI: 10.1201/9781351251389-3
The Basic AI Drives

Cited by 61 publications (74 citation statements)
References 9 publications

“…For example, many reinforcement learning algorithms can learn by creating their own subgoals [167,184]. In other words, they are not simply executing subgoals determined by their programmers, but rather discover them independently [139,167]. In 2013, researchers at DeepMind Technologies developed an AI, based on a reinforcement learning algorithm, which learnt to play several Atari video games and even outcompeted the best human players in some of them [128,167].…”
Section: The "They Only Do What They Have Been Programmed To Do" Fallacy
confidence: 99%
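
The claim in the quoted passage, that reinforcement learners come to pursue intermediate targets nobody programmed, can be made concrete with a toy experiment. The sketch below is minimal tabular Q-learning; the chain environment, the "key" and "door" states, and all constants are invented for illustration and are unrelated to DeepMind's Atari system. Only reaching the door while holding the key is ever rewarded, yet the learned values end up favouring the initial detour away from the door, toward the key: an instrumental subgoal the agent discovered on its own.

```python
# Toy illustration (not DeepMind's DQN): tabular Q-learning on a 6-state
# chain. Key at position 0, door at position 5, start at position 3.
# No subgoal is ever specified; fetching the key emerges as a stepping
# stone purely because it is instrumental to the final reward.
import random

N = 6                      # positions 0..5
ACTIONS = (-1, +1)         # move left / move right

def step(pos, has_key, action):
    """One transition; reward is given only for opening the door with the key."""
    pos = max(0, min(N - 1, pos + action))
    if pos == 0:
        has_key = True                      # picking up the key is never rewarded
    if pos == N - 1 and has_key:
        return (pos, has_key), 1.0, True    # door opens: reward, episode ends
    return (pos, has_key), 0.0, False

Q = {}                                      # Q[((pos, has_key), action)] -> estimate
def q(s, a):
    return Q.get((s, a), 0.0)

alpha, gamma, eps = 0.5, 0.95, 0.2
for _ in range(2000):
    s, done = (3, False), False
    for _ in range(50):                     # step limit per episode
        a = (random.choice(ACTIONS) if random.random() < eps
             else max(ACTIONS, key=lambda x: q(s, x)))
        s2, r, done = step(s[0], s[1], a)
        target = r + (0.0 if done else gamma * max(q(s2, x) for x in ACTIONS))
        Q[(s, a)] = q(s, a) + alpha * (target - q(s, a))
        s = s2
        if done:
            break

# From the start without the key, moving LEFT (away from the door, toward
# the key) has learned the higher value: a subgoal discovered, not programmed.
print(q((3, False), -1), ">", q((3, False), +1))
```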
“…Whether we count a person as morally accountable or not affects how we expect to be treated by them and how we treat them in return; for judicial institutions to function, we must assess legal accountability, which in turn is partially grounded in moral accountability. We seem to have an inclination to regard AI as a moral agent [7,177], although people do not consider AIs appropriate agents for moral decisions (see [115,139]). Usually, when encountering intelligent and goal-directed behavior, the human mind attributes core features of agency to the target, that is, the presence of a self and consciousness [181].…”
Section: Artificial Intelligence: Morally Relevant Even If Non-conscious
confidence: 99%
“…If offered a pill that made him want to kill people, Gandhi would refuse to take it: Gandhi knows that if he wants to kill people, he will probably kill people, and the current version of Gandhi does not want to kill. More generally, it seems likely that most self-modifying minds will naturally have stable utility functions, which implies that an initial choice of mind design can have lasting effects (Omohundro 2008).…”
Section: Superintelligence
confidence: 99%
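
The stability argument in this excerpt lends itself to a small worked example. The sketch below is my own illustration of the reasoning, not code from Omohundro 2008: a utility-maximising agent evaluates any proposed rewrite of its own values with its current utility function, so the "pill" is rejected. The outcome set and both utility functions are invented for the demonstration.

```python
# Why a rational agent resists changes to its values: it scores a proposed
# self-modification with its CURRENT utility function, so a rewrite of its
# values is accepted only if the present values approve the behaviour the
# successor would produce.
OUTCOMES = ("people_live", "people_die")

def current_utility(outcome):              # Gandhi's present values
    return 1.0 if outcome == "people_live" else -1.0

def modified_utility(outcome):             # values after taking the pill
    return 1.0 if outcome == "people_die" else -1.0

def behaviour(utility):
    """A utility-maximiser acts to realise whichever outcome it values most."""
    return max(OUTCOMES, key=utility)

def accept_modification(new_utility):
    """Judge the successor by what the CURRENT agent wants, not the successor."""
    status_quo = current_utility(behaviour(current_utility))
    after_pill = current_utility(behaviour(new_utility))
    return after_pill > status_quo

print(accept_modification(modified_utility))   # False: the pill is refused
```

Because the successor is always judged by the current values rather than its own, value-preserving designs dominate, which is what gives the initial choice of mind design its lasting effect.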
“…We thus turn to the scenario in which artificially intelligent entities develop separately from regular (or unenhanced) humans. One likely characteristic of any sufficiently intelligent entity, no matter what final objectives are programmed into it by evolution or by its creator, is that it will act by pursuing intermediate objectives or "basic drives" that are instrumental for any final objective (Omohundro, 2008). These intermediate objectives include self-preservation, self-improvement and resource accumulation, which all make it likelier and easier for the entity to achieve its final objectives.…”
Section: Second Scenario: Artificially Intelligent Agents and The Return Of Malthus
confidence: 99%
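
A toy numerical model can illustrate why such drives are instrumental for any final objective. In the sketch below, the success-probability function, the goal names, and all numbers are my own assumptions, not a model from the cited paper; the point is only that the same three intermediate moves raise the probability of success by the same amount regardless of which final goal is plugged in.

```python
# Hypothetical illustration of the "basic drives" claim: whatever the final
# objective, success probability here increases with survival odds,
# capability, and resources, so the same intermediate moves pay off for
# every goal.
GOALS = ("prove_theorems", "make_paperclips", "cure_disease")

def p_success(survival, capability, resources):
    # Invented success model: each factor contributes independently.
    return survival * min(1.0, 0.1 * capability) * min(1.0, 0.1 * resources)

baseline = dict(survival=0.5, capability=3.0, resources=3.0)
drives = {
    "self-preservation":    dict(baseline, survival=0.9),
    "self-improvement":     dict(baseline, capability=6.0),
    "resource acquisition": dict(baseline, resources=6.0),
}

for goal in GOALS:
    for name, state in drives.items():
        gain = p_success(**state) - p_success(**baseline)
        # The gain is identical for every final goal: the drive is
        # instrumental, not goal-specific.
        print(f"{goal:16s} {name:20s} +{gain:.3f}")
```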