2019
DOI: 10.1007/978-3-030-27005-6_2

Abstract: In light of the ongoing progress of research on artificially intelligent systems exhibiting a steadily increasing problem-solving ability, the identification of practicable solutions to the value alignment problem in AGI Safety is becoming a matter of urgency. In this context, one preeminent challenge that has been addressed by multiple researchers is the adequate formulation of utility functions or equivalents reliably capturing human ethical conceptions. However, the specification of suitable utility functions…

Cited by 8 publications (12 citation statements)
References 35 publications (50 reference statements)
“…Thereby, for safety reasons, the utility functions can and should include context-sensitive and perceiver-dependent elements as integrated e.g. in augmented utilitarianism [13]. Fourth, updates of law are solely reflected in the ethical goal functions which leads to a more flexible and controllable task.…”
Section: Disentanglement of Responsibilities
Confidence: 99%
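The context-sensitive, perceiver-dependent utility elements mentioned in the statement above can be illustrated with a minimal sketch. All names below (PerceiverProfile, Situation, augmented_utility) are hypothetical and not taken from the cited works; the sketch only shows the signature shape: utility depends on the perceiver and the situational context, not on an outcome alone.

```python
from dataclasses import dataclass

@dataclass
class PerceiverProfile:
    """Hypothetical stand-in for a perceiver's ethical preferences."""
    harm_weight: float
    fairness_weight: float

@dataclass
class Situation:
    """Hypothetical simulated situation: an outcome together with its context."""
    expected_harm: float    # estimated harm in the simulated situation
    fairness_score: float   # estimated fairness in this context
    context_tag: str        # coarse label for the surrounding context

def augmented_utility(situation: Situation, perceiver: PerceiverProfile) -> float:
    """Perceiver-dependent, context-sensitive utility (illustrative only)."""
    u = (perceiver.fairness_weight * situation.fairness_score
         - perceiver.harm_weight * situation.expected_harm)
    # Context-sensitivity sketched as a simple modifier per context tag.
    if situation.context_tag == "emergency":
        u -= 0.5 * situation.expected_harm
    return u
```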
“…Fifth, a blockchain approach to ensure the security and transparency of the goal functions themselves and all updates on these functions might be recommendable. Crucially, in order to avoid formulations of an ethical goal function with safety-critical side effects for human entities (including implications related to impossibility theorems for consequentialist frameworks [152]), it is recommendable to assign a type of perceiver-dependent and context-sensitive utility to simulations of situations instead of only to the future outcome of actions [14,13]. In the long-term, we believe that scientific research with the goal to integrate the first-person perspective of society on perceived well-being within an ethical goal function at the core of the presented socio-technological feedback loop might represent one substantial element needed to promote human flourishing in the most efficient possible way aided by the problem solving ability of AI.…”
Section: Conclusion and Future Prospects
Confidence: 99%
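The recommendation quoted above, to assign utility to simulations of situations rather than only to the final outcomes of actions, could look roughly like the following sketch, building on the hypothetical PerceiverProfile, Situation, and augmented_utility definitions shown earlier. The simulate_situations parameter is an assumption standing in for whatever forward model is actually used; nothing here is taken from the cited works.

```python
from typing import Callable, Iterable

def rank_actions(
    actions: Iterable[str],
    simulate_situations: Callable[[str], list],  # action -> list of Situation
    perceiver: PerceiverProfile,
) -> list:
    """Rank actions by mean augmented utility over their simulated situations,
    rather than by a single outcome scalar (illustrative only)."""
    scored = []
    for action in actions:
        situations = simulate_situations(action)
        if not situations:
            continue  # skip actions the forward model cannot simulate
        mean_u = sum(augmented_utility(s, perceiver) for s in situations) / len(situations)
        scored.append((action, mean_u))
    # Highest perceived utility first.
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```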