2020
DOI: 10.48550/arxiv.2012.02592
Preprint
Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

Abstract: In recent years, AI safety has gained international recognition in the light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing…

Cited by 1 publication (4 citation statements). References 121 publications (167 reference statements).
“…VR could thus represent a suitable awareness-raising tool for future severe AI(VR) safety risks, such as by facilitating valuable retrospective counterfactual analyses [72]. Fourth, a generic recommendation that may already be applicable nowadays is to deliberately turn the confirmation bias [73] automatically reinforced via AI-empowered social media [74] against itself [57]. For example, one could create social media spaces (subsuming future social VR) that reinforce critical thinking, life-long learning, and criticism [57], which could be deliberately fueled via artificial bots (or non-player characters in VR), steering attention towards those patterns.…”
Section: Future Work
confidence: 99%
“…Fourth, a generic recommendation that may already be applicable nowadays is to deliberately turn the confirmation bias [73] automatically reinforced via AI-empowered social media [74] against itself [57]. For example, one could create social media spaces (subsuming future social VR) that reinforce critical thinking, life-long learning, and criticism [57], which could be deliberately fueled via artificial bots (or non-player characters in VR), steering attention towards those patterns. Even if immersive falsehood would often not be resolved quickly, (AI-aided) social peer pressure reinforcing critical thinking and a focus on invariant good explanations could represent a necessarily incomplete, but principled defense.…”
Section: Future Work
confidence: 99%