2021
DOI: 10.3390/philosophies6010006
Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions

Abstract: In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing…

Cited by 11 publications (6 citation statements)
References 141 publications (211 reference statements)
“…Fourth, various stakeholders have worked to help create solutions for victims, primarily focusing on triage after victimization has occurred. It will be necessary for these help-seeking resources (whether helplines, online forms, or chatbots [25,50]) to be aware of this new type of image-based sexual abuse and up-to-date with the types of redress that victims may seek [4]. Fifth, where tools do have a payment or sign-up flow, users could be required as part of the flow to ingest and acknowledge information about the harms and potential consequences of NSII creation. As practitioners and designers think about future considerations for intervention, it may be helpful to break this issue down into relevant behaviors as seen in Figure 7 and consider intervention options (which should be considered complementary, as opposed to solely sufficient) at each point in the process.…”
Section: Design Considerations
confidence: 99%
“…While deepfakes can be used in beneficial ways for accessibility and creativity [19,26], abuse potential has increased in recent years as the technology has advanced in sophistication and availability [12,34,53,80]. Deepfakes can be weaponized and used for malicious purposes, including financial fraud, disinformation dissemination, cyberbullying, and sexual extortion ("sextortion") [4,26].…”
Section: Introduction
confidence: 99%
“…Indeed, an often overlooked modality is deepfake text [14], which could be of relevance in multiple SEA AI attack schemes. In practice, while deepfake technology has already been abused for impersonation and cybercrime [15], [16], sextortion and non-consensual voyeurism [17], [18], and disinformation and espionage [19], [20], deepfakes in VR [21] may add depth to existing threat vectors while offering a novel field of affordances for malicious actors — from synthetic non-consensual VR deepfakes [22], [23] to immersive disinformation schemes [24] that could even be extended to educational or scientific settings [25]. Overall, at first sight, it seems that epistemic security considerations caution us against underestimating present-day AI and VR when it comes to answering the following question: does the use and exploitation of specific AI and VR technologies risk harming our own processes of knowledge creation and reasoning?…”
Section: Motivation
confidence: 99%
“…"epistemic anarchy" [54], "postepistemic world" [4] and "post-truth era" [55]. Furthermore, the deepfake threat landscape engendered a mechanism called automated disconcertion [17] -the epistemic confusion that arises merely by the possibility of malicious deepfakes. In the light of the aforesaid, it is easily conceivable that adversaries could exploit the contemporary fragile epistemic ecosystem and instrumentalize automated disconcertion.…”
Section: B Deepfakes For Cyborgnetic Creativity Augmentationmentioning
confidence: 99%