2021
DOI: 10.1007/s43681-020-00037-w
Bridging the gap: the case for an ‘Incompletely Theorized Agreement’ on AI policy

Abstract: Recent progress in artificial intelligence (AI) raises a wide array of ethical and societal concerns. Accordingly, an appropriate policy approach is urgently needed. While there has been a wave of scholarship in this field, the research community at times appears divided amongst those who emphasize ‘near-term’ concerns and those focusing on ‘long-term’ concerns and corresponding policy measures. In this paper, we seek to examine this alleged ‘gap’, with a view to understanding the practical space for inter-com…

Cited by 17 publications (11 citation statements) | References 72 publications
“…Other recent work has covered the topic from various angles, such as the role of international standards [1], national AI strategies [26] or ethics guidelines [21,27]. Valuable contributions have also been made theorizing about the governance design of international agreements [16,28].…”
Section: Literature Review (mentioning)
confidence: 99%
“…If implemented, governments could then focus on regulating the regulators instead of the entire AI industry, and AI regulation would benefit from the speed of private innovation. Agile regulations might seem objectionable from ethical perspectives that assert regulations should perfectly reflect normative moral principles, but society will make faster progress towards Ethical AI if we have imperfect AI public policies in place than if we have no Ethical AI policy incentives or regulations at all (Stix and Maas, 2021). Successful Ethical AI policy will not only make it more likely that AI product teams create AIs that align with our collective moral principles, it will help society trust the AIs that can truly make our lives better.…”
Section: Investment Area 4: Agile Public Policy and Regulation (mentioning)
confidence: 99%
“…However, even experts struggle to agree [24, 38] and given the ubiquitous nature of AI it is difficult, if not impossible, to forecast now what AI governance frameworks might be needed in the near or far future. In addition, existing institutions, their policy texts and legal frameworks are historically often drawn from in moments of unexpected change [50], because it can be quicker to react with what exists than develop something new.…”
Section: Motivation, Urgency and Limitations (mentioning)
confidence: 99%
“…Different institutional set-ups will yield different path dependencies. Those that do get put into motion over the coming years are likely to be especially critical (Stix and Maas 2021) because they form the lens through which governments will be able to interact with progressing AI technologies and enact appropriate AI governance measures. This wouldn't merit outsized concern, if it were possible to clearly forecast AI progress across various sectors and if there was common expert consensus about the development path of AI over the coming decades.…”
(mentioning)
confidence: 99%