2021
DOI: 10.5771/2747-5182-2021-1-86
Experimental Regulations for AI: Sandboxes for Morals and Mores

Abstract: Recent EU legislative and policy initiatives aim to offer flexible, innovation-friendly, and future-proof regulatory frameworks. Key examples are the EU Coordinated Plan on AI and the recently published EU AI Regulation Proposal, which refer to the importance of experimenting with regulatory sandboxes so as to balance innovation in AI against its potential risks. Originally developed in the Fintech sector, regulatory sandboxes create a test bed for a selected number of innovative projects, by waiving otherwise …

Cited by 4 publications (3 citation statements)
References 21 publications
“…Similar to LIAISON, other approaches have also defended the importance of harnessing experimentation as a source of information to devise better regulations (Weng et al, 2015; Shimpo, 2018; Calleja et al, 2022). Moreover, the prospect of regulatory sandboxes for particular applications has raised vivid discussions, particularly in Europe (European Parliament, 2022; Ranchordás, 2021; Truby et al, 2022). LIAISON’s model is compatible with those processes, as it also aims at developing models to align robot development and regulation.…”
Section: Aligning Robot Development and Regulation: A New Model
confidence: 99%
“…Another EU task force focused on the future of AI is the Ad-Hoc Working Group on Artificial Intelligence Cybersecurity at the EU Agency for Cybersecurity (ENISA), which recently issued its first report on AI Cybersecurity Challenges (Perrault et al, 2019). The document has developed a taxonomy of AI threats and allocated them into the following blocks: nefarious activity, eavesdropping, physical attacks, unintentional damage, failures, outages,…”
Section: Research
confidence: 99%
“…for service-level agreement, disaster, and legal issues. The latter particularly mentions ‘corruption of data indexes, profiling of end-users, vendor lock-in, weak requirements analysis, lack of data governance policies, lack of data protection compliance of third parties, SLA breach’ (Perrault et al, 2019).…”
Section: Research
confidence: 99%