2023
DOI: 10.2139/ssrn.4377481
Institutionalised Distrust and Human Oversight of Artificial Intelligence: Toward a Democratic Design of AI Governance under the European Union AI Act

Cited by 5 publications (4 citation statements)
References 26 publications
“…Despite the promises of AI-centered solutions for automation and efficiency, the irreplaceable human element, characterized by subtlety and subjective judgment, is essential in verifying information accuracy and contextual relevance (Ninaus & Sailer, 2022). This human oversight is crucial for in-depth analysis and as a means to improve and fine-tune AI systems through feedback, ensuring their continuous improvement (Laux, 2023). The theoretical framework of our study serves as a foundation for future research, emphasizing the multidimensional approach to content verification, combining technology, education, and human discernment.…”
Section: What Are The Salient Takeaways Of This Study?
confidence: 99%
“…Emerging risk management frameworks, mandatory laws, and standards, including the European Union's AI Act (EU AI Act) 8 and the National Institute of Standards and Technology's AI Risk Management Framework (NIST AI RMF) 9 , have been discussed for different purposes such as AI risk management and information security management [16]. A review by Barraza de la Paz et al [17] presents an overview of emerging versions of the NIST Cyber Security Framework (CSF), ISO/IEC 27001:2022, and MAGERIT frameworks.…”
Section: Background And Related Work
confidence: 99%
“…14 does not provide much information on what will make human oversight effective and meaningful. Note that comprehensive human oversight involves the supervision of the AI system during the entire lifecycle; it should also ensure mandatory transparency measures-none of which are mentioned in the AIA, leading to a considerable lack of clarity and certainty (Laux, 2023). 31 In addition, the AIA fails to clarify the notion of algorithmic bias and fairness.…”
Section: Other Points Of Criticism
confidence: 99%
“…The previous sections have analyzed, in particular, the AIA's… 31 The key problems with human overseers include over-reliance on algorithmic systems (i.e., automation bias) or under-reliance on them (i.e., algorithm aversion), since humans often misjudge the accuracy of algorithmic predictions; a lack of competence, expertise, and training (i.e., humans lacking the skills required to oversee AI systems effectively); and false incentives (e.g., financial or commercial self-interest and possible capture by industry interests). To address these challenges, the following design principles might help:

1. Justification and legitimacy (i.e., proof of competence, training, and authority);
2. Periodical mandates (i.e., a rotation system for auditors fosters impartiality and shields them from capture by AI developers' interests);
3. Collective decisions (i.e., a team of diverse overseers improves decision-making);
4. Limited competence of institutions (i.e., checks and balances and separation of powers; e.g., second-degree overseers provide checks on first-degree overseers);
5. Justiciability and accountability (i.e., establishing appeals procedures and liability claims against human overseers);
6. Transparency (i.e., explainable AI and algorithmic transparency [e.g., the design and operation of human oversight practices should be made public], as well as procedure and performance transparency) (Laux, 2023).…”
Section: Key Takeaways
confidence: 99%