2023
DOI: 10.1017/dap.2023.8
Think about the stakeholders first! Toward an algorithmic transparency playbook for regulatory compliance

Abstract: Increasingly, laws are being proposed and passed by governments around the world to regulate artificial intelligence (AI) systems deployed in the public and private sectors. Many of these regulations address the transparency of AI systems and related issues, such as an individual's right to an explanation of how an AI system makes a decision that affects them. Yet almost all AI governance documents to date have a significant drawback: they have focused on what to do (or wha…

Cited by 12 publications (3 citation statements)
References 78 publications
“…On non-high-risk systems, only limited transparency obligations are imposed, while for high-risk systems, many restrictions are imposed on quality, documentation, traceability, transparency, human oversight, accuracy, and robustness. Bell et al [46] state that transparency is left to the technologists to achieve and propose a stakeholder-first approach that assists technologists in designing transparent, regulatory-compliant systems, which is a useful initiative. Besides GDPR, there are other privacy laws for which XAI might be an interesting development.…”
Section: Legal and Regulatory Compliance
confidence: 99%
“…Current scholars are unanimous in calling for expanding the concept of SCT by disclosing non-operational information to external stakeholders rather than supply chain partners (Schnackenberg & Tomlinson, 2016). SCT shifts the focus from internal supply chain partners to external stakeholders such as consumers, governments, and non-governmental organizations (Bell et al, 2023). Sodhi and Tang (2019) claim SCT discloses information about upstream firms and products to the public, including consumers and investors.…”
Section: Supply Chain Transparency
confidence: 99%
“…Recent works on explainable ML methods for policy development (Amarasinghe et al, 2023) underline the importance of contextualizing ML explanations and highlight the limitations of existing XAI techniques. Furthermore, they highlight the importance of stakeholder’s engagement and the need to prioritize policymakers’ requirements rather than relying on technology experts to produce explanations for ML-based policies (Bell et al, 2023). Moreover, there are no agreed and proven ways for selecting models that balance performance and explainability in line with the requirements of policymakers. Regulatory compliance : ML-based systems for public policymaking must comply with emerging regulations in AI, such as the AI Act of the European Parliament and Council of Europe (European Commission, 2021).…”
Section: Introduction
confidence: 99%