2022
DOI: 10.1007/s44206-022-00022-2

Continuous Auditing of Artificial Intelligence: a Conceptualization and Assessment of Tools and Frameworks

Abstract: Artificial intelligence (AI), which refers to both a research field and a set of technologies, is rapidly growing and has already spread to application areas ranging from policing to healthcare and transport. The increasing AI capabilities bring novel risks and potential harms to individuals and societies, which auditing of AI seeks to address. However, traditional periodic or cyclical auditing is challenged by the learning and adaptive nature of AI systems. Meanwhile, continuous auditing (CA) has been discuss…

Cited by 24 publications (16 citation statements)
References 66 publications
“…The introduction of AI and machine learning (ML) in auditing is still in its early stages, presenting opportunities for internal auditors to enhance audit procedures and professional skepticism (Puthukulam et al., 2021). Continuous auditing of AI systems is particularly relevant for internal audit functions, offering tools and frameworks to assess AI systems effectively (Minkkinen et al., 2022). Internal audits, executed by dedicated teams within organizations, can inform decisions on AI technology development, especially when risks outweigh benefits (Raji, 2020).…”
Section: F. AI in Internal Audit and Risk Assessment
confidence: 99%
“…Furthermore, the study by Landers & Behrend (2023) raises concerns about fairness and bias in AI-based decision tools, highlighting the importance of auditing AI auditors to evaluate and address these issues. Minkkinen et al. (2022) discuss the concept of continuous auditing of AI as a means to ensure accountability and mitigate risks associated with AI systems. These studies underscore the significance of auditing AI systems to uphold standards, address biases, and ensure accountability in the audit process.…”
Section: Introduction
confidence: 99%
“…For example, AI algorithms can be used in the assessment of job candidates, and markets are growing around algorithmic recruitment. Regulation requires job candidates to be treated fairly; this entails auditing hiring algorithms for biases, which has, in turn, led to a nascent algorithmic auditing industry [38], [39]. Similar questions of fairness and transparency could be found in customer service chatbots and many other examples.…”
Section: B. The "Easy Problem" of AI Governance
confidence: 99%
“…To ensure their applicability in practice, these principles must also be enforceable through governance (Cath, 2018; Minkkinen et al., 2021; Morley et al., 2020). Echoing these issues, there has been increasing emphasis on AI governance (AIG) in academia (Barn, 2020; Koniakou, 2023; Laato et al., 2022a; Mäntymäki et al., 2022a, b; Minkkinen et al., 2021, 2022a; Papagiannidis et al., 2023; Seppälä et al., 2021; Zimmer et al., 2022) and industry (Deloitte, 2021; KPMG, 2021; Statista, 2021a). On the industry side, a recent report suggests that a significant majority (87%) of IT decision-makers believe in the need to regulate AI-driven technologies, with a clear focus on ethics and corporate social responsibility (KPMG, 2021).…”
Section: Introduction
confidence: 99%
“…Echoing these issues, there has been increasing emphasis on AI governance (AIG) in academia (Barn, 2020; Koniakou, 2023; Laato et al., 2022a; Mäntymäki et al., 2022a, b; Minkkinen et al., 2021, 2022a, 2023; Papagiannidis et al., 2023; Seppälä et al.…
Section: Introduction
confidence: 99%