Artificial Intelligence (AI) systems are growing in significance within software services. Unfortunately, these systems are not flawless. Their faults, failures, and other systemic issues have underscored the urgency of considering ethical standards and practices in AI engineering. Despite the growing number of studies in AI ethics, comparatively little attention has been paid to how ethical issues can be mitigated in software engineering (SE) practice. There is currently a lack of understanding regarding useful tools that can help companies transform high-level AI ethics guidelines into the actual workflow of developers. In this paper, we explore the idea of using user stories to transform abstract ethical requirements into tangible outcomes in Agile software development. We tested this idea by studying master's-level student projects (15 teams) developing web applications for a real industrial client over the course of five iterations. These projects produced 250+ user stories, which we analyzed for the purposes of this paper. The teams were divided into two groups: half worked using the ECCOLA method for AI ethics in SE, while the other half served as a control group against which the effectiveness of ECCOLA was compared. Both groups were tasked with writing user stories to formulate customer needs into system requirements. Based on the data, we discuss the effectiveness of ECCOLA and the Primary Empirical Contributions (PECs) of formulating ethical user stories in Agile development.
Background: The continuous development of artificial intelligence (AI) and its increasing rate of adoption by software startups call for governance measures to be implemented at the design and development stages to help mitigate AI governance concerns. Most AI ethical design and development tools rely mainly on AI ethics principles as the primary governance and regulatory instrument for developing ethical AI. However, AI ethics principles have been found insufficient for AI governance due to a lack of information robustness, creating the need for additional governance measures. Adaptive governance has been proposed to combine established governance practices with AI ethics principles for improved information robustness and, subsequently, improved AI governance. Our study explores adaptive governance as a means of improving the information robustness of AI ethical design and development tools. We combine information governance practices with AI ethics principles using ECCOLA, a tool for ethical AI software development at the early developmental stages. Aim: To explore how ECCOLA's robustness can be improved by adapting it with GARP® information governance (IG) practices. Methods: We use ECCOLA as a case study and critically analyze its AI ethics principles against the information governance practices of the Generally Accepted Recordkeeping Principles (GARP®). Results: We found that ECCOLA's robustness can be improved by adapting it with the information governance practices of retention and disposal. Conclusions: We propose extending ECCOLA with a new governance theme and card, #21.
The public and academic discussion on Artificial Intelligence (AI) ethics is accelerating, and the general public is becoming more aware of AI ethics issues in these systems, such as data privacy. To guide the ethical development of AI systems, governmental and institutional actors, as well as companies, have drafted various guidelines for ethical AI. Though these guidelines are becoming increasingly common, they have been criticized for their lack of impact on industrial practice. There seems to be a gap between research and practice in the area, though its exact nature remains unknown. In this paper, we present a gap analysis of the current state of the art by comparing the practices of 39 companies that work with AI systems to the seven key requirements for trustworthy AI presented in the "Ethics Guidelines for Trustworthy Artificial Intelligence". The key finding of this paper is that there is indeed a notable gap between AI ethics guidelines and practice. In particular, the novel requirements for software development, the requirements of societal and environmental well-being, and those of diversity, non-discrimination, and fairness were not tackled by companies.
CCS CONCEPTS: • Software and its engineering → Software development process management; • Computing methodologies → Artificial intelligence; • Social and professional topics → Codes of ethics.