While AI algorithms are now pervasive in our daily lives, they essentially deliver non-critical services, i.e., services whose failures remain socially and economically acceptable. To introduce these algorithms into critical systems, new engineering practices must be defined to establish justified trust in the system's capability to deliver the intended services. In this paper, we give an overview of the approach we have put in place to reach this goal within the framework of the French Confiance.ai program. Based on the needs of the program's industrial partners, we propose a model-based analysis framework capturing the two dimensions of the problem: one related to the development and operation of the system, and one related to trust in the system.
This workshop focuses on methods, tools, and techniques to design and develop Trustworthy Autonomous Systems (TAS). TAS is an emerging area of interactive systems that is expanding the scope and remit of engineering. At every scale, making autonomous systems trustworthy is a collective task that requires a multidisciplinary team working together to understand trust design requirements and provide effective and creative solutions. TAS introduce unique challenges in the design and development of interactive systems: they may have the capacity to learn and evolve, they may need to make decisions or take actions independently with little or no human oversight, and they will be deployed in widely differing cultural and regulatory environments. TAS engineers therefore need robust design methods, tools, and techniques to meet diverse TAS requirements and objectives. Our prior research argued that TAS engineers should develop skills in three core areas: soft, strategic, and technical [1]. However, little has been done to flesh out the specific methods, tools, and techniques that TAS engineers should draw on. This workshop invites interactive systems experts to contribute promising design methods, tools, and techniques, particularly in the area of user/actor and design requirements modelling. The workshop aims to present innovative modelling techniques, test these approaches through discussion, examine the main challenges, refine the skills TAS engineers require, and steer the overarching strategy of this new field for the future.