While AI algorithms are now pervasive in our daily lives, they essentially deliver non-critical services, i.e., services whose failures remain socially and economically acceptable. To introduce these algorithms into critical systems, new engineering practices must be defined to establish justified trust in the system's capability to deliver the intended services. In this paper, we give an overview of the approach we have put in place to reach this goal within the framework of the French Confiance.ai program. Based on the needs of the program's industrial partners, we propose a model-based analysis framework capturing the two dimensions of the problem: one related to the development and operation of the system, and one related to trust in the system.