Once deployed, the adoption of machine learning (ML) depends on its ability to actually deliver the expected service safely and to meet user expectations in terms of quality and continuity of service. For instance, users expect that the technology will not do something it is not supposed to do, e.g., perform actions without informing them. Thus, the use of Artificial Intelligence (AI) in safety-critical systems, such as avionics, mobility, defense, and healthcare, requires proving its trustworthiness throughout its entire lifecycle, from design to deployment. Based on surveys of quality measures, characteristics, and sub-characteristics of AI systems, the Confiance.ai program (www.confiance.ai) aims to identify the relevant trustworthiness attributes together with their associated Key Performance Indicators (KPIs) or assessment methods for establishing the induced level of trust.
Motivation for ML trustworthiness assessment

Trustworthiness is tightly related to accountability: accountability can be considered either a factor of trust or an alternative to trust [57]. In [4], dependability is used to represent the overall quality measure of a system based on four sub-attributes: security, safety, reliability, and maintainability. Thereafter, security and dependability became key attributes of trust in computer-based systems [8]. In 2019, the U.S. National Artificial Intelligence Research and Development Strategic Plan [54] emphasized that "standard metrics are needed to define quantifiable measures in order to characterize AI technologies". More recently, [65] noted that "significant work is needed to establish what appropriate metrics should be to assess system performance across attributes for responsible AI and across profiles for particular applications/contexts". The Assessment List for Trustworthy AI [1] considers 7 pillars of trustworthiness: 1) human agency and autonomy, 2) technical robustness and safety, 3) privacy and data governance, 4) transparency, 5) diversity, non-discrimination and fairness, 6) societal and environmental well-being, and 7) accountability. The European Commission has proposed a set of rules for AI, the AI Act [19], to regulate the technology. Such proposals, which are still at the consultation stage, would apply to AI systems developed or deployed in the EU.