This article presents a new machine learning (ML) development lifecycle that will constitute the core of AS6983, the new aeronautical standard on ML being developed jointly by working group WG-114/G34 of EUROCAE and SAE. The article also surveys several existing standards and guidelines related to ML in the aeronautics, automotive, and industrial domains, comparing and contrasting their scope, purpose, and results. The standards and guidelines reviewed include the European Union Aviation Safety Agency (EASA) Concept Paper, the DEEL (DEpendable and Explainable Learning) white paper "Machine Learning in Certified Systems", the Aerospace Vehicle Systems Institute (AVSI) Authorization for Expenditure (AFE) 87 report on Machine Learning, the Guidance on the Assurance of Machine Learning for use in Autonomous Systems (AMLAS), the Laboratoire National de Métrologie et d'Essais (LNE) Certification Standard of Processes for AI, the Underwriters Laboratories (UL) 4600 Safety Standard for Autonomous Vehicles, and the paper "Assuring the Machine Learning Lifecycle". These standards and guidelines are examined from the perspective of the learning assurance objectives they propose and the means of evaluation and compliance for achieving those objectives. The reference used for comparison is the list of learning assurance objectives defined within the framework of AS6983 development. From this comparative analysis, and based on a coverage criterion defined in this article, only three standards and guidelines exceed 50% coverage of the Machine Learning Development Lifecycle (MLDL) learning assurance objectives baseline. The next steps of this work are to update the AS6983 learning assurance objectives and improve the associated means of compliance so as to approach a coverage score of 100%, and to offer a certification-based process to other domains that could benefit from the AS6983 standard.
This paper presents a quantitative approach to demonstrating the robustness of neural networks for tabular data, which forms the backbone of the data structures found in most industrial applications. We analyse the effect of several techniques widely used in neural network practice, such as weight regularization, addition of noise to the data, and positivity constraints. This analysis is performed using three state-of-the-art techniques that provide mathematical proofs of robustness, expressed in terms of the Lipschitz constant, for feed-forward networks. The experiments are carried out on two prediction tasks and one classification task. Our work brings insights into building robust neural network architectures for safety-critical systems that require certification or approval from a competent authority.
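The abstract does not give the paper's exact computation, but the standard way to obtain a mathematical robustness guarantee of this kind is to bound the Lipschitz constant of a feed-forward network by the product of the spectral norms of its weight matrices (valid for 1-Lipschitz activations such as ReLU). A minimal sketch, with hypothetical layer shapes:

```python
import numpy as np

def lipschitz_upper_bound(weights):
    """Upper bound on the Lipschitz constant of a feed-forward network
    with 1-Lipschitz activations (e.g. ReLU): the product of the
    spectral norms (largest singular values) of the weight matrices.
    This is the classical bound, not necessarily the paper's tightest one."""
    bound = 1.0
    for W in weights:
        bound *= np.linalg.svd(W, compute_uv=False)[0]
    return bound

# Hypothetical two-layer network on 8 tabular features.
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 8))
W2 = rng.standard_normal((1, 16))
print(lipschitz_upper_bound([W1, W2]))
```

A small bound certifies that any input perturbation of norm ε can shift the output by at most ε times the bound, which is the kind of quantitative evidence certification authorities can assess.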
The stability of neural networks with respect to adversarial perturbations has been extensively studied. One of the main strategies consists of quantifying the Lipschitz regularity of neural networks. In this paper, we introduce a multivariate Lipschitz constant-based stability analysis of fully connected neural networks that allows us to capture the influence of each input, or group of inputs, on the network's stability. Our approach relies on a suitable re-normalization of the input space, with the objective of performing a more precise analysis than that provided by a global Lipschitz constant. We investigate the mathematical properties of the proposed multivariate Lipschitz analysis and show its usefulness in better understanding the sensitivity of the neural network with respect to groups of inputs. We display the results of this analysis through a new representation designed for machine learning practitioners and safety engineers, termed a Lipschitz star. The Lipschitz star is a graphical and practical tool for analyzing the sensitivity of a neural network model during its development with respect to different combinations of inputs. By leveraging this tool, we show that it is possible to build robust-by-design models using spectral normalization techniques to control the stability of a neural network, given a safety Lipschitz target. Thanks to our multivariate Lipschitz analysis, we can also measure the efficiency of adversarial training in inference tasks. We perform experiments on various open-access tabular datasets, as well as on a real Thales Air Mobility industrial application subject to certification requirements.
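The spectral normalization step mentioned above can be sketched as follows. The abstract does not specify how the safety Lipschitz target is allocated across layers; the sketch below assumes, purely for illustration, that each of the L layers receives an equal share target^(1/L), so the product of per-layer spectral norms equals the target:

```python
import numpy as np

def spectrally_normalize(weights, target):
    """Rescale each weight matrix so that the product of layer spectral
    norms equals `target`, giving a network whose Lipschitz upper bound
    (with 1-Lipschitz activations) meets the safety target by design.
    Equal per-layer allocation is an assumption; the paper's exact
    scheme may differ."""
    per_layer = target ** (1.0 / len(weights))
    normalized = []
    for W in weights:
        sigma = np.linalg.svd(W, compute_uv=False)[0]  # spectral norm
        normalized.append(W * (per_layer / sigma))
    return normalized

# Hypothetical usage: constrain a two-layer network to a Lipschitz target of 2.0.
rng = np.random.default_rng(1)
layers = [rng.standard_normal((16, 8)), rng.standard_normal((1, 16))]
constrained = spectrally_normalize(layers, target=2.0)
```

In practice such a rescaling is applied during training (e.g. after each optimizer step) so the constraint holds for the final model, rather than once post hoc.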