2019
DOI: 10.1609/aaai.v33i01.33012662

Certifying the True Error: Machine Learning in Coq with Verified Generalization Guarantees

Abstract: We present MLCERT, a novel system for doing practical mechanized proof of the generalization of learning procedures, bounding expected error in terms of training or test error. MLCERT is mechanized in that we prove generalization bounds inside the theorem prover Coq; thus the bounds are machine checked by Coq’s proof checker. MLCERT is practical in that we extract learning procedures defined in Coq to executable code; thus procedures with proved generalization bounds can be trained and deployed in real systems…
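As a rough illustration of the kind of statement being machine checked (a minimal sketch only, not MLCERT's actual development; all names below are invented for illustration), the Hoeffding-style tail bound that such generalization theorems instantiate can be written and reasoned about directly in Coq:

Require Import Reals Lra.
Open Scope R_scope.

(* Sketch: for m i.i.d. samples and accuracy eps, the probability that
   empirical error deviates from expected error by more than eps is at
   most the following quantity. *)
Definition hoeffding_bound (m : nat) (eps : R) : R :=
  2 * exp ((-2) * eps ^ 2 * INR m).

(* A small machine-checked fact about the bound: it is always positive. *)
Lemma hoeffding_bound_pos : forall m eps, 0 < hoeffding_bound m eps.
Proof.
  intros m eps; unfold hoeffding_bound.
  apply Rmult_lt_0_compat; [lra | apply exp_pos].
Qed.

MLCERT's actual theorems concern concrete learning procedures (which are then extracted to executable code), but this is the general shape of result that Coq's proof checker certifies.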

Cited by 21 publications (22 citation statements)
References 13 publications
“…The partial prior knowledge is far from a complete prior distribution, thus it is easier to obtain from DNN lifecycle activities (C4). For instance, there are studies on the generalisation error bounds, based on how the DNN was constructed, trained and verified [22,5]. We present examples on how to obtain such partial prior knowledge (G6) using evidence, e.g.…”
Section: CBI Utilising Operational Data
confidence: 99%
“…Training networks using Keras made our work significantly easier, and importing the models to our libraries was an easy task. There is already existing work importing pre-trained models to theorem provers for the purposes of verification, e.g., MLCert in Coq [4]. Our approach to importing models differs from MLCert: we translate floating-point numbers to F* reals, whereas MLCert translates them to bit-vectors.…”
Section: Lessons Learned
confidence: 99%
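The two import styles contrasted above can be pictured, very roughly, by the type a single network weight receives on the Coq side. This is a minimal sketch; the names and the 32-bit width are assumptions for illustration and are not taken from either tool.

Require Import Reals Bvector.

(* Weights as abstract reals (the F* route described above):
   convenient for proofs, but not directly computable. *)
Definition weight_as_real : Type := R.

(* Weights as fixed-width bit vectors (the route attributed to MLCert):
   executable and extraction-friendly, at the cost of defining arithmetic
   on the bit-level representation.  The width 32 is an assumed example. *)
Definition weight_as_bits : Type := Bvector 32.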
“…An alternative approach, that would resolve all three shortcomings identified in the previous section, is the ITP approach, as advocated in e.g. [1,2]. It amounts to first defining neural networks directly in Coq, and then proving properties about these definitions directly, thus avoiding programming in Python at the verification stage.…”
Section: ITP Approach To Neural Network Verification
confidence: 99%
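As a flavour of what defining neural networks directly in Coq and proving properties about those definitions can look like, here is a minimal self-contained sketch; the definitions are illustrative and are not taken from the cited developments.

Require Import Reals.
Open Scope R_scope.

(* Dot product of a weight vector and an input vector, pointwise;
   mismatched tails are ignored. *)
Fixpoint dot (ws xs : list R) : R :=
  match ws, xs with
  | w :: ws', x :: xs' => w * x + dot ws' xs'
  | _, _ => 0
  end.

(* ReLU activation. *)
Definition relu (x : R) : R := Rmax 0 x.

(* One neuron: weighted sum plus bias, then ReLU. *)
Definition neuron (ws : list R) (b : R) (xs : list R) : R :=
  relu (dot ws xs + b).

(* A property proved directly against the definition:
   a ReLU neuron never produces a negative output. *)
Lemma neuron_nonneg : forall ws b xs, 0 <= neuron ws b xs.
Proof.
  intros; unfold neuron, relu; apply Rmax_l.
Qed.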
“…The answers include proofs of properties of generalisation bounds, equality of neural networks, properties of neural network architectures. Good examples are [1,2]. Usually, these verification projects are conducted in interactive theorem provers (ITPs), such as Coq [4], as they benefit from Coq's rich higher-order language and well-developed proof libraries.…”
Section: Introduction
confidence: 99%