Certified Training: Small Boxes are All You Need
2022 · Preprint · DOI: 10.48550/arxiv.2210.04871

Cited by 4 publications (3 citation statements) · References 0 publications
“…Orthogonally, the training of certifiably robust networks remains an open problem. Despite significant progress over recent years [6,19,35,37,43,45,65], networks trained specifically to exhibit provable robustness guarantees still suffer from severely degraded standard accuracy. Therefore, most benchmarks considered in the VNN-COMP are based on networks trained without consideration for later certification.…”
Section: Remaining Challenges
Citation type: mentioning · confidence: 99%
“…Accurate bounds can significantly reduce the complexity and computational effort required during the certification process, facilitating more efficient and dependable evaluations of the network's behavior in diverse and challenging scenarios. Moreover, computing such bounds has opened the door for a new set of "certified training" algorithms (Zhang et al 2022;Lyu et al 2021;Müller et al 2022b) where these bounds are used as a regularizer that penalizes the worst-case violation of robustness or fairness, which leads to training NNs with favorable properties. While computing such lower/upper bounds is crucial, current techniques in computing lower/upper bounds on the NN outputs are either computationally efficient but result in loose lower/upper bounds or compute tight bounds but are computationally expensive.…”
Section: Introduction
Citation type: mentioning · confidence: 99%
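The "certified training" regularizer described in the statement above can be made concrete with interval bound propagation (IBP), the bounding technique underlying the cited training methods. The following is a minimal, self-contained sketch, not the algorithm of any cited paper: it pushes an input box [x−ε, x+ε] through a toy two-layer network and penalizes a negative worst-case class margin. All names (`ibp_linear`, `worst_case_margin`, the toy weights) are illustrative assumptions.

```python
import numpy as np

def ibp_linear(lo, hi, W, b):
    """Propagate the interval [lo, hi] through x -> W @ x + b."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def ibp_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return np.maximum(lo, 0.0), np.maximum(hi, 0.0)

def worst_case_margin(lo, hi, label):
    """Lower bound on (true logit - strongest rival logit) over the box."""
    rival = np.max(np.delete(hi, label))
    return lo[label] - rival

# Toy 2-layer network on a 3-d input with L-inf radius eps.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(4)
W2, b2 = rng.normal(size=(2, 4)), np.zeros(2)

x, eps, label = np.array([0.2, -0.1, 0.5]), 0.05, 0
lo, hi = x - eps, x + eps
lo, hi = ibp_relu(*ibp_linear(lo, hi, W1, b1))
lo, hi = ibp_linear(lo, hi, W2, b2)

# Certified-training regularizer: penalize a negative worst-case margin.
margin = worst_case_margin(lo, hi, label)
loss_reg = max(0.0, -margin)
print(f"worst-case margin: {margin:.4f}, regularizer: {loss_reg:.4f}")
```

Because the bounds are computed with differentiable min/max operations, the same computation can serve as a training-time penalty, which is the "regularizer" role the statement describes.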
“…Algorithm timeout was set to 20s per instance. […] QNNs and might enhance QA-IBP-based training procedures as well. Moreover, further improvements may be feasible by adapting recent advances in IBP-based training methods for non-quantized neural networks (Müller et al 2022) to our quantized IBP variant.…”
Citation type: mentioning · confidence: 99%
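The IBP-based training advance referenced here (Müller et al 2022) is the paper this record describes, whose title refers to propagating small boxes. A commonly described form of this idea places a box of radius τ·ε inside the full ε-ball, near a point found by an attack. The sketch below is a hedged illustration of only that box-selection step; the `tau` value, the stand-in attack direction, and the clipping scheme are hypothetical assumptions, not the paper's exact procedure. The resulting box would then be propagated with IBP as in the sketch above.

```python
import numpy as np

def small_box(x, x_adv, eps, tau):
    """Pick a box of radius tau*eps that lies inside the full eps-ball,
    centered as close to the (attack-found) point x_adv as possible."""
    r = tau * eps
    center = np.clip(x_adv, x - eps + r, x + eps - r)
    return center - r, center + r

x = np.array([0.2, -0.1, 0.5])
eps, tau = 0.05, 0.4
x_adv = x + eps * np.array([1.0, -1.0, 1.0])  # stand-in for a real attack
lo, hi = small_box(x, x_adv, eps, tau)        # feed into IBP as above
```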