Guanqin Zhang, Kota Fukuda, Zhenya Zhang, H.M.N. Dilum Bandara, Shiping Chen, Jianjun Zhao, Yulei Sui
Creative Commons Attribution 4.0 International license
The vulnerability of neural networks to adversarial perturbations has necessitated formal verification techniques that can rigorously certify the quality of neural networks. As the state of the art, branch and bound (BaB) is a "divide-and-conquer" strategy that applies off-the-shelf verifiers to sub-problems on which they perform better. While BaB can identify the sub-problems that need to be split, it explores the space of these sub-problems in a naive first-come-first-served manner, and therefore often reaches a verification conclusion inefficiently. To bridge this gap, we introduce an order over the sub-problems produced by BaB, based on their different likelihoods of containing counterexamples. Using this order, we propose a novel verification framework, Oliva, that explores the sub-problem space by prioritizing the sub-problems that are more likely to contain counterexamples, so as to reach a verification conclusion efficiently. Even if no counterexample exists in any sub-problem, this prioritization only changes the order in which sub-problems are visited and therefore does not degrade performance. Oliva has two variants: Oliva^GR, a greedy strategy that always prioritizes the sub-problems most likely to contain counterexamples, and Oliva^SA, a balanced strategy inspired by simulated annealing that gradually shifts from exploration to exploitation to locate globally optimal sub-problems. We experimentally evaluate Oliva on 690 verification problems spanning 5 models on the MNIST and CIFAR-10 datasets. Compared to state-of-the-art approaches, Oliva achieves speedups of up to 25× on MNIST and up to 80× on CIFAR-10.
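To make the exploration strategy concrete, the following is a minimal, hypothetical Python sketch of the order-led BaB loop described in the abstract; it is not the artifact's actual code. The scoring heuristic counterexample_score, the split and verify placeholders, and all parameter values are illustrative assumptions; Oliva's real counterexample-potentiality estimate and verifier backends are provided by the artifact itself.

import heapq
import math
import random
from dataclasses import dataclass, field


@dataclass(order=True)
class SubProblem:
    # Lower score = estimated higher likelihood of containing a counterexample.
    score: float
    constraints: list = field(compare=False, default_factory=list)


def counterexample_score(constraints) -> float:
    # Hypothetical stand-in for Oliva's counterexample-potentiality estimate.
    return random.random()


def split(problem: SubProblem) -> list:
    # Hypothetical BaB split (e.g., branching on an unstable ReLU).
    left = problem.constraints + ["relu_i <= 0"]
    right = problem.constraints + ["relu_i >= 0"]
    return [SubProblem(counterexample_score(left), left),
            SubProblem(counterexample_score(right), right)]


def verify(problem: SubProblem) -> str:
    # Hypothetical off-the-shelf verifier call on one sub-problem:
    # returns "verified", "falsified", or "unknown" (needs further splitting).
    return random.choice(["verified", "falsified", "unknown"])


def bab_greedy(root: SubProblem, budget: int = 1000) -> str:
    """Oliva^GR-style loop: always visit the most promising sub-problem next."""
    queue = [root]  # a min-heap ordered by score
    for _ in range(budget):
        if not queue:
            return "verified"           # every sub-problem was discharged
        current = heapq.heappop(queue)  # best (lowest) score first
        result = verify(current)
        if result == "falsified":
            return "falsified"          # counterexample found, stop early
        if result == "unknown":
            for child in split(current):
                heapq.heappush(queue, child)
    return "timeout"


def bab_annealing(root: SubProblem, budget: int = 1000,
                  temperature: float = 1.0, cooling: float = 0.95) -> str:
    """Oliva^SA-style loop: occasionally accept a less promising sub-problem
    early on (exploration); as the temperature cools, choices become greedy."""
    pool = [root]
    for _ in range(budget):
        if not pool:
            return "verified"
        best = min(pool, key=lambda p: p.score)
        candidate = random.choice(pool)
        if candidate is not best:
            delta = candidate.score - best.score
            # Metropolis-style acceptance of a worse candidate.
            if random.random() >= math.exp(-delta / max(temperature, 1e-9)):
                candidate = best
        pool.remove(candidate)
        result = verify(candidate)
        if result == "falsified":
            return "falsified"
        if result == "unknown":
            pool.extend(split(candidate))
        temperature *= cooling
    return "timeout"


if __name__ == "__main__":
    print("greedy:", bab_greedy(SubProblem(score=0.5, constraints=[])))
    print("annealing:", bab_annealing(SubProblem(score=0.5, constraints=[])))

In this sketch the greedy loop mirrors Oliva^GR and the annealing loop mirrors Oliva^SA's gradual shift from exploration to exploitation; both terminate early as soon as any sub-problem reports a counterexample, while a fully verified queue yields the same conclusion as plain BaB.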
@Article{zhang_et_al:DARTS.11.2.11,
author = {Zhang, Guanqin and Fukuda, Kota and Zhang, Zhenya and Bandara, H.M.N. Dilum and Chen, Shiping and Zhao, Jianjun and Sui, Yulei},
title = {{Efficient Neural Network Verification via Order Leading Exploration of Branch-and-Bound Trees (Artifact)}},
pages = {11:1--11:3},
journal = {Dagstuhl Artifacts Series},
ISSN = {2509-8195},
year = {2025},
volume = {11},
number = {2},
editor = {Zhang, Guanqin and Fukuda, Kota and Zhang, Zhenya and Bandara, H.M.N. Dilum and Chen, Shiping and Zhao, Jianjun and Sui, Yulei},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/DARTS.11.2.11},
URN = {urn:nbn:de:0030-drops-233545},
doi = {10.4230/DARTS.11.2.11},
annote = {Keywords: neural network verification, branch and bound, counterexample potentiality, simulated annealing, stochastic optimization}
}
The artifact has been evaluated as described in the ECOOP 2025 Call for Artifacts and the ACM Artifact Review and Badging Policy.