
The Taub Faculty of Computer Science Events and Talks

Neural Network Robustness Verification of Few-Pixel Attacks
Yuval Shapira (M.Sc. Thesis Seminar)
Wednesday, 03.12.2025, 14:30
Taub 301 & Zoom
Advisor: Dr. Dana Drachsler Cohen

Despite their success, neural networks have been shown to be vulnerable to adversarial attacks. In l_0 adversarial attacks, also known as few-pixel attacks, the attacker picks t pixels of the image and perturbs them arbitrarily. While many verifiers prove robustness against l_p attacks for a positive integer p, very little work addresses robustness verification against l_0 attacks. This verification poses a combinatorial challenge, because the space of pixel sets to perturb is discrete and exponentially large. In this talk, we present a series of papers tackling this challenging problem. We first show that l_∞ verifiers can be used for l_0 verification, and that, by relying on covering designs, we can significantly reduce the number of l_∞ tasks submitted to the underlying l_∞ verifier.
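To make the combinatorial blow-up and the covering idea concrete, here is an illustrative Python sketch (not the actual Calzone implementation; the pixel counts and the greedy cover construction are toy assumptions). A naive l_0 check enumerates every t-subset of pixels, whereas a covering by larger k-subsets lets a single l_∞-style task, in which a whole block of k pixels may vary, account for all t-subsets inside that block at once.

```python
# Toy illustration of why covering designs shrink the number of l_infty tasks.
# Naive l_0 verification: one task per t-subset of pixels, i.e. C(n, t) tasks.
# Covering: pick k-subsets (blocks) so every t-subset lies in some block; one
# l_infty-style task per block then suffices. The greedy cover below is only
# for illustration; it is not the covering construction used in the papers.
from itertools import combinations
from math import comb

def greedy_covering(n, k, t):
    """Greedily pick k-subsets of {0..n-1} until every t-subset is covered."""
    uncovered = set(combinations(range(n), t))
    blocks = []
    while uncovered:
        # pick the k-subset that covers the most still-uncovered t-subsets
        best = max(combinations(range(n), k),
                   key=lambda b: sum(1 for s in combinations(b, t) if s in uncovered))
        blocks.append(best)
        uncovered -= set(combinations(best, t))
    return blocks

n, t, k = 10, 2, 4            # toy sizes; real images have thousands of pixels
naive_tasks = comb(n, t)      # 45 l_infty tasks if every pixel pair is checked
blocks = greedy_covering(n, k, t)
print(naive_tasks, len(blocks))   # the covering needs noticeably fewer blocks
```

For realistic image sizes the gap between C(n, t) and the size of a good covering is far larger, which is what makes the reduction significant in practice.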

This idea is implemented in Calzone, the first sound and complete l_0 verifier. We then present CoVerD, which improves upon Calzone by tailoring effective but analysis-incompatible coverings to l_0 robustness verification. Lastly, we characterize the convex hull of the non-convex l_0 perturbation space. Equipped with this geometric perspective, we examine different approaches to linear bound propagation for l_0 verification, which lets us improve the precision of CoVerD's underlying verifier in our setting and boost its performance.
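As a point of reference for the bound-propagation part, the sketch below shows the simplest form of bound propagation (interval arithmetic) through one affine layer and a ReLU, with the block of perturbable pixels relaxed to the full [0, 1] range. It is only a generic sketch under those assumptions; the convex-hull characterization and the linear relaxations studied in the talk are tighter than this.

```python
# Minimal interval bound propagation through y = ReLU(W @ x + b).
# Pixels the attacker may perturb get the full [0, 1] range; others stay fixed.
# This is a generic illustration, not the specific relaxation used by CoVerD.
import numpy as np

def affine_bounds(lo, hi, W, b):
    """Elementwise bounds on W @ x + b given lo <= x <= hi."""
    W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
    return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

def relu_bounds(lo, hi):
    return np.maximum(lo, 0), np.maximum(hi, 0)

x = np.array([0.2, 0.7, 0.5, 0.9])        # toy "image" with 4 pixels
perturbed = [1, 3]                        # one block of pixels allowed to change
lo, hi = x.copy(), x.copy()
lo[perturbed], hi[perturbed] = 0.0, 1.0   # arbitrary perturbation within [0, 1]

W = np.array([[1.0, -2.0, 0.5, 1.0],
              [0.3,  1.5, -1.0, 0.2]])
b = np.array([0.1, -0.2])
lo, hi = relu_bounds(*affine_bounds(lo, hi, W, b))
print(lo, hi)   # output bounds; comparing logit bounds decides this block's robustness
```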