The Taub Faculty of Computer Science Events and Talks

How to Improve or Attack Deep Uncertainty Estimation Performance
Ido Galil (M.Sc. Thesis Seminar)
Wednesday, 17.11.2021, 11:00
Zoom Lecture: 9505136835
Advisor: Prof. Ran El-Yaniv
Deep neural networks (DNNs) must be able to estimate the uncertainty of their predictions when deployed for risk-sensitive tasks. In the first part of this talk, we present a comprehensive study evaluating the uncertainty estimation performance of 484 deep ImageNet classification models. We identify numerous previously unknown factors that affect uncertainty estimation. We find that distillation-based training regimes consistently yield better uncertainty estimates than other training schemes, such as vanilla training, pretraining on a larger dataset, and adversarial training. Architectural differences are also significant: for example, we observe an unprecedented 99% top-1 selective accuracy at 47% coverage (and 95% top-1 selective accuracy at 80% coverage) for a ViT model, whereas a competing EfficientNet-V2-XL cannot satisfy these accuracy constraints at any level of coverage.

In the second part of the talk, we present a novel adversarial attack, to be published at NeurIPS 2021. Unlike standard adversarial attacks, the new attack does not cause incorrect predictions; instead, it cripples the network's ability to estimate uncertainty, so that after the attack the DNN is more confident in its incorrect predictions than in its correct ones, while its accuracy is not reduced. We report successful attacks on three of the most popular uncertainty estimation methods: the vanilla softmax score, Deep Ensembles, and MC-Dropout. The attack was tested on several contemporary neural architectures, such as MobileNetV2 and EfficientNetB0, all trained to classify the challenging ImageNet dataset.
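For readers unfamiliar with the selective-prediction metric quoted above: top-1 selective accuracy at coverage c is the accuracy measured after the model abstains on its least-confident 1 - c fraction of samples. The following is a minimal NumPy sketch of that computation, not code from the study; the function name and the toy data are illustrative assumptions.

```python
import numpy as np

def selective_accuracy(confidences, correct, coverage):
    """Top-1 selective accuracy at a given coverage (illustrative sketch).

    confidences: per-sample confidence scores (e.g., max softmax probability).
    correct:     boolean array, True where the top-1 prediction is correct.
    coverage:    fraction of samples the model must answer (0 < coverage <= 1).
    """
    n = len(confidences)
    k = int(np.ceil(coverage * n))           # number of samples to keep
    keep = np.argsort(-confidences)[:k]      # indices of the k most confident samples
    return correct[keep].mean()              # accuracy on the retained set only

# Toy usage with synthetic data: when confidence correlates with correctness,
# selective accuracy at low coverage far exceeds full-coverage accuracy.
rng = np.random.default_rng(0)
conf = rng.random(10_000)
corr = rng.random(10_000) < conf             # correctness correlated with confidence
print(selective_accuracy(conf, corr, 0.47))
```

A model with good uncertainty estimation concentrates its errors among its low-confidence predictions, which is exactly what lets the ViT model above reach 99% selective accuracy once the least-confident 53% of samples are rejected.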
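The flavor of such a confidence-targeting attack can be conveyed as a PGD-style procedure whose objective acts on the confidence score rather than the predicted label. The sketch below is a conceptual reconstruction, not the algorithm from the NeurIPS paper; the function confidence_attack, its signature, and all hyperparameters are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def confidence_attack(model, x, y, epsilon=8 / 255, alpha=1 / 255, steps=10):
    """PGD-style sketch of an attack on confidence rather than accuracy.

    Pushes the softmax confidence of the model's own prediction up on
    misclassified inputs and down on correctly classified ones. This is an
    illustrative reconstruction, not the published algorithm.
    """
    model.eval()
    with torch.no_grad():
        pred = model(x).argmax(dim=1)        # the predictions the attack preserves
    sign = (pred != y).float() * 2 - 1       # +1: raise confidence, -1: lower it
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        log_conf = F.log_softmax(model(x_adv), dim=1).gather(1, pred[:, None]).squeeze(1)
        grad, = torch.autograd.grad((sign * log_conf).sum(), x_adv)
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # ascend the objective
            x_adv = x + (x_adv - x).clamp(-epsilon, epsilon)  # project to L-inf ball
            x_adv = x_adv.clamp(0.0, 1.0)                     # stay a valid image
    # Note: lowering confidence can eventually flip a prediction; the published
    # attack constrains predictions to stay unchanged, omitted here for brevity.
    return x_adv.detach()
```

After such an attack, the confidence ranking is inverted: errors receive high scores and correct predictions low ones, so any selective-prediction mechanism built on these scores rejects exactly the wrong samples even though raw accuracy is untouched.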