Technical Report PHD-2021-05

TR#: PHD-2021-05
Class: PHD
Title: Designing Deep Neural Networks for Efficient and Robust Inference
Authors: Chaim Baskin
Supervisors: Alex Bronstein and Avi Mendelson
PDF: Currently accessible only within the Technion network
Abstract: Deep neural networks (DNNs) have become a common tool for solving complex tasks in various fields such as computer vision, natural language processing, and recommendation systems. Despite the recent progress made in enhancing DNN performance, two major obstacles hinder their practical applicability in some applications: (i) their energy-expensive deployment on embedded platforms, and (ii) their vulnerability to malicious adversarial perturbations. In this thesis, we present works addressing different aspects of both of these problems. Chapters 2 and 3 present training-aware and post-training quantization approaches that represent the DNN parameters and feature maps in fixed low-bit representations. Chapter 4 introduces a neural architecture search method that finds optimal quantization bitwidths of neural network parameters under a given complexity constraint. Chapters 5 and 6 present two entropy coding-based methods for reducing inference-time memory bandwidth requirements. The first method does not require any fine-tuning, while the second does and, in exchange, provides a significant further bandwidth reduction with negligible additional complexity or accuracy loss. Chapter 7 presents a simple framework that helps to design efficient hardware for quantized neural networks. In addition, Chapter 8 shows how quantization techniques can inspire new approaches to better cope with adversarial attacks, and demonstrates how an adversarially pre-trained classifier can boost adversarial robustness by smoothing between different levels of input noise. Finally, Chapter 9 introduces a simple single-node, minimal-attribute-change perturbation that attacks social graph-based DNNs in a significantly more harmful way than the previously studied edge-based attacks.
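For intuition on the fixed low-bit representations mentioned above, here is a minimal sketch of symmetric uniform quantization of a weight tensor. This is an illustrative toy example only, not the specific schemes developed in the thesis; the function names and bitwidth choice are assumptions for illustration.

```python
import numpy as np

def quantize_uniform(x, num_bits=4):
    """Symmetric uniform quantization to signed num_bits integers.

    Illustrative sketch only; the thesis's actual quantization
    schemes (training-aware and post-training) differ in detail.
    """
    qmax = 2 ** (num_bits - 1) - 1  # e.g. 7 for 4-bit signed values
    max_abs = np.max(np.abs(x))
    scale = max_abs / qmax if max_abs > 0 else 1.0
    # Round to the nearest quantization level and clip to range
    q = np.clip(np.round(x / scale), -qmax, qmax).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Map quantized integers back to approximate float values."""
    return q.astype(np.float32) * scale

# Example: quantize random weights and inspect reconstruction error
rng = np.random.default_rng(0)
w = rng.standard_normal(256).astype(np.float32)
q, s = quantize_uniform(w, num_bits=4)
w_hat = dequantize(q, s)
mse = float(np.mean((w - w_hat) ** 2))
```

Storing `q` instead of `w` reduces storage and memory bandwidth by roughly the ratio of the float bitwidth to `num_bits`, at the cost of the small reconstruction error `mse`.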

Copyright: The above paper is copyrighted by the Technion, the author(s), or others. Please contact the author(s) for more information.

Remark: Any link to this technical report should be to this page (http://www.cs.technion.ac.il/users/wwwb/cgi-bin/tr-info.cgi/2021/PHD/PHD-2021-05), rather than to the URL of the PDF files directly. The latter URLs may change without notice.


Computer Science Department, Technion