Yonatan Geifman, PhD seminar lecture
Sunday, 2.6.2019, 15:30
Deep neural networks (DNNs) have recently shown great success in many machine learning domains and problems. However, applying these models to mission-critical tasks still raises many safety issues due to prediction uncertainty and prediction errors. This talk covers several related results concerning uncertainty estimation, selective classification (also known as classification with a reject option), and active learning for deep neural networks.
We first discuss a selective classification method based on thresholding an uncertainty estimate. For this setting we show how to obtain a tight generalization bound that guarantees the maximal risk over the covered (non-rejected) part of the domain. This method allows the user to calibrate a selective classifier to a desired risk level; rejection is then applied so as to satisfy the risk guarantee.
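As a rough illustration of the thresholding idea (a sketch only, not the exact calibration procedure with its generalization bound), one can pick, on held-out data, the largest coverage whose empirical selective risk stays below the user's target; the function name and arguments here are assumptions for the example:

```python
import numpy as np

def calibrate_threshold(confidences, correct, target_risk):
    """Pick a confidence threshold whose empirical selective risk
    (error rate over the covered samples) is <= target_risk,
    maximizing coverage. Hypothetical helper for illustration."""
    order = np.argsort(-confidences)          # most confident first
    errors = (~correct[order]).cumsum()       # errors among covered samples
    n_covered = np.arange(1, len(order) + 1)
    risks = errors / n_covered                # selective risk at each coverage
    ok = np.where(risks <= target_risk)[0]
    if len(ok) == 0:
        return None                           # no threshold meets the target
    k = ok.max()                              # largest coverage meeting the target
    return confidences[order][k]

# Example: accept everything down to confidence 0.6 if a 25% risk is tolerable.
conf = np.array([0.9, 0.8, 0.7, 0.6])
corr = np.array([True, True, False, True])
print(calibrate_threshold(conf, corr, 0.25))
```

In practice the talk's method replaces the raw empirical risk with a high-probability upper bound on it, so the returned threshold carries a guarantee rather than just an empirical estimate.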
In the second part we cover SelectiveNet, a DNN architecture that jointly learns the classifier and the rejection function. We present a novel architecture and a constrained optimization objective that optimize the classifier and the reject function subject to a user-specified coverage constraint. SelectiveNet achieves a superior risk-coverage tradeoff compared to existing methods on several image classification datasets.
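The constrained objective can be sketched as a penalized loss: the empirical selective risk plus a quadratic penalty that fires when the batch coverage drops below the user's target (the parameter names and the penalty weight here are assumptions for the example, not the authors' code):

```python
import numpy as np

def selectivenet_objective(per_sample_loss, selection, target_coverage, lam=32.0):
    """Penalized selective loss in the spirit of SelectiveNet's objective:
    selective risk over the selected samples, plus a quadratic penalty
    on coverage shortfall. Illustrative sketch with assumed names."""
    coverage = selection.mean()                         # fraction of samples kept
    selective_risk = (selection * per_sample_loss).sum() / max(selection.sum(), 1e-12)
    penalty = lam * max(0.0, target_coverage - coverage) ** 2
    return selective_risk + penalty

# Example: a batch where the selector keeps exactly the correct half.
losses = np.array([1.0, 0.0, 0.0, 1.0])
select = np.array([0.0, 1.0, 1.0, 0.0])
print(selectivenet_objective(losses, select, target_coverage=0.5))
```

During training the selection head outputs soft values in [0, 1], so this objective stays differentiable and the classifier and selector can be optimized jointly by standard gradient descent.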
The talk summarizes my PhD dissertation, whose results were presented at ICML, NeurIPS, and ICLR.