Pixel Club: Hinge-Minimax Learner for the Intersection of K-hyperplanes

Speaker:
Margarita Osadchy (Haifa University)
Date:
Tuesday, 1.12.2015, 11:30
Place:
EE Meyer Building 1061

In this work we consider non-linear classifiers that classify a point as positive when it resides in the intersection of $k$ hyperplanes. We learn these classifiers by minimizing the minimax risk of the negative training examples together with the sum of hinge losses of the positive training examples. These classifiers fit typical real-life datasets, which consist of a small number of positive data points and a large number of negative data points. The approach is computationally appealing because the majority of training examples (those belonging to the negative class) are represented by the statistics of their distribution, which enters the problem as a single constraint on the minimax risk; in an SVM, by contrast, the number of variables equals the size of the training set. We also provide empirical risk bounds and show that they are independent of the dimension and decay as $1/\sqrt{m}$ for $m$ samples.
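To make the decision rule and the objective concrete, below is a minimal NumPy sketch, not the authors' implementation: it assumes the decision rule $f(x) = \mathrm{sign}\left(\min_i (w_i^\top x + b_i)\right)$ (positive iff the point lies on the positive side of all $k$ hyperplanes), a hinge loss on each positive point's worst margin, and a Cantelli-type worst-case bound standing in for the minimax risk of the negatives, which enter only through their mean and covariance. The trade-off weight `C` and the exact form of the bound are assumptions of this sketch.

```python
import numpy as np

def predict(X, W, b):
    """Label a point positive iff it clears all k hyperplanes,
    i.e. min_i (w_i . x + b_i) > 0.
    X: (m, d) points, W: (k, d) normals, b: (k,) offsets."""
    margins = X @ W.T + b                         # (m, k) signed margins
    return (margins.min(axis=1) > 0).astype(int)

def hinge_minimax_objective(W, b, X_pos, mu_neg, Sigma_neg, C=1.0):
    """Sum of hinge losses on the positives plus a worst-case bound on the
    false-positive rate of the negatives, represented only by their
    mean mu_neg (d,) and covariance Sigma_neg (d, d). Assumed form."""
    # Each positive point should clear every hyperplane with margin >= 1,
    # so penalize the worst (smallest) of its k margins.
    margins = X_pos @ W.T + b                     # (m_pos, k)
    hinge = np.maximum(0.0, 1.0 - margins.min(axis=1)).sum()

    # Cantelli bound: for any distribution with mean mu and covariance
    # Sigma, Pr(w.x + b >= 0) <= s2 / (s2 + t^2), where
    # t = max(-(w.mu + b), 0) and s2 = w' Sigma w.  A negative point is
    # misclassified only if it passes *all* k hyperplanes, so the
    # tightest single-hyperplane bound already bounds the risk.
    t = np.maximum(-(W @ mu_neg + b), 0.0)        # (k,)
    s2 = np.einsum('kd,de,ke->k', W, Sigma_neg, W)
    fp_bound = (s2 / (s2 + t**2 + 1e-12)).min()   # eps for numerical safety

    return fp_bound + C * hinge
```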

We propose an efficient algorithm for training an intersection of a finite number of hyperplanes, and we demonstrate the effectiveness of our classifiers on real data, including letter and scene recognition. We show that these classifiers are significantly faster than kernel methods, since classification requires only $k$ inner products. Kernel SVMs, in contrast, produce a large number of support vectors, which renders classification impractical.
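The test-time cost gap can be illustrated with a toy comparison (all sizes and model parameters below are invented for the example, and the SVM bias term is omitted): the intersection model costs $k$ inner products per test point, i.e. $O(kd)$, while an RBF-kernel SVM must evaluate the kernel against every support vector, i.e. $O(n_{\mathrm{sv}} d)$.

```python
import numpy as np

rng = np.random.default_rng(0)
d, k, n_sv = 256, 10, 5000                  # assumed illustrative sizes
x = rng.standard_normal(d)                  # one test point
W, b = rng.standard_normal((k, d)), rng.standard_normal(k)
SV = rng.standard_normal((n_sv, d))         # stand-in support vectors
alpha = rng.standard_normal(n_sv)           # stand-in dual coefficients

# Intersection of k hyperplanes: k inner products, O(k*d) per point.
pred_intersection = (W @ x + b).min() > 0

# RBF-kernel SVM: one kernel evaluation per support vector, O(n_sv*d).
gamma = 0.1
kvals = np.exp(-gamma * ((SV - x) ** 2).sum(axis=1))
pred_svm = alpha @ kvals > 0                # bias term omitted for brevity
```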
