Alon Gonen (Princeton University)
יום שלישי, 11.6.2019, 15:30
The last decade has seen tremendous empirical success of machine learning algorithms, which often cannot be explained theoretically using worst-case analysis. It is therefore a great challenge to identify reasonable assumptions under which the theory is aligned with practice. In this talk I will advocate learning in an oracle model, which provides an elegant and extremely useful methodology for this purpose. Namely, instead of explicitly characterizing some desired "niceness" properties of the data, we assume access to an oracle for relatively simpler tasks. As we shall see, seemingly innocent (and definitely reasonable) modifications to the oracle can lead to tractable solutions to fundamental learning tasks. Furthermore, the oracle model often simplifies the study of new fields in machine learning by forming connections to well-established fields.
We demonstrate these ideas using three results: (i) an efficient algorithm for non-convex online learning using an optimization oracle; (ii) a faster boosting algorithm using a "simple" weak learner; (iii) an efficient reduction from online to private learning.
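To illustrate the oracle-model idea behind the first result, here is a minimal toy sketch (not the talk's actual algorithm): a Follow-the-Perturbed-Leader-style online learner over a small finite decision set that accesses the past losses only through an offline optimization oracle. All names, the decision set, and the perturbation scheme are illustrative assumptions.

```python
import random

def optimization_oracle(loss_fns, perturbation, domain):
    """Offline oracle (assumed primitive): return the point in a finite
    decision set minimizing cumulative past loss plus a linear
    perturbation term."""
    return min(domain, key=lambda x: sum(f(x) for f in loss_fns) + perturbation * x)

def follow_the_perturbed_leader(losses, domain, eta=1.0, seed=0):
    """Toy online learner: at each round, play the oracle's answer on the
    perturbed past losses, then observe the new loss. Returns total loss."""
    rng = random.Random(seed)
    total, past = 0.0, []
    for loss in losses:
        sigma = rng.uniform(-1.0 / eta, 1.0 / eta)  # random linear perturbation
        x = optimization_oracle(past, sigma, domain)  # only access to the data
        total += loss(x)
        past.append(loss)
    return total
```

The point of the sketch is the interface: the learner never inspects the loss functions directly, so any "niceness" needed for efficiency is delegated to the oracle, which is exactly the methodological shift the abstract describes.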
Joint works with Naman Agarwal, Noga Alon, Elad Hazan, and Shay Moran.