Monday, 16.12.2019, 12:30
Efficient learning requires prior knowledge (inductive bias). The algorithm designer can manually insert a prior based on his intuition, but ideally, we would like to automatically infer the most beneficial prior. In meta-learning, an agent extracts a 'learned prior' from several observed learning tasks, which, in turn, can be used to facilitate the learning of new related tasks. The prior should capture the common structure across learned tasks while allowing sufficient flexibility to adapt to novel aspects of new tasks.
We present a framework for meta-learning that is based on generalization error bounds, allowing us to extend various PAC-Bayes bounds to meta-learning. We develop a gradient-based algorithm that minimizes an objective function derived from the bounds and demonstrate its effectiveness numerically with neural networks.
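To make the idea concrete, here is a minimal sketch (not the talk's actual algorithm) of the objective structure a PAC-Bayes meta-learning bound typically yields: each task's empirical loss plus a KL-style complexity term tying the task posterior to a shared learned prior, with the prior itself optimized by gradient descent. The toy tasks, Gaussian posterior/prior with fixed variance (so the KL reduces to a squared distance between means), and the weight `lam` are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tasks: each task estimates a scalar mean; the tasks share structure
# (true means drawn around 2.0). All names and values are illustrative.
n_tasks, m = 5, 20
true_means = 2.0 + 0.3 * rng.standard_normal(n_tasks)
data = [tm + rng.standard_normal(m) for tm in true_means]

# Posterior/prior: Gaussians with fixed variance, so the KL complexity
# term reduces (up to constants) to a squared distance between means.
mu_post = np.zeros(n_tasks)   # per-task posterior means
mu_prior = 0.0                # shared learned prior mean
lam = 0.5                     # weight of the complexity (KL) term
lr = 0.05

for step in range(500):
    # d/dmu_i [ mean_j (mu_i - x_ij)^2 + lam * (mu_i - mu_prior)^2 ]
    grad_post = np.array([
        2 * (mu_post[i] - data[i].mean()) + 2 * lam * (mu_post[i] - mu_prior)
        for i in range(n_tasks)
    ])
    # d/dmu_prior of the averaged complexity terms: pulls the prior
    # toward the task posteriors.
    grad_prior = -2 * lam * np.mean(mu_post - mu_prior)
    mu_post -= lr * grad_post
    mu_prior -= lr * grad_prior

print(f"learned prior mean: {mu_prior:.2f}")
```

At the fixed point the prior lands at the average of the task solutions, which is exactly the "capture the common structure" behavior the abstract describes; the `lam`-weighted term is what keeps each task posterior from overfitting its small sample.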
Ron Amit is a Ph.D. student at the Technion Electrical Engineering faculty, advised by Prof. Ron Meir. His research goal is to better understand the role of prior knowledge and generalization in machine learning.
In his research he works on the theory and practice of transfer learning, meta-learning, and reinforcement learning.
In the past, Ron worked on signal processing and computer vision.