
Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Assaf Shocher (Weizmann Institute of Science)
Date: Tuesday, 17.03.2020, 11:30
Location: Room 1061, Meyer Building, Faculty of Electrical Engineering
Deep Learning has traditionally been divided into two phases: Training and Inference. Deep networks are mostly used with large datasets, under both supervised (Classification, Regression, etc.) and unsupervised (Autoencoders, GANs) regimes. Such networks are applicable only to the type of data they were trained on, and do not exploit the internal statistics of a single datum.

We introduce Deep Internal Learning: we train a signal-specific network at test time, on the test input only, in an unsupervised manner (no labels or ground truth). In this regime, training is part of the inference; no additional data or prior training is involved. This is possible because a single instance (be it an image, video, or audio signal) actually contains a lot of data once its internal statistics are exploited. In a series of papers from the past year, which will be reviewed throughout the talk, I will demonstrate how we applied this framework to various challenges: Super-Resolution, Segmentation, Dehazing, Transparency Separation, and Watermark Removal. I will also show how this approach can be incorporated into Generative Adversarial Networks by training a GAN on a single image for the challenge of retargeting.
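To make the test-time-training idea concrete, here is a minimal sketch of internal learning for 2x super-resolution: the test image is downscaled to create LR-HR example pairs from itself, an image-specific mapper is fit on those pairs, and the learned mapper is then applied to the original image. This is an illustrative toy using a linear least-squares "network" in NumPy, not the speaker's actual ZSSR implementation; all function names here are hypothetical.

```python
import numpy as np

def downscale2x(img):
    """Average-pool by 2 in each dimension (simple downsampling)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    return img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def extract_pairs(lr, hr, k=3):
    """Collect (k x k LR patch, 2 x 2 HR block) pairs from one image."""
    X, Y, r = [], [], k // 2
    for i in range(r, lr.shape[0] - r):
        for j in range(r, lr.shape[1] - r):
            X.append(lr[i - r:i + r + 1, j - r:j + r + 1].ravel())
            Y.append(hr[2 * i:2 * i + 2, 2 * j:2 * j + 2].ravel())
    return np.array(X), np.array(Y)

def internal_sr(img, k=3):
    """'Train' on the test image itself at test time, then upscale it 2x."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    img = img[:h, :w]                     # crop to even dimensions
    lr = downscale2x(img)                 # create an even-lower-res copy
    X, Y = extract_pairs(lr, img, k)      # internal LR -> HR examples
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)  # image-specific model
    out, r = np.zeros((2 * h, 2 * w)), k // 2
    for i in range(r, h - r):             # apply the learned mapping
        for j in range(r, w - r):         # to the original (full-res) image
            patch = img[i - r:i + r + 1, j - r:j + r + 1].ravel()
            out[2 * i:2 * i + 2, 2 * j:2 * j + 2] = (patch @ W).reshape(2, 2)
    return out
```

No external dataset or pretraining is used: every training example comes from the single test input, which is the essence of the internal-learning regime described above (the real method uses a small CNN trained with many augmented internal pairs).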

Short Bio:
Assaf Shocher is a PhD candidate at the Weizmann Institute of Science, advised by Prof. Michal Irani. He received his B.Sc. in Physics and B.Sc. in EE (with a certificate of excellence) from Ben-Gurion University. He received his M.Sc. in Math & CS (Dean's prize for outstanding students) from the Weizmann Institute. Assaf has co-founded a fintech startup and worked as a machine-learning team leader at several startups. Recently, Assaf did a Google research internship on the Viscam team led by William T. Freeman. Assaf's research focuses on Deep Learning and Computer Vision.