
Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Shay Elmalem (Tel-Aviv University)
Date: Tuesday, 25.02.2020, 11:30
Location: Room 1061, Meyer Building, Faculty of Electrical Engineering
The recent and ongoing Deep Learning (DL) revolution introduces a paradigm shift in almost all disciplines of signal processing. Traditionally, Computer Vision (CV) and Image Processing (IP) methods were based on hand-crafted feature extraction from the initial optical image, followed by a hand-crafted classifier/filter designed to achieve the desired IP/CV result. The machine learning approach seeks to learn the 'classifier' stage from data (either labeled or unlabeled); i.e., the decision rule is not explicitly derived a priori, but implicitly optimized using a large set of examples. DL methods take this approach to the next level, so that the feature extraction stage is also learned. In such an approach the design is done by defining a parameterized computational model and then training it (i.e., optimizing its parameters) end-to-end, using the data. After the tremendous success of DL for IP/CV applications, almost every signal processing task is now analyzed using such tools.

In the presented work, the DL design revolution is brought one step deeper, into the optical image formation process. By considering the lens as an analog signal processor of the incoming optical wavefront (originating from the scene), the optics is modeled as an additional 'layer' in a DL model, and its parameters are optimized jointly with the 'conventional' DL layers, end-to-end. This design scheme allows the introduction of unique feature encoding in the intermediate optical image, since the lens 'has access' to information that is lost in conventional 2D imaging. Such a scheme therefore enables a holistic design of the entire IP/CV system.

The proposed design approach will be presented with several applications: an extended Depth-Of-Field (DOF) camera; a passive depth estimation solution based on a single image from a single camera; non-uniform motion deblurring; and an enhanced stereo camera with extended dynamic range and self-calibration abilities.
Experimental results will be presented and discussed.
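The core idea of treating the optics as a trainable layer can be illustrated with a toy sketch: a learnable 1D "PSF" kernel stands in for the lens, a learnable scalar gain stands in for the digital processing stage, and both are updated by gradient descent on the same reconstruction loss. Everything here (the signals, the 3-tap PSF, the gain, the loss) is an illustrative assumption for exposition, not the speaker's actual optical model or training setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D "scenes": a batch of random training signals.
X = rng.standard_normal((64, 32))

# "Optical layer": convolution with a learnable 3-tap kernel k
# (a stand-in for the lens PSF). "Digital layer": a learnable
# scalar gain w applied after the sensor.
k = np.array([0.2, 0.6, 0.2])  # initial PSF guess
w = 1.0                        # initial digital gain

def forward(X, k, w):
    # Linear convolution of each row with the PSF, truncated to
    # the same length, followed by the digital gain.
    Y = np.stack([np.convolve(x, k, mode="same") for x in X])
    return w * Y

lr = 0.05
for step in range(200):
    err = forward(X, k, w) - X  # target: reconstruct the scene
    # Analytic MSE gradients w.r.t. both stages, optimized jointly.
    grad_w = 2 * np.mean(err * forward(X, k, 1.0))
    grad_k = np.zeros_like(k)
    for i in range(len(k)):
        e = np.zeros_like(k)
        e[i] = 1.0
        grad_k[i] = 2 * np.mean(err * forward(X, e, w))
    k -= lr * grad_k
    w -= lr * grad_w

# After joint end-to-end training, the optical+digital chain
# approaches the identity: the PSF mass concentrates at the
# center tap, with w * k[1] close to 1.
```

The point of the sketch is that the gradient flows through the "optics" exactly as through any other layer, so the PSF and the digital parameters are co-designed rather than optimized in isolation; in the actual work the optical layer is a physical lens model with many more parameters, and the digital stage is a full network.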

Short bio:
Shay Elmalem is a Ph.D. candidate at the Faculty of Engineering, Tel-Aviv University, under the supervision of Dr. Raja Giryes (until recently also under the supervision of the late Prof. Emanuel Marom). His research interests include computational imaging, with applications to optical design, image processing, and computer vision.