Events

Events and Lectures at the Henry and Marilyn Taub Faculty of Computer Science

Speaker: Yaniv Romano (Electrical Engineering)
Date: Wednesday, 30.08.2017, 11:30
Location: Room 337, Taub Building for Computer Science
Modeling data has led to a revolution in the fields of signal and image processing, and machine learning. Consider the simplest restoration problem - removal of noise from an image. The recent advent of highly effective models for images (e.g. the Sparse-Land model) has led researchers to believe that existing denoisers are approaching the ceiling of achievable restoration performance. Leveraging this impressive achievement, we propose a framework that translates complicated image-processing tasks into a chain of simple denoising steps, leading to cutting-edge performance.
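
The abstract does not spell out the construction, but one common way to realize such a chain of denoising steps is a gradient-style iteration in which an off-the-shelf denoiser acts as the prior. The sketch below is only an illustration of that general idea, not the framework of the talk: the Gaussian filter stands in for a real denoiser, the blur stands in for the degradation operator, and all names and parameters are hypothetical.

import numpy as np
from scipy.ndimage import gaussian_filter

def denoise(x):
    # Placeholder denoiser f(x); any off-the-shelf image denoiser could be plugged in.
    return gaussian_filter(x, sigma=1.0)

def restore(y, H, Ht, sigma2=1.0, lam=0.1, mu=0.5, n_iter=50):
    # Restore x from y = H(x) + noise by repeatedly invoking the denoiser.
    x = y.copy()
    for _ in range(n_iter):
        data_grad = Ht(H(x) - y) / sigma2      # gradient of the data-fidelity term
        prior_grad = lam * (x - denoise(x))    # denoiser-induced regularization term
        x = x - mu * (data_grad + prior_grad)  # one denoising-driven update
    return x

# Toy usage: a symmetric Gaussian blur, so the operator is its own adjoint.
blur = lambda x: gaussian_filter(x, sigma=2.0)
y = blur(np.random.rand(64, 64)) + 0.01 * np.random.randn(64, 64)
x_hat = restore(y, H=blur, Ht=blur)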

We then proceed to focus on the Sparse-Land model, study its limitations, and suggest different ways to overcome them. This model assumes that a signal can be represented as a linear combination of a few columns, called atoms, taken from a matrix termed a dictionary. The learning problem, which aims to adapt the dictionary to a collection of samples, becomes computationally infeasible for high-dimensional signals. Traditionally, this problem was circumvented by learning a local model on small overlapping patches extracted from the image and processing (e.g. denoising) these patches independently. While efficient, this approach is suboptimal, since overlapping patches are globally connected to each other. We propose several approaches to tackle this limitation, harnessing ideas from game theory, boosting, and graph theory. The suggested algorithms provide a systematic and generic way to improve the performance of existing methods, resulting in state-of-the-art performance.
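
As an illustration of the Sparse-Land viewpoint on a single patch, the sketch below codes a vectorized patch y as D @ alpha with only a few active atoms, using a plain orthogonal-matching-pursuit loop. The dictionary, sparsity level, and sizes are invented for the example; this is not one of the algorithms proposed in the talk.

import numpy as np

def omp(D, y, k):
    # Greedily recover a k-sparse code alpha such that D @ alpha ~= y.
    residual = y.copy()
    support = []
    alpha = np.zeros(D.shape[1])
    for _ in range(k):
        # Pick the atom most correlated with the current residual.
        j = int(np.argmax(np.abs(D.T @ residual)))
        support.append(j)
        # Least-squares fit of y on the atoms chosen so far.
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    alpha[support] = coef
    return alpha

# Toy usage on an 8x8 patch (64-dimensional) with a random, normalized dictionary.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
y = D[:, [3, 70, 141]] @ np.array([1.5, -0.7, 0.9])  # a 3-sparse synthetic patch
alpha_hat = omp(D, y, k=3)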

A different approach to treating high-dimensional signals is convolutional sparse coding (CSC). This global model assumes that a signal can be represented as a superposition of a few local atoms (small filters) shifted to different positions. A recent work suggested a novel theoretical analysis of this global model, based on the observation that, while global, the CSC can be characterized and analyzed locally. Armed with this observation, we extend the classic theory of sparse representations to a multi-layered, or hierarchical, convolutional sparse model. The proposed ML-CSC model is shown to be tightly connected to deep learning - a sub-field of machine learning offering a highly effective tool for supervised classification and regression. In particular, we reveal that the core algorithm of Convolutional Neural Networks (CNN), the forward pass, is a pursuit algorithm aiming to decompose signals that belong to the ML-CSC model into their building atoms. With this view, we are able to analyze this architecture theoretically and provide success guarantees and stability guarantees for the estimated representations throughout the layers. Furthermore, identifying weaknesses in the above scheme, we propose a theoretically superior alternative to the forward-pass algorithm.
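
To make the stated connection concrete, the toy sketch below implements a layered non-negative soft-thresholding pursuit: at each layer, a linear map followed by ReLU-style thresholding estimates the next sparse representation, which is exactly the shape of a CNN forward pass. The dense random matrices merely stand in for the convolutional dictionaries of the model, and the thresholds are arbitrary; this is an illustration of the observation, not the analysis or the improved pursuit proposed in the talk.

import numpy as np

def soft_nonneg_threshold(z, b):
    # ReLU(z - b): non-negative soft thresholding of z with threshold b.
    return np.maximum(z - b, 0.0)

def forward_pass(x, dictionaries, thresholds):
    # Layered thresholding pursuit: estimate the deepest representation gamma
    # of the (assumed) model x ~= D1 @ gamma1, gamma1 ~= D2 @ gamma2, ...
    gamma = x
    for D, b in zip(dictionaries, thresholds):
        gamma = soft_nonneg_threshold(D.T @ gamma, b)  # linear map + threshold
    return gamma

# Toy usage with two random layers standing in for convolutional dictionaries.
rng = np.random.default_rng(1)
D1, D2 = rng.standard_normal((100, 200)), rng.standard_normal((200, 300))
x = rng.standard_normal(100)
gamma2_hat = forward_pass(x, [D1, D2], thresholds=[0.5, 0.5])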