Ron Rubinstein (CS, Technion)
Tuesday, 5.10.2010, 11:30
Signal models are used for a wide array of signal and image processing tasks – from deconvolution, denoising, and interpolation to source separation, super-resolution, and compression. One of the most common modeling approaches utilizes a dictionary of atomic signals, describing the set of elementary behaviors observed in the signals of interest. The dictionary is used either as an analysis operator, measuring the inner products between the atoms and the input signal, or as a synthesis operator, reproducing the input signal as a linear combination of the atoms. In both cases, the driving force of the model is sparsity, which requires that the representation coefficients be mostly close or equal to zero.

In this talk we begin by presenting the analysis-versus-synthesis question, which has been gaining interest in recent years. We provide some surprising theoretical results showing that, in contrast to common belief, the two approaches may differ substantially, and discuss the implications of these results.

In practice, the success of these models depends on the choice of the sparsifying dictionary. Most dictionaries emerge from one of two sources: either a mathematical approximation of the signal data, which leads to structured and efficient dictionaries, or example-based training, which produces adaptive and highly tuned dictionaries. In the main part of this talk, we present the sparse dictionary structure, which aims to narrow the gap between these two options. Focusing on the synthesis case, we describe the new structure and its advantages, and discuss the Sparse K-SVD algorithm, which learns it from examples. We discuss the application of the new structure to volume denoising and image compression, where we demonstrate, for the first time, the feasibility of a fully data-adaptive scheme for general-purpose compression. Several additional applications are briefly described.
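To make the analysis/synthesis distinction concrete, here is a minimal NumPy sketch (illustrative dimensions and atom indices, not from the talk): the synthesis model builds a signal as a sparse linear combination of dictionary atoms, while the analysis model looks at the vector of inner products between the atoms and the signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Overcomplete dictionary for n-dimensional signals with m > n atoms
# (sizes are purely illustrative).
n, m = 16, 32
D = rng.standard_normal((n, m))
D /= np.linalg.norm(D, axis=0)  # normalize each atom to unit length

# Synthesis model: the signal is a linear combination of a few atoms,
# i.e. x = D @ a with a sparse coefficient vector a.
a = np.zeros(m)
a[[3, 17, 25]] = rng.standard_normal(3)  # only 3 nonzero coefficients
x = D @ a

# Analysis model: the representation is the vector of inner products
# D^T x; sparsity here means many of these inner products are near zero.
c = D.T @ x

print("synthesis sparsity:", np.count_nonzero(a))   # 3 nonzeros
print("analysis representation length:", c.shape[0])  # m = 32
```

Note that the two representations live in the same m-dimensional space but play different roles: `a` is chosen to be sparse, whereas `c` is computed directly from the signal, and the talk's point is that the two modeling routes can behave quite differently.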
This lecture summarizes portions of the PhD research of Ron Rubinstein, conducted under the supervision of Prof. Michael Elad.