Tuesday, 18.12.2007, 11:30
The digital photography revolution has transformed the way we take and share pictures.
However, it has mostly relied on a rigid imaging model inherited from traditional photography.
Computational photography goes one step further and exploits digital technology to enable arbitrary computation between the incoming light rays and the final image or video.
Such computation can overcome limitations of the imaging hardware and enable new applications.
It can also enable new imaging setups and post-processing tools that empower users to enhance and interact with their images and videos.
This talk will describe new imaging architectures as well as software techniques that leverage computation to extract information and enhance images. In particular, I will discuss post-exposure manipulations such as reflection removal, colorization, and matting. I will also describe the coded aperture camera, a simple modification of a lens, together with new inference techniques that enable the capture of both depth and a full-resolution image from a single picture.
I will argue that the core of computational photography research lies in the fact that images are more than arbitrary arrays of numbers, and that the success of such algorithms depends on the ability to model the strong low-level statistics of natural images.
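One well-known instance of these low-level statistics is that natural images are largely piecewise smooth, so their gradient histograms are heavy-tailed: most gradients are near zero, with a few large values at edges. The sketch below illustrates this with a synthetic piecewise-constant image standing in for a real photograph (the image content and the kurtosis measure are my assumptions for illustration, not part of the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for a natural image: flat regions separated by
# sharp edges, plus mild sensor-like noise.
img = np.zeros((128, 128))
img[:, 64:] = 1.0                      # a vertical edge
img[32:96, 16:48] = 0.5                # a rectangular patch
img += 0.01 * rng.standard_normal(img.shape)

# Horizontal gradients (finite differences along each row).
grad = np.diff(img, axis=1).ravel()

def excess_kurtosis(x):
    """Roughly 0 for Gaussian data; large for heavy-tailed data."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

k_img = excess_kurtosis(grad)
k_gauss = excess_kurtosis(rng.standard_normal(grad.size))

print(f"gradient kurtosis: {k_img:.1f}  (Gaussian reference: {k_gauss:.1f})")
```

The gradient distribution of the piecewise-smooth image has far higher kurtosis than a Gaussian of the same size, which is exactly the kind of prior that algorithms such as matting or depth-from-defocus exploit.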
Parts of this research were done in collaboration with Fredo Durand, Rob Fergus, Bill Freeman, Dani Lischinski, Alex Rav Acha, Yair Weiss and Assaf Zomet.