Shai Avidan (Tel Aviv University)
This talk is more or less about depth. In the "more" part of the talk I will describe a method to recover the depth and motion of a dynamic event captured by two snapshots of a camera array. The key idea is to represent the scene as a synthetic aperture volume, which reduces the problem to volume registration, a problem already solved in the medical imaging community. Camera arrays capture huge amounts of data, and in the "less" part of the talk I will ask: how many images do we need to recover depth? It turns out that the answer is much less than two.
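
To make the synthetic aperture idea concrete, here is a minimal, hypothetical sketch (not the speaker's actual method) of shift-and-add refocusing: for each candidate disparity, every view is shifted according to its camera offset and the views are averaged, so scene points at the matching depth come into focus in that slice of the volume. The function name, integer shifts, and toy data are all illustrative assumptions.

```python
import numpy as np

def synthetic_aperture_volume(images, offsets, disparities):
    """Build a synthetic aperture (focal stack) volume by shift-and-add.

    images      : list of equally sized 2D arrays, one per camera.
    offsets     : list of (ox, oy) camera positions relative to the reference.
    disparities : candidate disparities; one volume slice per value.
    """
    h, w = images[0].shape
    vol = np.zeros((len(disparities), h, w))
    for k, d in enumerate(disparities):
        acc = np.zeros((h, w))
        for img, (ox, oy) in zip(images, offsets):
            # Integer circular shift for simplicity; a real implementation
            # would use subpixel interpolation and handle borders.
            acc += np.roll(np.roll(img, int(round(oy * d)), axis=0),
                           int(round(ox * d)), axis=1)
        vol[k] = acc / len(images)
    return vol

# Toy example: a textured plane at disparity 3, seen from a 4-camera array.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
d_true = 3
views = [np.roll(np.roll(base, -oy * d_true, axis=0), -ox * d_true, axis=1)
         for ox, oy in offsets]

vol = synthetic_aperture_volume(views, offsets, list(range(6)))
# The slice focused at the true disparity is sharpest (highest gradient energy).
sharpness = [np.var(np.diff(s, axis=1)) for s in vol]
print(int(np.argmax(sharpness)))
```

Each snapshot of the array yields one such volume; recovering depth and motion between two snapshots then amounts to registering the two volumes, which is where the medical imaging machinery comes in.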