Tuesday, 17.2.2009, 11:30
Room 1061, Meyer Building, Faculty of Electrical Engineering
The common method of reconstructing a turbulence-degraded scene is to create an artificial reference image, usually obtained by averaging the video over time. Computing optical flow from this reference image to the input frames enables applications such as super-resolution, tracking, and so forth.
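As a rough illustration of the baseline described above, the following sketch (assumed implementation, not from the talk) builds a reference frame by temporal averaging with NumPy; the function and variable names are hypothetical.

```python
import numpy as np

def temporal_reference(frames):
    """Average a stack of frames (T, H, W) into a single reference image."""
    stack = np.asarray(frames, dtype=np.float64)
    return stack.mean(axis=0)

# Toy example: a 2x2 "scene" observed through random turbulent jitter.
rng = np.random.default_rng(0)
scene = np.array([[10.0, 20.0], [30.0, 40.0]])
frames = [scene + rng.normal(0.0, 1.0, scene.shape) for _ in range(100)]
reference = temporal_reference(frames)
# Averaging suppresses the random distortions, but it also blurs genuine
# detail -- the drawback discussed next in the abstract.
```

In practice, optical flow would then be computed from this reference toward each input frame (e.g. with a dense flow estimator) to warp frames for super-resolution or tracking.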
However, this technique suffers from several drawbacks: the artificial reference frame is blurred, so the computed optical-flow fields are imprecise and degrade the applications built on them; moreover, the approach does not account for camera motion or for motion within the scene.
We present a mathematical framework for reconstructing the scene as it would appear without turbulence, yielding an observable live video output. We then use both the reconstructed frames and the optical-flow fields to obtain the aforementioned applications (tracking, super-resolution, mosaics) while handling camera motion, and we outline guidelines for dealing with in-scene motion inherently.