Ultrawide Foveated Video Extrapolation

Tamar Avraham and Yoav Y. Schechner, IEEE Journal of Selected Topics in Signal Processing, vol. 5, no. 3, 2011, in a Special Issue on Recent Advances in Processing for Consumer Displays

Consider the task of creating a very wide visual extrapolation, i.e., a synthetic continuation of the field of view much beyond the acquired data. Existing related methods deal mainly with filling in holes in images and video. These methods are very time-consuming and often prone to noticeable artifacts. The probability of artifacts grows as the synthesized regions become more distant from the domain of the raw video. Therefore, such methods do not lend themselves easily to very large extrapolations. We suggest an approach to enable this task. First, an improved completion algorithm that rejects peripheral distractions significantly reduces attention-drawing artifacts. Second, a foveated video extrapolation approach exploits weaknesses of the human visual system, enabling efficient video extrapolation while further reducing attention-drawing artifacts. Consider a screen showing the raw video. Let the region beyond the raw video domain reside outside the field corresponding to the viewer's fovea. Then, the farther the extrapolated synthetic region is from the raw field of view, the more the spatial resolution can be reduced. This enables image synthesis using spatial blocks that become gradually coarser and significantly fewer (per unit area) as the extrapolated region expands. The substantial reduction in the number of synthesized blocks notably speeds the process and increases the probability of success without distracting artifacts. Furthermore, a user study yields results that support the foveated approach.
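
The block-sizing argument above can be made concrete with a small numerical sketch. The Python example below assumes a simple linear growth of block edge length with distance from the raw-video boundary; the base block size, fovea radius, and growth rate are hypothetical placeholders, not the schedule used in the paper.

```python
def block_size(distance_px, base_block=16, fovea_radius=200, growth=0.05):
    """Hypothetical schedule: block edge length grows linearly with the
    distance of a synthesized block from the raw-video boundary, mirroring
    the reduced spatial resolution tolerated by peripheral vision."""
    if distance_px <= fovea_radius:
        return base_block
    return int(base_block * (1.0 + growth * (distance_px - fovea_radius)))


def blocks_per_area(distance_px, **kwargs):
    """Synthesized blocks per unit area fall off quadratically as the
    block edge length grows."""
    s = block_size(distance_px, **kwargs)
    return 1.0 / (s * s)


if __name__ == "__main__":
    # Compare block sizes near the raw field of view and far outside it.
    for d in (0, 300, 600, 1200):
        s = block_size(d)
        print(f"distance {d:5d} px -> block {s:3d} px, "
              f"blocks per 10^4 px^2: {1e4 * blocks_per_area(d):.1f}")
```

The quadratic drop in blocks per unit area as blocks grow coarser is what makes synthesis in the far periphery much cheaper than a uniform-resolution completion would be.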


To download a movie introducing the method, click here.

To download a few video and image examples, click here. See the Readme file. These videos can be used in publications provided that the above paper is cited.

 

Multi-Scale Ultrawide Foveated Video Extrapolation

Amit Aides, Tamar Avraham and Yoav Y. Schechner, IEEE-ICCP, 2011

The foveated extrapolation method described above (IEEE-JSTSP 2011) was improved by Amit Aides, who implemented a solution that takes a multi-scale approach to the wide extrapolation problem.
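
Since only the general idea of a multi-scale strategy is summarized here, the sketch below illustrates a generic coarse-to-fine extrapolation rather than the ICCP 2011 algorithm: the wide canvas is synthesized at a coarse pyramid level first and then refined at successively finer levels. The completion step is a placeholder (simple border replication), and all parameter names are assumptions.

```python
import cv2
import numpy as np


def extrapolate_coarse_to_fine(frame, pad, levels=3):
    """Generic coarse-to-fine sketch (placeholder, not the ICCP 2011 method):
    extend a BGR frame by `pad` pixels on the left and right, synthesizing
    the margins at the coarsest pyramid level first and refining upward.
    The margin "completion" here is a stand-in (border replication); a real
    system would use example-based synthesis at each level."""
    h, w = frame.shape[:2]
    canvas = None
    for lvl in reversed(range(levels)):              # coarse -> fine
        s = 2 ** lvl
        cw, ch = (w + 2 * pad) // s, h // s          # canvas size at this level
        small = cv2.resize(frame, (w // s, ch))      # raw frame at this level
        if canvas is None:
            canvas = np.zeros((ch, cw, 3), frame.dtype)
        else:
            canvas = cv2.resize(canvas, (cw, ch))    # upsample previous result
        x0 = (cw - small.shape[1]) // 2
        canvas[:, x0:x0 + small.shape[1]] = small    # keep the raw video intact
        # Placeholder completion of the left and right margins.
        canvas[:, :x0] = canvas[:, x0:x0 + 1]
        canvas[:, x0 + small.shape[1]:] = canvas[:, x0 + small.shape[1] - 1:
                                                 x0 + small.shape[1]]
    return canvas
```

Working from coarse to fine keeps most of the synthesis on small images; refer to the ICCP 2011 paper for the actual algorithm and its evaluation.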


To download a few video examples comparing results of the inside-in method (IEEE-JSTSP 2011) and the outside-in method (IEEE-ICCP 2011), as well as a description of a user study, click here. These videos can be used in publications provided that the above paper is cited.