Technion - Israel Institute of Technology Computer Science Department Center for Graphics and Geometric Computing

Novel View Generation


# Introduction:

This project addresses the problem of synthesizing a novel image of a scene, from an arbitrary viewing position, given two reference images of that scene.
To generate the third image we use the camera's parameters (focal length, principal point, etc.), the camera's position (translation and rotation components), and corresponding points between the two source images.


# Solution:

Our solution involves three steps.

First, we compute the fundamental matrix between the two source images using the eight-point algorithm. This lets us derive the relative translation and rotation between the two source views. Combining this derived motion with what is already known (the relative translation and rotation from the first source image to the third, desired image), we compute the fundamental matrix from the second source image to the third image.
Second, we compute the fundamental matrix between the first source image and the third image (here, the relative translation and rotation between these two views are given as input).
Finally, we take the two fundamental matrices and the corresponding points and compute the third, desired image.
Note that all computations are performed in the third image's coordinate system, and all synthetic images are rendered in perspective mode.
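The eight-point step above can be sketched in NumPy. This is a minimal illustration of the normalized eight-point algorithm (with Hartley's normalization); the project's actual Matlab code may differ in its details:

```python
import numpy as np

def eight_point_fundamental(pts1, pts2):
    """Estimate the fundamental matrix F from N >= 8 correspondences
    (two Nx2 arrays), so that x2^T F x1 ~ 0 for matching points."""
    def normalize(pts):
        # Translate the centroid to the origin and scale so the
        # mean distance from it is sqrt(2) (Hartley normalization).
        c = pts.mean(axis=0)
        d = np.sqrt(((pts - c) ** 2).sum(axis=1)).mean()
        s = np.sqrt(2.0) / d
        T = np.array([[s, 0.0, -s * c[0]],
                      [0.0, s, -s * c[1]],
                      [0.0, 0.0, 1.0]])
        ph = np.column_stack([pts, np.ones(len(pts))]) @ T.T
        return ph, T

    p1, T1 = normalize(pts1)
    p2, T2 = normalize(pts2)
    # Each correspondence contributes one row to the system A f = 0.
    A = np.column_stack([
        p2[:, 0] * p1[:, 0], p2[:, 0] * p1[:, 1], p2[:, 0],
        p2[:, 1] * p1[:, 0], p2[:, 1] * p1[:, 1], p2[:, 1],
        p1[:, 0], p1[:, 1], np.ones(len(p1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2: a valid fundamental matrix is singular.
    U, S, Vt = np.linalg.svd(F)
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    # Undo the normalization and fix the overall scale.
    F = T2.T @ F @ T1
    return F / np.linalg.norm(F)
```

With noise-free correspondences the returned matrix satisfies the epipolar constraint x2^T F x1 = 0 up to numerical precision, which is the property the later steps rely on.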
[Figure: the solution's scheme]

After generating the points of the desired image, we triangulate them using a Delaunay triangulation and map the texture according to the corresponding points.
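The triangulate-and-texture step can be sketched as follows, assuming SciPy's Delaunay triangulation; the point coordinates below are hypothetical, and the per-triangle texture pairing is schematic rather than the project's actual Matlab code:

```python
import numpy as np
from scipy.spatial import Delaunay

def triangulate_and_pair(pts_desired, pts_source):
    """Triangulate the desired image's 2D points and pair each
    triangle with its counterpart in a source image. Because the
    point lists are index-aligned, the same vertex indices select
    the matching texture triangle in the source image."""
    tri = Delaunay(pts_desired)
    pairs = [(pts_source[s], pts_desired[s]) for s in tri.simplices]
    return tri.simplices, pairs

# Hypothetical points: a unit square plus its center in the desired
# view, and a slightly shifted copy standing in for a source view.
pts3 = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.], [.5, .5]])
pts1 = pts3 + np.array([0.1, 0.0])
simplices, pairs = triangulate_and_pair(pts3, pts1)
```

Each `(source, desired)` triangle pair then defines an affine warp that copies texture from a source image into the synthesized view.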

Additional details (in Hebrew) are available as doc and pdf.


# Implementation:

1. The Matlab code implements the solution described above (function main).
2. The viewer program generates the input to the Matlab code: it loads VRML files and renders textured perspective images from different positions.


# User's Manual:

The project implementation is available here.
1. Unzip the code.zip file.
2. Run code/input/SimpleViewer.exe and load the code/input/test.wrl example file.
4. Build three different images:

4.1 Choose the position of the object using the viewer's buttons.
4.2 Use the save buttons to store the current position (the first saves the image in .bmp format and writes the current object position to an output.m file; the second writes the calibration and perspective-projection matrices to the output.m file).

Note: the translations between each pair of images must be in non-collinear directions.
The SimpleViewer creates an output.m file which contains the following data for the Matlab program:

K - calibration matrix
PM - perspective projection matrix
TMi - transformation matrices
P<l,h,ld,hd>ki - point sets at different accuracies and densities; these are the 2D points computed by SimpleViewer as Pi = K*PM*TMi*P, where P denotes the original 3D object points.
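The projection Pi = K*PM*TMi*P that produces these point sets can be sketched in NumPy. All matrix values below are hypothetical stand-ins for SimpleViewer's actual output.m data:

```python
import numpy as np

# Hypothetical stand-ins for the matrices written to output.m.
K = np.array([[500., 0., 320.],      # calibration matrix
              [0., 500., 240.],
              [0., 0., 1.]])
PM = np.hstack([np.eye(3), np.zeros((3, 1))])  # perspective projection (3x4)
TM = np.eye(4)                                  # object transformation
TM[:3, 3] = [0., 0., 5.]                        # move the object along z

# One 3D object point in homogeneous coordinates.
P = np.array([0.2, -0.1, 0.0, 1.0])

# Pi = K * PM * TMi * P, then divide by the third (homogeneous)
# coordinate to obtain pixel coordinates.
p = K @ PM @ TM @ P
pixel = p[:2] / p[2]                            # -> [340., 230.]
```

Integer versus double accuracy then simply amounts to whether `pixel` is rounded before being written to output.m.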

5. Open Matlab in the code/ directory.
6. Open main.m and update the following parameters:
• Choose the points' accuracy level by updating the use_double parameter:
0 - integer accuracy.
1 - double accuracy.
• Choose the density of the image's corresponding points by updating the is_dense parameter:
0 - fewer corresponding points in the image.
1 - more corresponding points in the image.
• Update the use_noise parameter to apply noise to the corresponding points:
0 - no noise.
1 - apply noise.
• Set the noise amplitude (in pixels) by updating the noise_const parameter.

7. Run the main.m function.
8. The results are written to the code/output/ directory:
test1.jpg : source image 1.
test2.jpg : source image 2.
test3.jpg : the desired image.
pos1_tri3_texture1.wrl : the Delaunay triangulation of the desired image applied to source image 1.
pos2_tri3_texture2.wrl : the Delaunay triangulation of the desired image applied to source image 2.
pos3_tri3_texture1.wrl : the desired image reconstructed from source image 1.
pos3_tri3_texture2.wrl : the desired image reconstructed from source image 2.
pos3_tri3_join_case_map.wrl : this map is the first phase in deciding how to merge the results obtained from source images 1 and 2:
red - take the current triangle's texture from source 1.
green - take the current triangle's texture from source 2.
black - the current triangle's texture can be taken from either source.
white - the texture cannot be taken from either source.
pos3_tri3_join_map.wrl : this map is the second phase in deciding how to merge the results obtained from source images 1 and 2:
red - take the current triangle's texture from source 1.
green - take the current triangle's texture from source 2.
white - the texture cannot be taken from either source.
pos3_tri3_joined.wrl : the desired image reconstructed from both source images.
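The four-way decision encoded in the case map can be sketched as a small helper. The boolean visibility flags are hypothetical inputs; the actual Matlab code derives them from the two reconstructions:

```python
def case_map_color(has_texture_1, has_texture_2):
    """Return the case-map color for one triangle, given whether a
    valid texture exists for it in source image 1 and/or 2."""
    if has_texture_1 and has_texture_2:
        return "black"   # either source may be used
    if has_texture_1:
        return "red"     # texture must come from source 1
    if has_texture_2:
        return "green"   # texture must come from source 2
    return "white"       # no source can supply the texture
```

The second-phase join map then resolves the "black" cases to a single source, leaving only red, green, and white.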

# Results:

Example 1: corresponding points with double accuracy and dense sampling. More details here.
[Figures: the desired image and the generated image.]

Example 2: corresponding points with integer accuracy and regular sampling, with noise. More details here.
[Figures: the desired image and the generated image.]

# References:

Richard Hartley and Andrew Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2000.


# Authors:

Orit Cohen: oritc@cs.technion.ac.il

This work was done as a semester project in the Computer Graphics course of the Faculty of Computer Science, under the supervision of Prof. Craig (Chaim) Gotsman and Dr. Ilan Shimshoni.


October 2002