
Point correspondences with integer accuracy; regular sampling with noise (noise amplitude = 1 pixel).

The noise function we use is
2 * noise_amplitude * (rand - 0.5),
where rand returns random numbers drawn from a uniform distribution on the interval (0.0, 1.0), so the noise lies in (-noise_amplitude, +noise_amplitude).
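As a sketch, the perturbation above can be written as follows (NumPy's uniform generator stands in for rand; the grid used here is illustrative, not the page's actual sampling):

```python
import numpy as np

def add_noise(points, noise_amplitude=1.0, rng=None):
    """Perturb each coordinate by 2 * noise_amplitude * (rand - 0.5),
    i.e. uniform noise in (-noise_amplitude, +noise_amplitude)."""
    rng = np.random.default_rng() if rng is None else rng
    rand = rng.uniform(0.0, 1.0, size=np.shape(points))
    return np.asarray(points, dtype=float) + 2.0 * noise_amplitude * (rand - 0.5)

# Regularly sampled points with integer accuracy, perturbed by at most 1 pixel:
xs, ys = np.meshgrid(np.arange(0, 50, 10), np.arange(0, 50, 10))
grid = np.stack([xs.ravel(), ys.ravel()], axis=1)
noisy = add_noise(grid, noise_amplitude=1.0)
```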

 

Input:

Camera position view:

top view, side view, VRML code here

Camera parameters and corresponding points here.
Source image 1 (red camera)
test1.jpg
Source image 2 (green camera)
test2.jpg
Desired image (blue camera)
test3.jpg

Output:

Corresponding points' position estimation (calculation error is 0.0225).
P3: the original points of the third image. P3B: the calculated points. Both P3 and P3B are shown on one plot; if you don't see red points, the placement is very good.
plot_P3 plot_P3B plot_P3B_P3
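The reported error can be reproduced with a metric like the one below (the exact error measure used on this page is an assumption; mean Euclidean distance is a common choice):

```python
import numpy as np

def placement_error(P3, P3B):
    """Mean Euclidean distance between the original points P3 of the
    third image and the calculated points P3B (an assumed metric)."""
    P3 = np.asarray(P3, dtype=float)
    P3B = np.asarray(P3B, dtype=float)
    return float(np.mean(np.linalg.norm(P3 - P3B, axis=1)))

# Synthetic example: the calculated points almost coincide with the originals.
P3 = np.array([[0.0, 0.0], [10.0, 0.0], [10.0, 10.0]])
P3B = P3 + np.array([0.01, 0.01])
err = placement_error(P3, P3B)
```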

Delaunay triangulation of the points generated by the 8-point algorithm.
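The page does not show the estimation code, but the standard normalized 8-point algorithm it refers to can be sketched as follows (a sketch of the textbook method with Hartley normalization, not the page's actual implementation):

```python
import numpy as np

def normalize(pts):
    """Hartley normalization: move the centroid to the origin and
    scale so the mean distance from the origin is sqrt(2)."""
    pts = np.asarray(pts, dtype=float)
    c = pts.mean(axis=0)
    s = np.sqrt(2.0) / np.mean(np.linalg.norm(pts - c, axis=1))
    T = np.array([[s, 0.0, -s * c[0]],
                  [0.0, s, -s * c[1]],
                  [0.0, 0.0, 1.0]])
    h = np.column_stack([pts, np.ones(len(pts))])
    return (T @ h.T).T, T

def eight_point(p1, p2):
    """Normalized 8-point algorithm: estimate the fundamental matrix F
    with x2^T F x1 = 0 for corresponding points x1 (image 1) and
    x2 (image 2); at least 8 correspondences are required."""
    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    A = np.column_stack([
        x2[:, 0] * x1[:, 0], x2[:, 0] * x1[:, 1], x2[:, 0],
        x2[:, 1] * x1[:, 0], x2[:, 1] * x1[:, 1], x2[:, 1],
        x1[:, 0], x1[:, 1], np.ones(len(x1))])
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)                 # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                           # undo normalization
    return F / F[2, 2]
```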

Delaunay triangulation of the desired image applied to source images 1 and 2, respectively.

 

pos1_tri3_texture1.wrl pos2_tri3_texture2.wrl
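One way to reproduce this step (an assumption; scipy's Delaunay stands in for whatever triangulation code was used): triangulate the estimated points of the desired image, then reuse the same vertex-index triples on the corresponding points of each source image.

```python
import numpy as np
from scipy.spatial import Delaunay

# Estimated points of the desired image (toy example: a square + centre).
P3B = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0],
                [100.0, 100.0], [50.0, 50.0]])
tri = Delaunay(P3B)
triangles = tri.simplices      # (n_triangles, 3) indices into P3B

# Because the point order is consistent across images, the same index
# triples describe matching triangles in source images 1 and 2.
```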

The desired image received from source images 1 and 2, respectively.

pos3_tri3_texture1.wrl (VRML code here) pos3_tri3_texture2.wrl (VRML code here)
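Rendering a desired-image triangle from a source image amounts to warping the source triangle's texture through the affine map between the two triangles. A minimal sketch of computing that map (the page's actual warping code is not shown; the triangles below are illustrative):

```python
import numpy as np

def affine_from_triangles(src_tri, dst_tri):
    """2x3 affine matrix A with dst = A @ [x, y, 1], mapping the three
    source-triangle vertices onto the destination-triangle vertices."""
    src = np.asarray(src_tri, dtype=float)   # shape (3, 2)
    dst = np.asarray(dst_tri, dtype=float)   # shape (3, 2)
    M = np.column_stack([src, np.ones(3)])   # rows [x, y, 1]
    return np.linalg.solve(M, dst).T         # shape (2, 3)

# Map a triangle of source image 1 onto its triangle in the desired image.
A = affine_from_triangles([[0, 0], [1, 0], [0, 1]],
                          [[2, 3], [4, 3], [2, 6]])
```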

The first decision map, showing how to merge the results we get from source images 1 and 2:

pos3_tri3_join_case_map.wrl

red: take the current triangle's texture from source 1.
green: take the current triangle's texture from source 2.
black: the current triangle's texture can be taken from either source 1 or source 2.
white: texture cannot be taken from either source.
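The colour coding above suggests a simple per-triangle rule; a hypothetical sketch (the visibility flags are assumptions about how texture availability is decided):

```python
def join_case(visible_in_1, visible_in_2):
    """First decision map: colour-code where one triangle's texture can
    come from, given (assumed) per-source visibility flags."""
    if visible_in_1 and visible_in_2:
        return "black"   # texture available in both sources
    if visible_in_1:
        return "red"     # take texture from source 1
    if visible_in_2:
        return "green"   # take texture from source 2
    return "white"       # texture available in neither source
```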

The second decision map, showing how to merge the results we get from source images 1 and 2:

pos3_tri3_join_map.wrl

red: take the current triangle's texture from source 1.
green: take the current triangle's texture from source 2.
white: texture cannot be taken from either source.

The desired image obtained from source images 1 and 2 together.

pos3_tri3_joined.wrl (VRML code here)
