Prof. Kenichi Kanatani (Computer Science, Okayama University, Okayama, Japan)
Tuesday, 29.12.2009, 11:30
Fitting an algebraic equation to observed data is one of the first
steps of many computer vision applications. For example, we fit lines
and curves to points in 2D, and planes and surfaces in 3D. Computing
the fundamental matrix or the homography matrix can also be
viewed as fitting in a high-dimensional space.
A naive approach is least squares, also known as ``algebraic
distance minimization'': one minimizes the sum of squares of terms that
should be zero in the absence of noise, and the solution is obtained
by algebraic operations without iterations. However, this method is known
to be of low accuracy, and the known high-accuracy methods are based on
maximum likelihood (ML).
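As an illustration of the naive baseline described above (not the speaker's proposed method), here is a minimal sketch of algebraic least-squares conic fitting in Python with NumPy: the unit-norm coefficient vector minimizing the sum of squared algebraic residuals is the right singular vector of the design matrix associated with the smallest singular value. The function name and the circle test data are hypothetical, chosen only for this example.

```python
import numpy as np

def fit_conic_lsq(x, y):
    """Fit a conic a*x^2 + b*x*y + c*y^2 + d*x + e*y + f = 0.

    Minimizes the sum of squared algebraic residuals subject to
    the normalization ||theta|| = 1; the minimizer is the right
    singular vector of the design matrix with the smallest
    singular value.
    """
    M = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(M)
    return Vt[-1]  # unit-norm coefficient vector (a, b, c, d, e, f)

# Noisy points on the unit circle x^2 + y^2 = 1 (a special ellipse)
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2 * np.pi, 30, endpoint=False)
x = np.cos(t) + 0.01 * rng.standard_normal(t.size)
y = np.sin(t) + 0.01 * rng.standard_normal(t.size)
theta = fit_conic_lsq(x, y)
# Up to scale, the recovered coefficients should be close to
# (1, 0, 1, 0, 0, -1) for the unit circle.
```

The solution is closed-form, which is exactly the attraction of algebraic fitting; the talk's point is that its accuracy can be poor compared with ML unless the scheme is carefully adjusted.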
The problem with all ML-based methods, including bundle adjustment, FNS,
and HEIV, is that they require iterations, which may not converge in
the presence of large noise. Also, an appropriate initial guess is
necessary to start the iterations. Thus, an algebraic procedure
that yields a highly accurate solution, even though it is not exactly
optimal, is very much desired.
In this talk, we propose a new approach to improving algebraic
fitting based on least squares to the utmost accuracy, which we call
``hyperaccuracy''. We analyze the error of a general parameterized
algebraic fitting scheme up to high-order noise terms and adjust the
parameters so that the resulting solution has the highest accuracy.
We apply our analysis to ellipse fitting and homography computation.
By simulations, we demonstrate that our approach indeed produces
accurate solutions even in highly noisy situations where ML-based
iterations fail to converge.