Michael Baltaxe, M.Sc. Seminar Lecture
Tuesday, 8.10.2013, 11:30
The goal of image oversegmentation is to divide an image into several pieces, or "segments", such that each segment is part of a single object present in the scene. In contrast to image segmentation algorithms, an oversegmentation algorithm may output more segments than there are objects in the image. Oversegmentation is a common preprocessing step for many computer vision tasks. In this work we study image oversegmentation and develop new algorithms.
In the first part of our work, we analyze the local variation (LV) algorithm, one of the most common algorithms for image oversegmentation. We show that all the components of LV are essential for achieving high performance, and that algorithms similar to LV can be devised by applying different statistical decisions. This leads us to introduce probabilistic local variation (pLV), a new algorithm based on statistics of natural images and on a hypothesis-testing decision. pLV achieves state-of-the-art results (for fine oversegmentation) while keeping the same computational complexity as the LV algorithm, and is in practice one of the fastest oversegmentation methods in the literature.
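For readers unfamiliar with LV, the following is a minimal sketch of the classic local-variation merge rule of Felzenszwalb and Huttenlocher, which the work above builds on: edges of a grid graph are processed in order of increasing weight, and two components are merged when the connecting edge is no heavier than the internal variation of either component plus a size-dependent slack k/|C|. The toy image, the choice of k, and the 4-neighbour graph are illustrative; the pLV variant replaces this fixed threshold with a hypothesis test, which is not reproduced here.

```python
# Sketch of the LV (Felzenszwalb-Huttenlocher) merge criterion on a
# 2-D grey-level image, using a union-find structure over pixels.

class DSU:
    def __init__(self, n):
        self.parent = list(range(n))
        self.size = [1] * n
        self.internal = [0.0] * n   # max edge weight inside each component

    def find(self, x):
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b, w):
        a, b = self.find(a), self.find(b)
        if self.size[a] < self.size[b]:
            a, b = b, a
        self.parent[b] = a
        self.size[a] += self.size[b]
        self.internal[a] = max(self.internal[a], self.internal[b], w)

def lv_segment(image, k=2.0):
    """image: 2-D list of grey levels; returns a component label per pixel."""
    h, w = len(image), len(image[0])
    idx = lambda y, x: y * w + x
    # Build a 4-connected grid graph with absolute intensity differences.
    edges = []
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                edges.append((abs(image[y][x] - image[y][x + 1]),
                              idx(y, x), idx(y, x + 1)))
            if y + 1 < h:
                edges.append((abs(image[y][x] - image[y + 1][x]),
                              idx(y, x), idx(y + 1, x)))
    dsu = DSU(h * w)
    for weight, u, v in sorted(edges):
        ru, rv = dsu.find(u), dsu.find(v)
        if ru == rv:
            continue
        # LV decision: merge if the connecting edge is no heavier than the
        # internal variation of both components, each relaxed by k/size.
        if (weight <= dsu.internal[ru] + k / dsu.size[ru]
                and weight <= dsu.internal[rv] + k / dsu.size[rv]):
            dsu.union(ru, rv, weight)
    return [dsu.find(idx(y, x)) for y in range(h) for x in range(w)]
```

On a tiny image with two flat regions, e.g. `[[0, 0, 10, 10], [0, 0, 10, 10]]`, the rule merges the zero-weight edges inside each region but rejects the weight-10 edges between them, yielding two components.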
The LV and pLV algorithms are, in essence, single-linkage clustering algorithms. In the second part of our work we restrict ourselves to this class of algorithms and propose three modifications that improve their accuracy. First, we use machine learning methods to learn dissimilarities between superpixels and use these dissimilarities as distances between clusters. Second, we introduce a multistage approach to compute robust features. Finally, we add a correction mechanism that incorporates global information to overcome mistakes introduced by the greedy decisions. The resulting algorithms are more accurate than pLV/LV but also slower; the choice of a particular algorithm therefore depends on the desired speed/accuracy tradeoff.
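The first modification can be sketched as follows: a generic single-linkage merging loop over superpixels in which the inter-cluster distance is supplied by a learned dissimilarity instead of a fixed colour difference. The feature vectors, the `learned_dissimilarity` function (here a placeholder weighted L1 distance), and the stopping rule are all hypothetical; the multistage features and the global correction mechanism of the thesis are not reproduced here.

```python
# Illustrative single-linkage merging over superpixels. Cluster distance
# is the smallest dissimilarity over adjacent superpixel pairs spanning
# two clusters, so processing pairs in increasing order of dissimilarity
# realises single linkage.
import heapq

def learned_dissimilarity(f1, f2, weights=(0.7, 0.3)):
    # Placeholder for a learned model: a weighted L1 distance over two
    # hypothetical per-superpixel features (e.g. mean colour, texture).
    return sum(w * abs(a - b) for w, a, b in zip(weights, f1, f2))

def single_linkage(features, adjacency, n_segments):
    """Greedily merge adjacent superpixels until n_segments remain.

    features:  {superpixel id: feature tuple}
    adjacency: iterable of (id, id) pairs of touching superpixels
    """
    parent = {i: i for i in features}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    heap = [(learned_dissimilarity(features[a], features[b]), a, b)
            for a, b in adjacency]
    heapq.heapify(heap)
    clusters = len(features)
    while heap and clusters > n_segments:
        _, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra
            clusters -= 1
    return {i: find(i) for i in features}
```

With four superpixels in a chain, where the two end pairs are similar and the middle pair is not, stopping at two segments groups each similar pair together.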