
Algorithms and bounds

Setting processing priorities is often required when a person or a machine analyzes a scene. Taking a stochastic approach, we modeled object identities as a set of correlated random variables, which led to an effective object-search algorithm. A related model was used to characterize the inherent difficulty of search tasks via a metric-space cover (similar to Kolmogorov's epsilon-covering), and to derive (tight) bounds on the performance of all search algorithms.

T. Avraham and M. Lindenbaum, Attention-based Dynamic Visual Search using Inner-scene Similarity: Algorithms and Bounds. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(2):251-264, 2006. See also ECCV04.

T. Avraham and M. Lindenbaum, Esaliency (Extended Saliency): Meaningful Attention Using Stochastic Image Modeling. IEEE Trans. Pattern Analysis and Machine Intelligence, 32(4), 693-708, 2010.
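
To convey the flavor of the metric-cover characterization above, here is a toy sketch: a greedy estimate of the size of an epsilon-cover of a set of candidate feature vectors, used as a difficulty proxy. The greedy construction and all names are my illustration, not the papers' algorithms.

```python
import numpy as np

def eps_cover_size(points, eps):
    """Greedy estimate of the size of an epsilon-cover of a point set.

    A larger cover means the candidates are more spread out in feature
    space, which (in the spirit of the model above) predicts a harder
    search task.
    """
    points = np.asarray(points, dtype=float)
    centers = []
    uncovered = np.ones(len(points), dtype=bool)
    while uncovered.any():
        i = int(np.argmax(uncovered))        # first still-uncovered point
        centers.append(points[i])
        d = np.linalg.norm(points - points[i], axis=1)
        uncovered &= d > eps                 # points within eps are now covered
    return len(centers)

# Tightly clustered candidates -> small cover (easy search);
# spread-out candidates -> large cover (hard search).
tight = np.random.RandomState(0).normal(0.0, 0.05, size=(100, 2))
spread = np.random.RandomState(1).uniform(-1.0, 1.0, size=(100, 2))
print(eps_cover_size(tight, 0.2), eps_cover_size(spread, 0.2))
```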


It turned out that the predictions developed in this quantitative computer-vision context are relevant to human vision as well: a search task is harder for human observers if its associated metric cover is larger.

Y. Yeshurun, T. Avraham and M. Lindenbaum, Predicting Visual-search Performance by Quantifying Stimuli Similarities. Journal of Vision, 8(4):1-22, 2008.

Analysis (and some algorithms) of Object Recognition Tasks

To analyze recognition performance, we used a simple model for edge detection (see figure) to predict how many edge points are required to decide, with some pre-specified reliability, that some familiar object is in the scene. The answer is algorithm-independent; it depends on factors such as object similarity, transformation class, noise, and reliability, and quantifies many intuitive observations, such as "similar objects are harder to discriminate" or "higher reliability requires more data." The bounds are useful for setting the required data-subset size when designing a new algorithm, for analyzing reported results by comparing them to the theoretical bounds, for inferring performance when conditions change, and for ranking recognition tasks by their difficulty.

M. Lindenbaum, Bounds on Shape Recognition Performance. IEEE Trans. on PAMI-17, No. 7, pp. 666-680, 1995.

M. Lindenbaum, An Integrated Model for Evaluating the Amount of Data Required for Reliable Recognition, IEEE Trans. on PAMI-19, No. 11, pp. 1251-1264, 1997.
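
The flavor of such data-requirement bounds can be shown with a toy calculation. The Hoeffding-style majority-vote bound below is my own illustration (not the papers' model), but it exhibits the same qualitative behavior: more data for higher reliability, and much more data for similar objects.

```python
import math

def required_points(p_correct, delta):
    """Hoeffding-style bound (illustrative only) on the number of
    independent edge-point measurements needed so that a majority
    vote errs with probability at most delta.

    p_correct: probability a single edge point favors the true object;
    values near 0.5 model very similar, hard-to-discriminate objects.
    """
    margin = p_correct - 0.5
    if margin <= 0:
        raise ValueError("need p_correct > 0.5")
    return math.ceil(math.log(1.0 / delta) / (2.0 * margin ** 2))

# Higher reliability requires more data ...
print(required_points(0.7, 1e-2), required_points(0.7, 1e-4))
# ... and similar objects (p near 0.5) are harder to discriminate.
print(required_points(0.55, 1e-2), required_points(0.9, 1e-2))
```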

Similar (but less tight) results may be derived by observing the inherent similarity between recognition and learning tasks, using the PAC (Probably Approximately Correct) learning model and finding the VC-dimension of a related concept class.

M. Lindenbaum and S. Ben-David, Applying VC-dimension Analysis to Object Recognition. Proceedings of the 3rd European Conference on Computer Vision, 1994, pp. 239-240

Grouping-based Nonadditive Verification

Perceptual grouping can be informative for object recognition as well. Intuitively, we can say that a good hypothesis should also be a good group. This criterion turns out to be powerful and significantly improves the reliability of the verification process.

A. Amir and M. Lindenbaum, Grouping-based Nonadditive Verification. IEEE Trans. on PAMI-20, No. 2, pp. 186-192, 1998.

Comparing Images under Varying Illumination, or: Why Are Gabor Jets So Effective?

Image comparison under variable illumination usually relies on gradient directions and therefore fails for images of smooth objects, where the gradient directions are noisy and unreliable. We model smooth surfaces and the implied images as correlated Gaussian processes, and treat the intensity difference due to an illumination change as colored noise. An image comparison based on whitening is proposed and found to work much better than gradient-direction-based methods. Interestingly, the commonly used Gabor jets may be interpreted as a combination of gradient-direction comparison and whitening, which is why they perform well for both rough and smooth surfaces.

M. Osadchy, M. Lindenbaum, and D. Jacobs, Whitening for Photometric Comparison of Smooth Surfaces under Varying Illumination. ECCV04, volume IV, pages 217-228, 2004.

M. Osadchy, D. Jacobs, and M. Lindenbaum, Surface Dependent Representations for Illumination Insensitive Image Comparison. IEEE Trans. Pattern Analysis and Machine Intelligence, 29(1):98-111, 2007.
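
A minimal 1-D sketch of the whitening idea (my own illustration with an assumed exponential covariance, not the papers' construction): the intensity difference between two views of a smooth surface is modeled as spatially correlated Gaussian noise with covariance C, so distances are computed after applying the whitening transform C^(-1/2).

```python
import numpy as np

rng = np.random.RandomState(0)
n = 64                                    # a patch treated as a 1-D signal
x = np.arange(n)
# Smooth surface -> long-range correlation (assumed exponential kernel).
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 8.0)

# Whitening transform W = C^{-1/2} via eigendecomposition.
w, V = np.linalg.eigh(C)
W = V @ np.diag(1.0 / np.sqrt(w)) @ V.T

def whitened_dist(a, b):
    """Euclidean distance after decorrelating the difference."""
    return float(np.linalg.norm(W @ (a - b)))

# Same signal under a correlated (illumination-like) perturbation vs.
# a genuinely different signal: whitening discounts the former.
base = rng.randn(n)
L = np.linalg.cholesky(C + 1e-9 * np.eye(n))
same = base + 0.5 * (L @ rng.randn(n))    # correlated perturbation
other = rng.randn(n)                      # unrelated signal
print(whitened_dist(base, same) < whitened_dist(base, other))
```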

Perceptual Organization - Analysis and algorithms

Mid-level perceptual organization processes induce some structure on the image. Grouping processes, for example, partition an image (or some partial representation of it, such as an edge map) into regions such that all data elements in one region correspond to a single imaged object. As in the recognition problem, my interest in grouping is mainly in modeling and prediction: suggesting probabilistic models, assessing their validity, and using them to derive predictions of grouping performance for specific algorithms.

Maximum Likelihood (Generic) Grouping Processes and Their Analysis

We proposed a generic grouping method based on a maximum-likelihood decision, and provided results relating the expected grouping quality to the cue reliability and to the computational effort invested.

A. Amir and M. Lindenbaum, Quantitative Analysis of Grouping Processes. IEEE Trans. on PAMI-20, No. 2, pp. 168-185, 1998. (A shorter version appeared in ECCV, pp. 371-384, 1996.)
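
As a toy illustration of the maximum-likelihood flavor (the Gaussian cue models, their parameters, and the greedy union-find merging are mine, not the paper's algorithm): a pairwise cue is evaluated under the "same group" and "different groups" hypotheses, and a pair is merged when the likelihood ratio favors "same".

```python
import numpy as np

def likelihood_ratio(c, mu_same=0.2, mu_diff=1.0, sigma=0.3):
    """Assumed Gaussian cue models: small cue values suggest 'same group'."""
    def gauss(v, mu):
        return np.exp(-0.5 * ((v - mu) / sigma) ** 2)
    return gauss(c, mu_same) / gauss(c, mu_diff)

def ml_group(points):
    """Greedy union-find over pairs whose likelihood ratio exceeds 1."""
    n = len(points)
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path compression
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            cue = np.linalg.norm(np.subtract(points[i], points[j]))
            if likelihood_ratio(cue) > 1.0:
                parent[find(i)] = find(j)
    return [find(i) for i in range(n)]

labels = ml_group([(0, 0), (0.1, 0), (5, 5), (5.1, 5)])
print(labels)
```
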

An Analysis of Connected Components Grouping Processes

We considered the extremely fast connected-components grouping process and used probabilistic models to analyze its performance. We derived the expected number of addition errors and the group-fragmentation rate, and showed that they depend on a few inherent and intuitive parameters. The analytic results are supported by implementing the algorithm and testing it on synthetic and real images.

A. Berengolts and M. Lindenbaum, On the Performance of Connected Components Grouping. IJCV 41(3), pp. 195-216, 2001.
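
The analyzed process itself is standard; a minimal 4-connected BFS labeling is sketched below, together with the kind of fragmentation event (a single dropped pixel splitting one group into two) whose rate the analysis predicts.

```python
from collections import deque

def connected_components(grid):
    """4-connected components labeling of a binary grid (BFS)."""
    h, w = len(grid), len(grid[0])
    label = [[0] * w for _ in range(h)]
    cur = 0
    for sy in range(h):
        for sx in range(w):
            if grid[sy][sx] and not label[sy][sx]:
                cur += 1                     # start a new component
                q = deque([(sy, sx)])
                label[sy][sx] = cur
                while q:
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w and \
                           grid[ny][nx] and not label[ny][nx]:
                            label[ny][nx] = cur
                            q.append((ny, nx))
    return cur, label

# One missing pixel fragments a line into two groups.
intact = [[1, 1, 1, 1, 1]]
broken = [[1, 1, 0, 1, 1]]
print(connected_components(intact)[0], connected_components(broken)[0])
```
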

A Probabilistic Interpretation of the Saliency Network

The saliency algorithm of Shashua and Ullman finds image curves by maximizing some combination of curve length, smoothness, and continuity. We proposed a probabilistic interpretation showing that the [SU] algorithm maximizes expected curve length. We provided several generalizations and then used the probabilistic model to analytically derive the distributions of saliency for the main curves ("figure") and for the rest of the image ("background"). The derived distributions show how to set saliency thresholds for deciding optimally between figure and background, how to choose saliency cues, and how to estimate bounds on saliency performance.

A. Berengolts and M. Lindenbaum, On the Distribution of Saliency. IEEE Trans. Pattern Analysis and Machine Intelligence, 28(12):1973–1990, 2006. See also ECCV00 and CVPR04 related publications.
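
A toy version of the saliency recurrence (my simplification, not the original network): each element's saliency is its local response plus a discounted contribution from its best continuation, so long smooth chains accumulate high saliency.

```python
import numpy as np

def saliency(local, succ, coupling, iters=50):
    """local[i]: local edge response; succ[i]: continuation candidates
    of element i; coupling[(i, j)]: smoothness factor in [0, 1]."""
    E = np.array(local, dtype=float)
    for _ in range(iters):
        new = np.array(local, dtype=float)
        for i, cands in enumerate(succ):
            if cands:
                # best continuation, discounted by the smoothness factor
                new[i] += max(coupling[(i, j)] * E[j] for j in cands)
        E = new
    return E

# A smooth 4-element chain (0 -> 1 -> 2 -> 3) vs. an isolated element 4.
local = [1.0, 1.0, 1.0, 1.0, 1.0]
succ = [[1], [2], [3], [], []]
coupling = {(0, 1): 0.9, (1, 2): 0.9, (2, 3): 0.9}
E = saliency(local, succ, coupling)
print(E)   # the chain head is far more salient than the isolated element
```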

An Information-based Measure for Grouping Quality

The evaluation of grouping results is not straightforward and is often heuristic. We proposed a method for measuring grouping quality, based on the following observation: a better grouping result provides more information about the true, unknown grouping. The amount of information is evaluated by the number of queries required to specify the true grouping, given the algorithm's result. We related this number to common information-theoretic notions such as surprise and entropy, showed that it is not biased toward particular types of grouping errors, and showed that it correlates well with human judgment.

E.A. Engbers, M. Lindenbaum and A.W.M. Smeulders, An Information-Based Measure for Grouping Quality. ECCV (3), pp. 392-404, 2004.
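
A simplified stand-in for the idea (not the paper's query-counting measure): use the conditional entropy H(true | result) as the residual uncertainty about the true grouping once the algorithm's result is known. A perfect result, even with renamed labels, leaves zero uncertainty.

```python
import math
from collections import Counter

def conditional_entropy(result, truth):
    """H(true label | result label), in bits, over the data elements."""
    n = len(truth)
    joint = Counter(zip(result, truth))
    res_marg = Counter(result)
    h = 0.0
    for (r, t), c in joint.items():
        p_rt = c / n                      # joint probability of (r, t)
        p_t_given_r = c / res_marg[r]     # truth distribution within r
        h -= p_rt * math.log2(p_t_given_r)
    return h

truth = [0, 0, 0, 1, 1, 1]
perfect = [5, 5, 5, 9, 9, 9]     # same partition, different label names
poor = [0, 1, 0, 1, 0, 1]        # nearly uninformative about the true split
print(conditional_entropy(perfect, truth), conditional_entropy(poor, truth))
```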

Figure-Ground Separation

To be added

Image processing

Adaptive and progressive image sampling

What is the best method for progressively sampling an image? We proposed a progressive sampling algorithm that retains (deterministic) uniformity as the sample density increases and places the samples irregularly, thus providing excellent anti-aliasing properties. The uniformity metric can be either image-independent or image-dependent, so an adaptive sampling scheme (such as the one in the image above) can be derived as well.

Yuval Eldar, Michael Lindenbaum, Moshe Porat and Yehoshua Y. Zeevi, The Farthest Point Strategy for Progressive Image Sampling, IEEE Transactions on Image Processing, 6, No. 9, pp. 1305-1315, 1997.
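
The core farthest-point rule is compact enough to sketch: each new sample goes to the pixel farthest from all previous samples, so the sample set stays uniform at every density while remaining irregular. The grid setup and seeding below are my own minimal framing.

```python
import numpy as np

def farthest_point_samples(h, w, k, seed=0):
    """Pick k progressive samples on an h x w grid by the farthest-point rule."""
    ys, xs = np.mgrid[0:h, 0:w]
    pts = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    rng = np.random.RandomState(seed)
    chosen = [rng.randint(len(pts))]                 # arbitrary first sample
    dist = np.linalg.norm(pts - pts[chosen[0]], axis=1)
    for _ in range(k - 1):
        nxt = int(np.argmax(dist))                   # farthest from current set
        chosen.append(nxt)
        # maintain each pixel's distance to its nearest chosen sample
        dist = np.minimum(dist, np.linalg.norm(pts - pts[nxt], axis=1))
    return pts[chosen]

samples = farthest_point_samples(32, 32, 16)
print(samples[:4])
```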

A generalized sampling scheme where every sample is obtained by an inner product between the image and some mask was recently developed.

Zvi Devir and Michael Lindenbaum, Blind Adaptive Sampling of Images, IEEE Transactions on Image Processing 21(4): 1478-1487 (2012).

Computational Tools

Fast PCA

PCA is very useful but sometimes computationally prohibitive. We reduced both the computational and storage requirements by exploiting the spatial and temporal correlations common in many image sets.

H. Murase and M. Lindenbaum, Spatial Temporal Adaptive Method for Partial Eigenstructure Decomposition of Large Image Matrices, IEEE Trans. on Image Processing, IP-4, No. 5, pp. 620-629, 1995.

We developed a special variant for sequential data (e.g. image stream). The algorithm is faster in typical applications than the usual batch algorithm and allows for dynamic updating of object databases for recognition.

A. Levy and M. Lindenbaum, Sequential Karhunen-Loeve Basis Extraction, IEEE Transactions on Image Processing 9(8), pp. 1371-1374, 2000. (An older version is in ICIP99.)
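
A sketch in the spirit of the sequential update (dimensions, noise level, and variable names are my own assumptions): a new batch of images is folded into an existing rank-k basis through the SVD of a small matrix, instead of re-running a batch SVD on all data seen so far.

```python
import numpy as np

rng = np.random.RandomState(0)
d, k = 50, 3

def update_basis(U, S, new_data, k):
    """U: d x k current basis; S: its k singular values; new_data: d x m."""
    proj = U.T @ new_data                     # coordinates in current basis
    resid = new_data - U @ proj               # component outside the basis
    Q, R = np.linalg.qr(resid)                # orthonormal complement
    # SVD of a small (k+m) x (k+m) matrix instead of the full data matrix.
    top = np.hstack([np.diag(S), proj])
    bot = np.hstack([np.zeros((R.shape[0], len(S))), R])
    Us, Ss, _ = np.linalg.svd(np.vstack([top, bot]), full_matrices=False)
    U_new = np.hstack([U, Q]) @ Us
    return U_new[:, :k], Ss[:k]

# Data lying near a 3-D subspace, arriving in two batches.
subspace = np.linalg.qr(rng.randn(d, k))[0]
batch1 = subspace @ rng.randn(k, 20) + 1e-6 * rng.randn(d, 20)
batch2 = subspace @ rng.randn(k, 20) + 1e-6 * rng.randn(d, 20)

U0, S0, _ = np.linalg.svd(batch1, full_matrices=False)
U, S = update_basis(U0[:, :k], S0[:k], batch2, k)
err = np.linalg.norm(subspace - U @ (U.T @ subspace))  # true subspace recovered?
print(err)
```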

The software package is unfortunately no longer functional.

Non-negative Matrix Factorization under the Earth Mover's Distance

To be added


Some Results in Learning Theory

S. Ben-David and M. Lindenbaum, Localization vs. Identification of Semi-Algebraic Sets. Machine Learning 32(3): 207-224 (1998). (Also in COLT93.)

S. Ben-David and M. Lindenbaum, Learning Distributions by their Density Levels - A Paradigm for Learning Without a Teacher. Journal of Computer and Systems Sciences, 1997, pp. 171-182.

Example Selection in the Context of Nearest Neighbor Classifiers

Often, obtaining examples for learning is easier than labeling them. In such cases, it is beneficial to actively select the examples for labeling, with the goal of reducing the overall labeling cost. We proposed a lookahead selective sampling process that selects the examples with the highest utility, using a random-field model to estimate the probabilities of the possible labels. We found empirically that the proposed algorithm outperforms other selective sampling methods in average error rate and performance stability.

M. Lindenbaum, S. Markovitch, and D. Rusakov, Selective Sampling for Nearest Neighbor Classifiers. Machine Learning 54(2): 125-152 (2004). (Also in AAAI99.)
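
For intuition only, here is a deliberately simplified selection rule (plain uncertainty-style sampling, not the paper's lookahead criterion or its random-field label model): label next the point whose nearest labeled neighbors disagree the most, breaking ties in favor of points far from the labeled set.

```python
import numpy as np

def select_next(X_unlabeled, X_labeled, y_labeled):
    """Index of the unlabeled point where a nearest-neighbor
    classifier is least certain (illustrative heuristic)."""
    scores = []
    for x in X_unlabeled:
        d = np.linalg.norm(X_labeled - x, axis=1)
        nn = np.argsort(d)[:3]                 # 3 nearest labeled points
        votes = y_labeled[nn]
        # small margin = strong disagreement among the neighbors
        margin = abs(int(np.sum(votes == votes[0])) -
                     int(np.sum(votes != votes[0])))
        scores.append((margin, d[nn[0]]))
    # least agreement first; break ties by distance from labeled data
    order = sorted(range(len(scores)),
                   key=lambda i: (scores[i][0], -scores[i][1]))
    return order[0]

X_lab = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
y_lab = np.array([0, 1, 1])
X_unl = np.array([[0.5, 0.3], [5.0, 5.0], [0.1, 0.1]])
print(select_next(X_unl, X_lab, y_lab))
```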

Computational Robotics

Geometric Probing

My thesis, supervised by Freddy Bruckstein, was concerned with algorithmic robotics. We considered geometric probing tasks, where one is asked to provide a strategy for finding an object's shape using a small number of simple measurements (probes). One part of my thesis provided optimal strategies for parallel probing, together with corresponding lower bounds. The other part extended traditional geometric probing, limited to polygonal objects, to smooth objects.

M. Lindenbaum and A. Bruckstein, Parallel Strategies for Geometric Probing. Journal of Algorithms 13, pp. 320-349, 1992.

M. Lindenbaum and A. Bruckstein, Blind Approximation of Planar Convex Shapes, IEEE Trans. on Robotics and Automation, RA-10, No. 4, pp. 517-529, 1994.
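
A toy illustration of one probe type (the construction is mine, not the thesis's strategies): a support-line probe from direction theta reports how far the object extends in that direction. Intersecting k equally spaced support half-planes gives an outer polygon whose area shrinks toward the true area as the probe count grows; the unit disk makes the convergence easy to check.

```python
import math

def outer_area(support, k):
    """Area of the polygon formed by k equally spaced support lines --
    an outer approximation of the probed convex shape."""
    verts = []
    for i in range(k):
        t0, t1 = 2 * math.pi * i / k, 2 * math.pi * (i + 1) / k
        h0, h1 = support(t0), support(t1)
        # Intersect consecutive support lines: x.u(t) = h(t).
        det = math.cos(t0) * math.sin(t1) - math.sin(t0) * math.cos(t1)
        x = (h0 * math.sin(t1) - h1 * math.sin(t0)) / det
        y = (math.cos(t0) * h1 - math.cos(t1) * h0) / det
        verts.append((x, y))
    area = 0.0                                 # shoelace formula
    for (x0, y0), (x1, y1) in zip(verts, verts[1:] + verts[:1]):
        area += x0 * y1 - x1 * y0
    return abs(area) / 2.0

disk = lambda t: 1.0   # unit disk: support value is 1 in every direction
print(outer_area(disk, 8), outer_area(disk, 128), math.pi)
```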

Distributed Ant Robotics

Inspired by ant communication, we considered a group of robots that communicate only by leaving traces, whose task is to cover an unmapped region (e.g., for cleaning). We developed discrete (graph-based) and continuous strategies that guarantee coverage even if some of the robots stop functioning and when the region's connectivity changes. One example is:

Israel A. Wagner, Michael Lindenbaum, and Alfred M. Bruckstein, Cooperative Covering by Ant-Robots Using Evaporating Traces.

For more, see the home page of Israel Wagner. For a demo, you might want to look at our Simulators.
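
A toy single-robot version of trace-based coverage (in the spirit of the trace-following rules above, with details simplified by me): the robot time-stamps the cell it leaves and always steps to the neighbor with the oldest mark, so with no map and no other memory it eventually visits every cell.

```python
def ant_cover(n, steps):
    """Single ant on an n x n grid; returns the set of visited cells."""
    trace = {}                                  # cell -> last-visit time
    pos = (0, 0)
    visited = {pos}
    for t in range(steps):
        trace[pos] = t                          # mark the cell being left
        x, y = pos
        nbrs = [(x + dx, y + dy)
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if 0 <= x + dx < n and 0 <= y + dy < n]
        # step to the least recently marked neighbor (unmarked = oldest)
        pos = min(nbrs, key=lambda c: trace.get(c, -1))
        visited.add(pos)
    return visited

covered = ant_cover(6, 2000)
print(len(covered))
```
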

Hand-eye coordination

K. Kinoshita and M. Lindenbaum, Robotic Control with Partial Visual Information, ICCV, pp. 883-888, 1998.