Pixel Club: Detecting Similar Actions across Videos Using a View- and Appearance-Invariant Video Descriptor

Michal Yarom (Weizmann Institute of Science)
Tuesday, 18.4.2017, 11:30
EE Meyer Building 1061

The ability to detect similar actions across videos is useful for many real-world applications. In this talk I will describe the "temporal-needle" descriptor we developed, which captures the dynamic behavior in a video while remaining invariant to viewpoint and appearance. Using this descriptor, we were able to detect the same behavior across videos in a variety of scenarios. I will explain how the descriptor is computed and how it can be used to find good correspondences across videos. I will show examples of its use for tasks such as temporal and spatial alignment and action detection, and demonstrate its potential for unsupervised clustering of videos into action categories.

* This work was done under the supervision of Prof. Michal Irani.

Short Bio:
Michal Yarom is a Machine Learning researcher at Microsoft AI & Research. She received an MSc degree in Computer Science from the Weizmann Institute of Science, under the supervision of Prof. Michal Irani. Her thesis focused on detecting similar actions across videos; she also worked on developing a passive 3D display that is sensitive to light and viewpoint (with Haggai Maron and Prof. Anat Levin).