Tuesday, 10.11.2015, 11:30
Over the last few years we have been developing techniques for analyzing small motions in videos. Our techniques are based on an Eulerian approach to motion processing, which does not explicitly compute motion vectors (as traditionally done in computer vision) but instead analyzes intensity changes at image pixels over time using a combination of spatial and temporal filtering. This lets us efficiently turn light measurements into motion measurements and use ordinary cameras for a variety of motion-sensing applications involving tiny motions.
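As a rough illustration of the Eulerian idea, the sketch below (my own minimal example, not the authors' implementation, which uses spatial pyramids and more careful filter design) temporally bandpass-filters each pixel's intensity signal and adds the amplified band back to the video:

```python
import numpy as np

def eulerian_magnify(frames, fs, f_lo, f_hi, alpha):
    """Amplify subtle temporal intensity changes in a video.

    frames: float array of shape (T, H, W), pixel intensities over time
    fs:     frame rate in Hz
    f_lo, f_hi: passband (Hz) of the motions of interest
    alpha:  amplification factor
    """
    T = frames.shape[0]
    # Temporal FFT along the time axis, independently per pixel
    spectrum = np.fft.fft(frames, axis=0)
    freqs = np.fft.fftfreq(T, d=1.0 / fs)
    # Ideal bandpass: zero out all frequencies outside [f_lo, f_hi]
    keep = (np.abs(freqs) >= f_lo) & (np.abs(freqs) <= f_hi)
    spectrum[~keep] = 0
    bandpassed = np.real(np.fft.ifft(spectrum, axis=0))
    # Add the amplified band back to the original frames
    return frames + alpha * bandpassed
```

The key point is that no motion vectors are computed: the per-pixel intensity time series itself is filtered, which is what makes the Eulerian approach cheap enough for these applications.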
In this talk I will describe our processing algorithms and show how we use them both to visualize and to analyze small (often imperceptible) motions in videos. I will focus on two of our more recent works, on passive sound recovery (SIGGRAPH 2014) and material property estimation (CVPR 2015), in which we use a camera to extract minute sound-induced vibrations on objects' surfaces, allowing us to passively recover the sounds near those objects (from high-speed, silent video alone) and to infer relative properties of objects such as their area weight and elasticity.
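To give a flavor of the sound-recovery setting, here is a deliberately crude toy sketch (my own, for illustration only; the actual "visual microphone" work measures local phase in a complex steerable pyramid rather than raw intensities): it collapses each frame of a silent video into one sample of a global vibration signal, sampled at the camera's frame rate.

```python
import numpy as np

def recover_vibration_sketch(frames):
    """Toy recovery of a global vibration signal from silent video.

    frames: float array (T, H, W) of pixel intensities.
    Returns a 1-D signal with one sample per frame. Each frame's
    deviation from the mean frame is spatially averaged, weighted by
    the gradient magnitude of the mean frame, since intensity changes
    carry motion information mostly at edges.
    """
    mean_frame = frames.mean(axis=0)
    gy, gx = np.gradient(mean_frame)
    w = np.hypot(gx, gy)
    w /= w.sum() + 1e-12
    signal = ((frames - mean_frame) * w).sum(axis=(1, 2))
    # Remove DC and normalize to [-1, 1] for playback
    signal -= signal.mean()
    peak = np.max(np.abs(signal))
    return signal / peak if peak > 0 else signal
```

Even this crude averaging recovers the waveform of a sub-pixel oscillation of a textured surface, which is the intuition behind treating vibrating objects as visual microphones.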
The talk will cover work done in collaboration with Bill Freeman, Fredo Durand, Neal Wadhwa, Abe Davis, Katie Bouman, Justin Chen and Gautham Mysore.