Or Litany (NVIDIA, Toronto AI lab)
Thursday, 14.1.2021, 17:00
Meeting ID: 958 1720 7725
In this talk I'll cover several works on 3D deep learning on pointclouds for scene understanding tasks. First, I'll describe VoteNet (ICCV 2019, best paper nomination): a method for object detection from 3D pointcloud input, inspired by the classical generalized Hough voting technique. I'll then explain how we integrated image information into the voting scheme to further boost 3D detection (ImVoteNet, CVPR 2020). In the second part of my talk I'll describe recent studies focusing on reducing supervision for 3D scene understanding tasks, including PointContrast -- a self-supervised representation learning framework for 3D pointclouds (ECCV 2020). Our findings in PointContrast are extremely encouraging: using a unified triplet of architecture, source dataset, and contrastive loss for pre-training, we improve over the recent best results in segmentation and detection across 6 different benchmarks covering indoor and outdoor, real and synthetic datasets -- demonstrating that the learned representation can generalize across domains.
Or Litany (PhD 2018, Tel-Aviv University) is a Research Scientist at NVIDIA's Toronto AI lab, led by Prof. Sanja Fidler. Before that, he was a postdoctoral fellow at Stanford University, hosted by Prof. Leonidas Guibas; a postdoc at Facebook AI Research, hosted by Prof. Jitendra Malik; and a postdoc at the Technion, hosted by Prof. Alex Bronstein. Or received his B.Sc. in Physics and Mathematics from the Hebrew University under the auspices of the "Talpiot" program. He holds M.Sc. (Magna Cum Laude) and Ph.D. degrees in Electrical Engineering from Tel-Aviv University (advised by Prof. Alex Bronstein). During his PhD, Or held visiting scholar appointments at TU Munich and Duke University and was a research intern at Microsoft Research, Intel, and Google Research. Or's main interests include 3D deep learning and methods for reducing supervision. He is the recipient of several awards, including a best paper award at SGP'16, a best paper nomination at ICCV'19, and a best paper award at ICML'20.