Trevor Darrell (UC Berkeley)
Tuesday, 28.2.2017, 14:30
Learning of layered or "deep" representations has provided significant advances in computer vision in recent years, but has traditionally been limited to fully supervised settings with very large amounts of training data. New results show that such methods can also excel when learning in sparse/weakly labeled settings across modalities and domains. I'll review state-of-the-art models for fully convolutional pixel-dense segmentation from weakly labeled input, and will discuss new methods for adapting deep recognition models to new domains with few or no target labels for categories of interest. As time permits, I'll present recent long-term recurrent network models that can learn cross-modal description and explanation.
Prof. Darrell is on the faculty of the CS Division of the EECS Department at UC Berkeley. He leads Berkeley's DeepDrive Industrial Consortia, is co-Director of the Berkeley Artificial Intelligence Research (BAIR) lab, and is Faculty Director of PATH at UC Berkeley. Darrell's group develops algorithms for large-scale perceptual learning, including object and activity recognition and detection, for a variety of applications including multimodal interaction with robots and mobile devices. His interests include computer vision, machine learning, natural language processing, and perception-based human-computer interfaces. Prof. Darrell previously led the vision group at the International Computer Science Institute in Berkeley, and was on the faculty of the MIT EECS department from 1999 to 2008, where he directed the Vision Interface Group. He was a member of the research staff at Interval Research Corporation from 1996 to 1999, and received the S.M. and Ph.D. degrees from MIT in 1992 and 1996, respectively. He obtained the B.S.E. degree from the University of Pennsylvania in 1988.