Tuesday, 21.3.2017, 11:30
Recurrent Neural Networks (RNNs) have had considerable success in classifying and predicting sequences. We demonstrate that RNNs can also be used to encode sequences into effective fixed-size representations. Our methodology is based on Fisher Vectors, where the RNNs serve as the generative probabilistic models and the partial derivatives are computed using backpropagation. State-of-the-art results are obtained in two central but distant tasks, both of which rely on sequences: video action recognition and image annotation. We also show a surprising transfer learning result from the task of image annotation to the task of video action recognition.
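The Fisher Vector idea described above, encoding a sequence by the gradient of its log-likelihood under a generative model, can be illustrated with a toy NumPy RNN. This is a minimal sketch under assumed choices (a tiny tanh RNN language model, manual backpropagation through time, L2 normalization), not the talk's actual models:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy RNN "generative model" over a small vocabulary (assumed setup,
# not the speaker's architecture): hidden size H, vocabulary size V.
V, H = 5, 4
Wxh = rng.normal(scale=0.1, size=(H, V))
Whh = rng.normal(scale=0.1, size=(H, H))
Why = rng.normal(scale=0.1, size=(V, H))
bh = np.zeros(H)
by = np.zeros(V)

def fisher_vector(seq):
    """Encode a symbol sequence as the gradient (up to sign) of its
    log-likelihood w.r.t. the RNN parameters, computed with
    backpropagation through time, then L2-normalized."""
    T = len(seq)
    hs, ps = {-1: np.zeros(H)}, {}
    # Forward pass: predict each symbol from the previous hidden state.
    for t in range(T):
        x = np.zeros(V)
        x[seq[t - 1]] = (t > 0)  # previous symbol as one-hot; zeros at t=0
        hs[t] = np.tanh(Wxh @ x + Whh @ hs[t - 1] + bh)
        logits = Why @ hs[t] + by
        e = np.exp(logits - logits.max())
        ps[t] = e / e.sum()
    # Backward pass (BPTT) on log p(seq) = sum_t log p(seq_t | seq_<t).
    grads = {k: np.zeros_like(v) for k, v in
             dict(Wxh=Wxh, Whh=Whh, Why=Why, bh=bh, by=by).items()}
    dh_next = np.zeros(H)
    for t in reversed(range(T)):
        dy = ps[t].copy()
        dy[seq[t]] -= 1.0                        # d(-log p)/d logits
        grads['Why'] += np.outer(dy, hs[t])
        grads['by'] += dy
        dh = Why.T @ dy + dh_next
        draw = (1 - hs[t] ** 2) * dh             # through the tanh
        x = np.zeros(V)
        x[seq[t - 1]] = (t > 0)
        grads['Wxh'] += np.outer(draw, x)
        grads['Whh'] += np.outer(draw, hs[t - 1])
        grads['bh'] += draw
        dh_next = Whh.T @ draw
    # The concatenated, normalized gradient is the Fisher-Vector encoding.
    fv = np.concatenate([g.ravel() for g in grads.values()])
    return fv / (np.linalg.norm(fv) + 1e-12)

fv = fisher_vector([0, 2, 1, 3])
print(fv.shape)  # one fixed-size vector per sequence
```

The key point is that sequences of any length map to vectors of one fixed dimensionality (the total number of model parameters), which standard classifiers can then consume.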
Guy Lev is a research staff member in the Machine Learning Technologies group at IBM Haifa Research Lab, where he works on Machine Learning and Deep Learning approaches to NLP tasks. Prior to joining IBM, Guy completed his M.Sc. in computer science at Tel Aviv University under the supervision of Prof. Lior Wolf, where he explored methods for semantic representation of sequences, such as sentences or videos, and for connecting Computer Vision and NLP. Before his M.Sc. studies, Guy was a software and algorithm developer at Broadcom.