Yoav Goldberg (Bar-Ilan University)
Thursday, 6.4.2017, 11:30
The talk is about processing natural language using
machine learning techniques.
While deep learning methods in Natural Language Processing are
arguably overhyped, recurrent neural networks (RNNs), and in
particular LSTM networks, emerge as very capable learners for
sequential data. Thus, my group started using them everywhere. After
briefly explaining what they are and why they are cool, I will
describe some recent work in which we use LSTMs as a building block.
Depending on my mood (and considering audience requests via email
before the talk), I will discuss some of the following:
learning a shared representation in a multi-task setting; learning to
disambiguate English prepositions using multi-lingual data; learning
feature representations for syntactic parsing; representing trees as
vectors; learning to disambiguate coordinating conjunctions; learning
morphological inflections; and learning to detect hypernyms in a large
corpus. All of these achieve state-of-the-art results. Other potential topics
include work in which we try to shed some light on what's being captured by
LSTM-based sentence representations, as well as the ability of LSTMs to
learn hierarchical structures.
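For readers unfamiliar with the LSTM update equations mentioned above, the core recurrence can be sketched in a few lines of plain Python. This is a scalar toy version with made-up weights, purely for illustration; it is not code from the talk, and a real model would use vector states and learned parameters.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    """One LSTM step on scalar input/state (toy 1-dimensional version)."""
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])   # input gate
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])   # forget gate
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])   # output gate
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"]) # candidate cell
    c = f * c_prev + i * g          # new cell state: gated mix of old and new
    h = o * math.tanh(c)            # new hidden state (the "output")
    return h, c

# Arbitrary illustrative weights -- a trained model learns these.
W = dict(wi=0.5, ui=0.1, bi=0.0,
         wf=0.5, uf=0.1, bf=1.0,
         wo=0.5, uo=0.1, bo=0.0,
         wg=0.5, ug=0.1, bg=0.0)

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:       # a short input sequence
    h, c = lstm_step(x, h, c, W)

print(h)  # final hidden state summarizes the sequence
```

The final hidden state `h` is what makes LSTMs useful as a building block: it is a fixed-size summary of a variable-length sequence that downstream components (a classifier, a parser feature function, and so on) can consume.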