Edward Grefenstette
Fri 23 Oct 2015, 11:00 - 12:30
Informatics Forum (IF-4.31/4.33)

If you have a question about this talk, please contact: Diana Dalla Costa (ddallac)

Abstract:

Many problems in Natural Language Processing, from Machine Translation to Parsing, can be viewed as transduction tasks. Recently, sequence-to-sequence mapping approaches using recurrent networks and parallel corpora have shown themselves to be capable of learning fairly complex transductions without the need for heavy (or any) annotation or alignment data. Traditional linguistically-motivated features such as syntactic types and dependencies are entirely latent in such models, reducing the need for expert linguistic knowledge in designing new solutions in NLP. In this talk, I will discuss the strengths and weaknesses of such approaches, before presenting some ameliorations based on attention mechanisms and working memory enhancements to standard recurrent neural networks.
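
As a rough illustration of the kind of model the abstract refers to, the sketch below shows a GRU encoder-decoder with dot-product attention for sequence-to-sequence transduction, written in PyTorch. It is not code from the talk: the dimensions, vocabulary sizes, and the toy teacher-forcing step are illustrative assumptions.

```python
# Minimal sketch (not the speaker's implementation): a GRU encoder-decoder
# with dot-product attention for sequence-to-sequence transduction.
# All sizes and the toy task below are illustrative assumptions.
import torch
import torch.nn as nn

class Seq2SeqWithAttention(nn.Module):
    def __init__(self, src_vocab, tgt_vocab, emb_dim=64, hid_dim=128):
        super().__init__()
        self.src_emb = nn.Embedding(src_vocab, emb_dim)
        self.tgt_emb = nn.Embedding(tgt_vocab, emb_dim)
        self.encoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        self.decoder = nn.GRU(emb_dim, hid_dim, batch_first=True)
        # Combine each decoder state with its attention context before predicting.
        self.out = nn.Linear(hid_dim * 2, tgt_vocab)

    def forward(self, src, tgt_in):
        enc_out, h = self.encoder(self.src_emb(src))           # (B, S, H), (1, B, H)
        dec_out, _ = self.decoder(self.tgt_emb(tgt_in), h)     # (B, T, H)
        # Dot-product attention: every decoder state attends over encoder states.
        scores = torch.bmm(dec_out, enc_out.transpose(1, 2))   # (B, T, S)
        weights = torch.softmax(scores, dim=-1)
        context = torch.bmm(weights, enc_out)                  # (B, T, H)
        return self.out(torch.cat([dec_out, context], dim=-1)) # (B, T, V)

# Toy usage: teacher forcing on a trivial next-token transduction task.
model = Seq2SeqWithAttention(src_vocab=20, tgt_vocab=20)
optimiser = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
src = torch.randint(1, 20, (32, 10))        # batch of random token sequences
tgt_in, tgt_out = src[:, :-1], src[:, 1:]   # decoder input and shifted target
logits = model(src, tgt_in)
loss = loss_fn(logits.reshape(-1, 20), tgt_out.reshape(-1))
loss.backward()
optimiser.step()
```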

Bio:

Edward Grefenstette is a Franco-American computer scientist, currently working as a senior research scientist at Google DeepMind. Following an undergraduate degree in Physics and Philosophy, and a (misguided?) attempt at starting a research career in the philosophy of mathematics at St Andrews, he went to Oxford to try his hand at computer science. After trying to solve all of natural language processing with (very) pure mathematics better suited to modelling quantum information flow, he turned his attention to the more fashionable (and arguably more tractable) field of deep learning. His recent work, in collaboration with Karl Moritz Hermann and Phil Blunsom, inter alia, has focussed on using the most general neural models possible to attack end-to-end natural language understanding problems such as machine reading or semantically-motivated transduction.