Dr Elliot Crowley
Tue 13 Oct 2020, 13:00 - 14:00
Online Teams

If you have a question about this talk, please contact: Mehrdad Yaghoobi Vaighan (myvaigha)

Deep Doesn’t Have to be Complex

After a postdoc spent making deep neural networks (DNNs) more efficient, it would be easy to say “I am interested in making DNNs more efficient”. This is true to an extent, but I hope that in 10 years we are no longer working on compressing and accelerating neural networks: that problem should be solved, and perhaps we will have discovered something better than neural networks altogether!

So what am I interested in? I like finding simple solutions to machine learning problems. Just because the networks we use are complicated doesn’t mean the way we use them has to be. Apart from being deeply satisfying, simple solutions have the advantage of being easy to understand and implement, and they are therefore of far more use to practitioners than their esoteric counterparts. In this talk I will briefly introduce the problems of network compression, few-shot learning, and neural architecture search, before presenting the simple solutions developed by some permutation of Crowley et al. to solve them.