Abstract: Deep neural networks have recently shown impressive classification performance on a diverse set of visual benchmarks. When deployed in real-world environments, it is equally important that these classifiers satisfy robustness guarantees. In this talk, I will first highlight the vulnerability of state-of-the-art classifiers to simple perturbation regimes, such as adversarial and universal perturbations. Then, I will show the existence of fundamental connections between robustness and geometric properties of classifiers. In particular, I will show that the geometric analysis of the decision boundaries of deep neural networks provides key insights into their robustness. I will finally conclude with important open problems in this emerging field.

Bio: 

Alhussein is a research scientist at Google DeepMind. He obtained his PhD from the Signal Processing Laboratory at EPFL in 2016, and his M.Sc. in Electrical Engineering from EPFL in 2012. He was awarded IBM PhD Fellowships for the academic years 2013-2014 and 2015-2016.

Research interests: He is broadly interested in challenging problems in computer vision and machine learning. His recent research has focused on analyzing the robustness and invariance of classifiers to transformations, from both empirical and theoretical perspectives.
