Abstract: Deep neural networks have recently shown impressive classification performance on a diverse set of visual benchmarks. When deployed in real-world environments, it is equally important that these classifiers satisfy robustness guarantees. In this talk, I will first highlight the vulnerability of state-of-the-art classifiers to simple perturbation regimes, such as adversarial and universal perturbations. Then, I will show the existence of fundamental connections between robustness and geometric properties of classifiers. In particular, I will show that the geometric analysis of the decision boundaries of deep neural networks provides key insights into their robustness. I will finally conclude with important open problems in this emerging field.

Bio: 

Alhussein is a research scientist at Google DeepMind. He obtained his PhD from the Signal Processing Laboratory at EPFL in 2016, and his M.Sc. in Electrical Engineering from EPFL in 2012. He was awarded the IBM PhD Fellowship for the academic years 2013-2014 and 2015-2016.

Research interests: He is broadly interested in challenging problems in computer vision and machine learning. His recent research has focused on analyzing the robustness and invariance of classifiers to transformations, from both empirical and theoretical perspectives.
