Ryan Davies |
Tue 05 Feb 2019, 11:00 - 12:00 |
IF 4.31/4.33 |
If you have a question about this talk, please contact: Gareth Beedham (gbeedham)
Title: Explaining Deep Neural Networks with Deep Neural Networks
Abstract: Deep neural networks are black-box models, meaning it can be difficult to understand why the model makes the decisions it does. Recently, a number of methods have been proposed to generate explanations of the network's output in a way that can be understood by people. One popular method, LIME (Ribeiro et al. 2016), provides explanations in the form of a linear model. However, generating an explanation can be time-consuming, particularly when explanations are desired for a large number of points. I will talk about our current work, where we aim to use another deep neural network (a 'Metanetwork') to approximate a function that maps from instances in the dataset to explanations for the decisions made by the model being explained. This would allow explanations for a large number of points to be generated using only a forward pass of the 'Metanetwork'.
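The idea in the abstract can be sketched in code. The following is a minimal, hypothetical illustration (not the speaker's actual method): a LIME-style explainer fits a weighted linear model around each instance of a stand-in black-box function, and a 'metanetwork' — here reduced to a single linear layer fit by least squares, where the talk proposes a deep network — is trained to map instances directly to explanation vectors, so that new explanations cost only one forward pass. All function names, the toy black-box model, and the hyperparameters are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: a fixed nonlinear function standing in
# for a trained deep network (assumption for illustration only).
def black_box(X):
    return np.tanh(X @ np.array([2.0, -1.0])) + 0.1 * X[:, 0] * X[:, 1]

def lime_explain(x, n_samples=500, sigma=0.5):
    """LIME-style local explanation: perturb around x, weight samples by
    proximity to x, and fit a weighted linear model to the outputs."""
    Z = x + sigma * rng.standard_normal((n_samples, x.size))
    y = black_box(Z)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    A = np.hstack([Z, np.ones((n_samples, 1))])  # design matrix + intercept
    sw = np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
    return coef[:-1]  # per-feature explanation weights

# Build (instance, explanation) training pairs — the slow step that
# the metanetwork is meant to amortise.
X = rng.standard_normal((200, 2))
E = np.array([lime_explain(x) for x in X])

# 'Metanetwork' sketch: a single linear map fit by least squares;
# the talk proposes a deep network in its place.
M, *_ = np.linalg.lstsq(np.hstack([X, np.ones((len(X), 1))]), E, rcond=None)

def metanetwork(x):
    # One forward pass: instance -> approximate explanation.
    return np.append(x, 1.0) @ M
```

Once fitted, `metanetwork(x)` returns an approximate explanation for any new instance without re-running the perturb-and-fit loop.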