Explaining the Decisions of Deep Neural Networks and Beyond — Grégoire Montavon, Research Associate, TU Berlin, Germany
Watch the video here: https://vimeo.com/447129701
Abstract: Deep neural networks have proven capable of converting large amounts of data into highly predictive nonlinear models. These models, however, are often perceived as black boxes. In recent years, significant efforts have been made to provide human-interpretable explanations of their predictions, in particular by determining which input features these models use to support their decisions. In this talk, we present recent work on extracting explanations from complex nonlinear models such as deep neural networks used for image classification. Several recent directions are then presented to extend the approach beyond single predictions, toward explaining whole datasets or unsupervised learning models.
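To make the idea of input-feature attribution concrete, the sketch below shows one common baseline, gradient × input saliency, applied to an image classifier. This is not necessarily the method presented in the talk (Montavon's work centers on techniques such as layer-wise relevance propagation); the model choice, input shape, and tensor names here are illustrative assumptions only.

```python
import torch
import torchvision.models as models

# Hypothetical setup: a pretrained image classifier and a preprocessed
# input tensor of shape (1, 3, 224, 224). A real use case would load an
# actual image and apply the model's normalization.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

image = torch.randn(1, 3, 224, 224, requires_grad=True)  # stand-in for a real image

# Forward pass: take the score (logit) of the predicted class.
logits = model(image)
target_class = logits.argmax(dim=1).item()
score = logits[0, target_class]

# Backward pass: gradient of the class score w.r.t. the input pixels.
score.backward()

# Gradient x input yields a simple attribution heatmap: large magnitudes
# mark pixels whose variation most affects the class score.
attribution = (image.grad * image.detach()).sum(dim=1)  # sum over color channels
print(attribution.shape)  # torch.Size([1, 224, 224])
```

The heatmap can then be overlaid on the input image to visualize which regions the model relies on, which is the kind of "explanation of a single prediction" the abstract refers to.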
Grégoire Montavon received a Master's degree in Communication Systems from École Polytechnique Fédérale de Lausanne in 2009 and a Ph.D. degree in Machine Learning from Technische Universität Berlin in 2013. He is currently a Research Associate in the Machine Learning Group at TU Berlin. His research interests include interpretable machine learning and deep neural networks.