Get the Insights of your Neural Network

Interpretability methods for analyzing the behavior and individual predictions of modern neural networks. Implemented methods are: 'Connection Weights' described by Olden et al. (2004), Layer-wise Relevance Propagation ('LRP') described by Bach et al. (2015), Deep Learning Important Features ('DeepLIFT') described by Shrikumar et al. (2017), and gradient-based methods such as 'SmoothGrad' described by Smilkov et al. (2017), 'Gradient x Input' described by Baehrens et al. (2009), and 'Vanilla Gradient'.
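As a conceptual illustration of one of the listed methods (this is not the package's R interface): 'SmoothGrad' averages the gradient of the model output over several Gaussian-perturbed copies of the input, which smooths out noisy saliency maps. A minimal Python sketch, assuming a hypothetical toy model f(x) = sum(x^2) whose gradient 2*x is known analytically:

```python
import numpy as np

def grad_f(x):
    # Gradient of the toy model f(x) = sum(x**2): df/dx = 2*x.
    # In practice this would be the network's input gradient.
    return 2.0 * x

def smoothgrad(x, grad_fn, n_samples=500, noise_level=0.1, seed=0):
    # SmoothGrad (Smilkov et al., 2017): average the gradient over
    # n_samples Gaussian-perturbed copies of the input x.
    rng = np.random.default_rng(seed)
    grads = [grad_fn(x + rng.normal(0.0, noise_level, size=x.shape))
             for _ in range(n_samples)]
    return np.mean(grads, axis=0)

x = np.array([1.0, -2.0, 0.5])
sg = smoothgrad(x, grad_f)
# Because the toy gradient is linear in x, the smoothed gradient
# converges to 2*x as n_samples grows.
print(sg)
```

'Gradient x Input' would then simply be `sg * x`, and 'Vanilla Gradient' is the unsmoothed `grad_fn(x)`.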




0.1.0 by Niklas Koenen, 13 days ago

Authors: Niklas Koenen [aut, cre], Raphael Baudeu [ctb]

Documentation: PDF Manual

License: MIT + file LICENSE

Imports checkmate, ggplot2, R6, torch

Suggests keras, knitr, neuralnet, plotly, rmarkdown, tensorflow, testthat
