Get the Insights of Your Neural Network

Interpretability methods to analyze the behavior and individual predictions of modern neural networks. Implemented methods are: 'Connection Weights' described by Olden et al. (2004), Layer-wise Relevance Propagation ('LRP') described by Bach et al. (2015), Deep Learning Important Features ('DeepLIFT') described by Shrikumar et al. (2017), and gradient-based methods like 'SmoothGrad' described by Smilkov et al. (2017), 'Gradient x Input' described by Baehrens et al. (2009), or 'Vanilla Gradient'.



install.packages("innsight")

innsight 0.1.0 by Niklas Koenen


https://bips-hb.github.io/innsight/, https://github.com/bips-hb/innsight/


Report a bug at https://github.com/bips-hb/innsight/issues/


Browse source code at https://github.com/cran/innsight


Authors: Niklas Koenen [aut, cre], Raphael Baudeu [ctb]


Documentation: PDF Manual


License: MIT + file LICENSE


Imports checkmate, ggplot2, R6, torch

Suggests keras, knitr, neuralnet, plotly, rmarkdown, tensorflow, testthat
