Sensitivity Analysis of Neural Networks

Analysis functions to quantify the importance of inputs in neural network models. Functions are available for calculating and plotting input importance, and for obtaining the activation function of each neuron layer and its derivatives. The importance of a given input is defined as the distribution of the partial derivatives of the output with respect to that input, evaluated at each training data point.
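The idea above can be illustrated outside the package. The following is a minimal Python sketch (not the package's own R API) of the sensitivity measure described: for a small, hypothetical trained MLP, compute the partial derivative of the output with respect to each input at every training point via the chain rule, then summarize the resulting distribution per input.

```python
# Conceptual sketch, NOT the package's API: sensitivity of a one-hidden-layer
# MLP output to each input, measured as the distribution of the partial
# derivatives d(output)/d(input) over the training points.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "trained" weights for a 3-input, 4-hidden-unit, 1-output MLP
W1 = rng.normal(size=(3, 4)); b1 = rng.normal(size=4)
W2 = rng.normal(size=(4, 1)); b2 = rng.normal(size=1)

def mlp(x):
    """Forward pass: tanh hidden layer, linear output."""
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

def jacobian(x):
    """Analytic d(output)/d(input) at one point x, by the chain rule."""
    dh = 1.0 - np.tanh(x @ W1 + b1) ** 2   # tanh'(pre-activation), shape (4,)
    return (W1 * dh) @ W2                   # shape (3, 1): one derivative per input

X = rng.normal(size=(200, 3))               # stand-in "training" inputs
derivs = np.array([jacobian(x).ravel() for x in X])  # shape (200, 3)

# Summarize the per-input derivative distribution, as the text describes
print("mean:", derivs.mean(axis=0))
print("std: ", derivs.std(axis=0))
```

A large mean absolute derivative marks an input the output is sensitive to, while the spread indicates how much that sensitivity varies across the input space.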




Version 0.2.3 by Jaime Pizarroso Gonzalo


Authors: José Portela González [aut], Antonio Muñoz San Roque [aut], Jaime Pizarroso Gonzalo [aut, ctb, cre]

Documentation: PDF Manual

GPL (>= 2) license

Imports ggplot2, gridExtra, NeuralNetTools, reshape2, caret, fastDummies, stringr, Hmisc, ggforce, scales, ggnewscale, magrittr

Suggests h2o, neural, RSNNS, nnet, neuralnet, plotly, e1071
