Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance as described by Fisher et al. (2018), partial dependence plots as described by Friedman (2001), individual conditional expectation ('ice') plots as described by Goldstein et al. (2013), local models (a variant of 'lime') as described by Ribeiro et al. (2016), the Shapley value as described by Strumbelj et al. (2014), and tree surrogate models.
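The sketch below shows how these methods can be applied to a fitted black-box model. It is a minimal example, assuming the Predictor-based interface used by later iml releases (the exact calls in 0.2.1 may differ), and it relies on the randomForest and MASS packages from the Suggests list.

```r
# Minimal sketch; assumes the Predictor-based iml interface
# (later releases -- the 0.2.x function names may differ).
library("iml")
library("randomForest")

data("Boston", package = "MASS")
X <- Boston[, setdiff(names(Boston), "medv")]
mod <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model and data in a Predictor object
predictor <- Predictor$new(mod, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018)
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Partial dependence plus ICE curves for a single feature
eff <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp+ice")
plot(eff)

# Local surrogate model (variant of 'lime') for one observation
loc <- LocalModel$new(predictor, x.interest = X[1, ])
loc$results

# Shapley value decomposition of one prediction
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)

# Global tree surrogate of the black-box model
tree <- TreeSurrogate$new(predictor, maxdepth = 2)
plot(tree)
```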


0.2.1 by Christoph Molnar, 6 days ago

Browse source code at https://github.com/christophM/iml

Authors: Christoph Molnar [aut, cre]

Documentation: PDF Manual

MIT + file LICENSE license

Imports R6, checkmate, dplyr, tidyr, ggplot2, partykit, glmnet, Metrics, data.table

Suggests randomForest, gower, testthat, rpart, MASS, caret, e1071, lime, mlr

See at CRAN: https://cran.r-project.org/package=iml