Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), partial dependence plots described by Friedman (2001) <http://www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley value described by Strumbelj et al. (2014), feature interactions described by Friedman et al., and tree surrogate models.
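Below is a minimal usage sketch showing how these methods fit together. It follows the R6-style interface documented in the package's GitHub repository (Predictor$new, FeatureImp$new, FeatureEffect$new, Shapley$new); treat the exact constructor names as an assumption, since they may differ in version 0.5.1. The example uses the randomForest and MASS packages, both listed under Suggests.

# Fit any supervised model -- here a random forest on the Boston housing data.
library("iml")
library("randomForest")
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model and data in a Predictor object, the common interface
# that the interpretability methods in iml operate on.
X <- Boston[, which(names(Boston) != "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018).
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Partial dependence and individual conditional expectation curves for one
# feature (assumed FeatureEffect constructor).
eff <- FeatureEffect$new(predictor, feature = "lstat", method = "pdp+ice")
plot(eff)

# Shapley values explaining a single prediction (Strumbelj et al. 2014).
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)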


News

Reference manual


install.packages("iml")

0.5.1 by Christoph Molnar, a month ago


https://github.com/christophM/iml


Report a bug at https://github.com/christophM/iml/issues


Browse source code at https://github.com/cran/iml


Authors: Christoph Molnar [aut, cre]


Documentation: PDF Manual


MIT + file LICENSE license


Imports R6, checkmate, ggplot2, partykit, glmnet, Metrics, data.table

Suggests randomForest, gower, testthat, rpart, MASS, caret, e1071, lime, mlr, covr, knitr, rmarkdown


See at CRAN