Interpretable Machine Learning

Interpretability methods to analyze the behavior and predictions of any machine learning model. Implemented methods are: feature importance described by Fisher et al. (2018), accumulated local effects (ALE) plots described by Apley (2018), partial dependence plots described by Friedman (2001) <http://www.jstor.org/stable/2699986>, individual conditional expectation ('ice') plots described by Goldstein et al. (2013), local models (a variant of 'lime') described by Ribeiro et al. (2016), the Shapley Value described by Strumbelj et al. (2014), feature interactions described by Friedman et al., and tree surrogate models.
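For orientation, the snippet below sketches the typical workflow: wrap a fitted model in a Predictor object, then apply the method classes to it. This is a minimal sketch assuming the package's R6-based API (Predictor, FeatureImp, Shapley); argument names may differ slightly between versions, so consult the reference manual.

library(iml)
library(randomForest)  # any fitted model works; iml is model-agnostic

# Fit an arbitrary model on the Boston housing data (from MASS, listed in Suggests).
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston, ntree = 50)

# Wrap the model and its feature data in a Predictor object.
X <- Boston[, setdiff(names(Boston), "medv")]
predictor <- Predictor$new(rf, data = X, y = Boston$medv)

# Permutation feature importance (Fisher et al. 2018).
imp <- FeatureImp$new(predictor, loss = "mae")
plot(imp)

# Shapley values explaining a single prediction (Strumbelj et al. 2014).
shap <- Shapley$new(predictor, x.interest = X[1, ])
plot(shap)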



install.packages("iml")

Version 0.7.0 by Christoph Molnar


Homepage: https://github.com/christophM/iml

Report a bug at https://github.com/christophM/iml/issues

Browse source code at https://github.com/cran/iml


Authors: Christoph Molnar [aut, cre]


Documentation: PDF Manual


License: MIT + file LICENSE


Imports: R6, checkmate, ggplot2, partykit, glmnet, Metrics, data.table, foreach, yaImpute

Suggests: randomForest, gower, testthat, rpart, MASS, caret, e1071, knitr, mlr, covr, rmarkdown, devtools, doParallel, ALEPlot, ranger


See the package at CRAN: https://cran.r-project.org/package=iml