Wrapper of Python Library 'shap'

Provides SHAP explanations of machine learning models. In applied machine learning there is a strong belief that one must strike a balance between interpretability and accuracy. However, in the field of interpretable machine learning there are more and more new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP), introduced by Lundberg, S., et al. (2016). The SHAP method calculates the influence of each variable on a particular prediction. It is based on Shapley values, a technique from cooperative game theory. The R package 'shapper' is a port of the Python library 'shap'.
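To make the game-theory connection concrete, the following is a minimal, self-contained sketch of exact Shapley values computed by enumerating coalitions. This is an illustration of the underlying idea only, not the 'shap' or 'shapper' API; the toy value function `v` and the feature names are invented for the example, and real SHAP implementations use approximations rather than full enumeration.

```python
from itertools import combinations
from math import factorial

def shapley_values(players, value):
    """Exact Shapley values for a cooperative game.

    `players` is a list of player names; `value` maps a frozenset of
    players (a coalition) to that coalition's payoff. In the SHAP
    analogy, players are features and the payoff is the model output
    when only those features are "known".
    """
    n = len(players)
    result = {}
    for p in players:
        others = [q for q in players if q != p]
        phi = 0.0
        # Average p's marginal contribution over all coalitions of the others.
        for k in range(n):
            for subset in combinations(others, k):
                s = frozenset(subset)
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(s | {p}) - value(s))
        result[p] = phi
    return result

# Hypothetical toy "model": the prediction is 10 when feature A is known,
# plus an extra 5 when both A and B are known together.
def v(coalition):
    total = 0.0
    if "A" in coalition:
        total += 10
        if "B" in coalition:
            total += 5
    return total

phi = shapley_values(["A", "B"], v)
# Shapley values split the interaction credit: phi["A"] = 12.5, phi["B"] = 2.5,
# and they sum to the full-coalition payoff v({A, B}) = 15 (efficiency property).
```

The efficiency property demonstrated at the end is what makes Shapley values "additive" in SHAP: the per-variable attributions sum to the difference between the prediction and the baseline.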




Version 0.1.0 by Alicja Gosiewska


Report a bug at https://github.com/ModelOriented/shapper/issues

Browse source code at https://github.com/cran/shapper

Authors: Alicja Gosiewska [aut, cre] , Przemyslaw Biecek [aut] , Michal Burdukiewicz [ctb]

Documentation: PDF Manual

GPL license

Imports reticulate, ggplot2

Suggests covr, DALEX, knitr, randomForest, rpart, testthat, titanic
