Wrapper of Python Library 'shap'

Provides SHAP explanations of machine learning models. In applied machine learning there is a widespread belief that a trade-off must be struck between interpretability and accuracy. However, in the field of Interpretable Machine Learning there is a growing number of new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP), introduced by Lundberg, S., et al. (2016). The SHAP method calculates the influence of each variable on a particular prediction. It is based on Shapley values, a concept from cooperative game theory. The R package 'shapper' is a port of the Python library 'shap'.
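A minimal sketch of how such a local explanation might be computed with shapper, assuming the Python 'shap' library is available via reticulate; the model, data, and predict function shown here are illustrative choices, not the only supported ones:

```r
# Sketch: SHAP values for a single observation with shapper.
# Assumes Python's 'shap' is installed (it can be installed from R
# once with install_shap()).
library(shapper)       # R wrapper around Python's shap
library(DALEX)         # provides the example 'apartments' data
library(randomForest)

model <- randomForest(m2.price ~ ., data = apartments)

# Contribution of each variable to one particular prediction.
ive <- individual_variable_effect(
  model,
  data = apartments[, -1],
  predict_function = function(m, d) predict(m, d),
  new_observation = apartments[1, -1]
)

plot(ive)  # visualize the per-variable SHAP contributions
```

The resulting object holds the Shapley-value attributions for the chosen observation, which `plot()` renders as a contribution breakdown.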




Version 0.1.3 by Szymon Maksymiuk


Report a bug at https://github.com/ModelOriented/shapper/issues

Browse source code at https://github.com/cran/shapper

Authors: Szymon Maksymiuk [aut, cre], Alicja Gosiewska [aut], Przemyslaw Biecek [aut], Mateusz Staniak [ctb], Michal Burdukiewicz [ctb]

Documentation: PDF Manual

GPL license

Imports reticulate, DALEX, ggplot2

Suggests covr, knitr, randomForest, rpart, testthat, markdown, qpdf
