Wrapper of Python Library 'shap'

Provides SHAP explanations of machine learning models. In applied machine learning, there is a strong belief that we need to strike a balance between interpretability and accuracy. However, in the field of Interpretable Machine Learning, there are more and more new ideas for explaining black-box models. One of the best-known methods for local explanations is SHapley Additive exPlanations (SHAP), introduced by Lundberg, S., et al. (2016). The SHAP method is used to calculate the influence of variables on a particular observation. It is based on Shapley values, a technique borrowed from game theory. The R package 'shapper' is a port of the Python library 'shap'.
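A rough sketch of typical usage, following the package's randomForest example with the HR data from DALEX; exact argument names may differ between versions, so treat this as an illustration rather than the definitive API:

```r
library(shapper)
library(DALEX)         # provides the HR demo dataset
library(randomForest)

# Train any black-box model; here a random forest predicting employee status.
model_rf <- randomForest(x = HR[, -6], y = HR$status, ntree = 50)

# Prediction function returning class probabilities for the model above.
p_fun <- function(model, data) predict(model, newdata = data, type = "prob")

# SHAP attributions (Shapley-value based) for a single observation.
ive <- individual_variable_effect(
  model_rf,
  data             = HR[, -6],
  predict_function = p_fun,
  new_observation  = HR[1, -6],
  nsamples         = 50
)
plot(ive)
```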


News

Reference manual


install.packages("shapper")

0.1.2 by Szymon Maksymiuk, 4 months ago


https://github.com/ModelOriented/shapper


Report a bug at https://github.com/ModelOriented/shapper/issues


Browse source code at https://github.com/cran/shapper


Authors: Szymon Maksymiuk [aut, cre], Alicja Gosiewska [aut], Przemyslaw Biecek [aut], Mateusz Staniak [ctb], Michal Burdukiewicz [ctb]


Documentation: PDF Manual


GPL license


Imports reticulate, ggplot2

Suggests covr, DALEX, knitr, randomForest, rpart, testthat, qpdf


See at CRAN