localModel: LIME-Based Explanations with Interpretable Inputs Based on Ceteris Paribus Profiles

Local explanations of machine learning models describe how individual features contributed to a single prediction. This package implements an explanation method based on LIME (Local Interpretable Model-agnostic Explanations; see Ribeiro, Singh, and Guestrin, 2016), in which interpretable inputs are created based on the local, rather than global, behaviour of each original feature.
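
As an illustration, here is a minimal usage sketch. It assumes the apartments dataset shipped with DALEX, a randomForest model, and the package's individual_surrogate_model() function; the size and seed values are illustrative, not recommendations.

library(DALEX)          # provides explain() and the apartments dataset
library(localModel)
library(randomForest)

# Fit any predictive model; here, a random forest regressing apartment prices
rf <- randomForest(m2.price ~ ., data = apartments)

# Wrap the model in a DALEX explainer (features in `data`, target in `y`)
explainer <- DALEX::explain(rf,
                            data = apartments[, -1],
                            y = apartments$m2.price)

# Local surrogate explanation of the prediction for a single observation;
# `size` (number of sampled interpretable inputs) and `seed` are assumed argument names
local_explanation <- individual_surrogate_model(explainer,
                                                apartments[5, -1],
                                                size = 500,
                                                seed = 17)
plot(local_explanation)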



Install the released version from CRAN:

install.packages("localModel")
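
The development version can presumably be installed from the GitHub repository listed below; a minimal sketch, assuming the remotes package is installed:

remotes::install_github("ModelOriented/localModel")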

Version: 0.3.11, by Mateusz Staniak
URL: https://github.com/ModelOriented/localModel
Bug reports: https://github.com/ModelOriented/localModel/issues
Source code: https://github.com/cran/localModel
Authors: Mateusz Staniak [aut, cre], Przemyslaw Biecek [aut], Krystian Igras [ctb], Alicja Gosiewska [ctb]
Documentation: PDF reference manual (on CRAN)
License: GPL
Imports: glmnet, ggplot2, partykit, ingredients
Suggests: covr, knitr, rmarkdown, randomForest, DALEX, testthat

