Fit Interpretable Models and Explain Blackbox Machine Learning

Machine learning package for training interpretable models and explaining blackbox systems. Historically, the most intelligible models were not very accurate, and the most accurate models were not intelligible. Microsoft Research has developed an algorithm called the Explainable Boosting Machine (EBM) that is both accurate and intelligible. EBM uses machine learning techniques such as bagging and boosting to breathe new life into traditional GAMs (Generalized Additive Models): because a GAM's prediction is an additive sum of per-feature functions, g(E[y]) = f1(x1) + f2(x2) + ... + fk(xk), each feature's learned contribution can be plotted, inspected, and edited directly. This makes EBMs as accurate as random forests and gradient boosted trees while remaining intelligible and editable. Details on the EBM algorithm can be found in the paper by Rich Caruana, Yin Lou, Johannes Gehrke, Paul Koch, Marc Sturm, and Noemie Elhadad, "Intelligible Models for HealthCare: Predicting Pneumonia Risk and Hospital 30-day Readmission" (KDD 2015, doi:10.1145/2783258.2788613).


install.packages("interpret")

Version 0.1.22 by Rich Caruana


https://github.com/microsoft/interpret


Report a bug at https://github.com/microsoft/interpret/issues


Browse source code at https://github.com/cran/interpret


Authors: Samuel Jenkins [aut], Harsha Nori [aut], Paul Koch [aut], Rich Caruana [aut, cre], Microsoft Corporation [cph]


Documentation: PDF Manual


License: MIT + file LICENSE


System requirements: C++11


CRAN page: https://cran.r-project.org/package=interpret