- Interpretability methods to analyze the behavior and predictions of any machine learning model.
- Implemented methods are:
  - Feature importance described by Fisher et al. (2018)
- The `Partial` class is deprecated and will be removed in future versions. You should use `FeatureEffect` now. Its usage is similar to `Partial`; the former aggregation and `ice` arguments are now combined in the new `method` argument, where you can choose between 'ale', 'pdp', 'ice' and 'pdp+ice'.
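The migration to `FeatureEffect` can be sketched as follows (the random forest model and the Boston housing data are placeholders for illustration, not part of this changelog):

```r
library("iml")
library("randomForest")

# Any fitted model works; a random forest is used here only as an example.
data("Boston", package = "MASS")
rf <- randomForest(medv ~ ., data = Boston)

# Wrap the model in a Predictor object.
pred <- Predictor$new(rf, data = Boston[which(names(Boston) != "medv")],
                      y = Boston$medv)

# method can be 'ale', 'pdp', 'ice' or 'pdp+ice'.
eff <- FeatureEffect$new(pred, feature = "lstat", method = "pdp+ice")
plot(eff)
```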
- Accumulated local effect (ALE) plots are implemented (`method = 'ale'`). They are now the default instead of PDPs, because they are faster and unbiased.
- Results for `Partial` are now computed batch-wise in the background. This prevents these methods from overloading the memory. For that, `Predictor` has a new init argument `batch.size`, which limits the number of rows sent to the model for prediction.
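A minimal sketch of the new `batch.size` argument (the linear model on `mtcars` is only a stand-in for any fitted model):

```r
library("iml")

# A linear model stands in for an arbitrary fitted model.
mod <- lm(mpg ~ ., data = mtcars)

# batch.size caps how many rows are sent to the model per
# prediction call, keeping memory use bounded.
pred <- Predictor$new(mod, data = mtcars[, -1], y = mtcars$mpg,
                      batch.size = 100)
```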
- `FeatureImp` additionally allows parallel computation on multiple cores. See `vignette("parallel", package = "iml")` for how to use it.
- `Predictor` can be initialized with a `type` argument (e.g. `type = "prob"`), which is more convenient than writing a custom `predict.fun`. For caret classification models, the default is now to return the response, so make sure to initialize the `Predictor` with `type = "prob"` for fine-grained results.
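For caret classification models, this looks roughly like the following (the rpart model on `iris` is purely illustrative):

```r
library("iml")
library("caret")

# A caret classification model, used only for illustration.
mod <- train(Species ~ ., data = iris, method = "rpart")

# type = "prob" returns class probabilities instead of the
# predicted class labels (the new default for caret classifiers).
pred <- Predictor$new(mod, data = iris[, -5], y = iris$Species,
                      type = "prob")
```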
- `FeatureImp` has a new `n.repetitions` parameter, which controls the number of repetitions of the feature shuffling.
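A short sketch of the `n.repetitions` parameter (model and data are placeholders):

```r
library("iml")

mod <- lm(mpg ~ ., data = mtcars)
pred <- Predictor$new(mod, data = mtcars[, -1], y = mtcars$mpg)

# Shuffle each feature 10 times; more repetitions give more
# stable importance estimates at higher computational cost.
imp <- FeatureImp$new(pred, loss = "mae", n.repetitions = 10)
plot(imp)
```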
- The `.class.name` column in the results has been renamed.
- `object$run()` does not return `self` any longer. This means that calling `object$set.feature()`, for example, does not automatically print the object summary any more.
- The kernel width used by `LocalModel` can be set via its `kernel.width` argument.
- `Lime` has been renamed to `LocalModel`.