Machine Learning (ML) models are widely used and have various applications in classification and regression. Models created with boosting, bagging, stacking or similar techniques are often used due to their high performance, but such black-box models usually lack interpretability. The DALEX package contains various explainers that help to understand the link between input variables and model output. The `single_variable()` explainer extracts the conditional response of a model as a function of a single selected variable. It is a wrapper over packages such as 'pdp' (Greenwell 2017).
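As a rough illustration (a sketch, not part of the original text), the explainer can be used as follows; the `lm()` model is a placeholder and the `apartments` dataset mentioned later in this changelog is used for convenience:

```r
# Sketch only: the single_variable() explainer on a toy model.
# The apartments dataset ships with DALEX; lm() is a placeholder model.
library(DALEX)

model <- lm(m2.price ~ construction.year + surface + district,
            data = apartments)

explainer <- explain(model,
                     data = apartments,
                     y = apartments$m2.price,
                     label = "lm")

# Conditional response of the model along one selected variable;
# type = "pdp" delegates to the 'pdp' package.
sv <- single_variable(explainer, variable = "construction.year", type = "pdp")
plot(sv)
```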
* The defaults of `single_prediction()` are now consistent with `breakDown::broken()`: `baseline` is `0` by default instead of `"Intercept"`. The user can also specify the `baseline` and other arguments by passing them to `single_prediction()` (@kevinykuo, #39). WARNING: Change in the default value of `baseline`.
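A hedged sketch of the behavior change; `explainer` is assumed to be an object created with `explain()`, and a row of the `apartments` data stands in for a new observation:

```r
# Sketch: single_prediction() now uses baseline = 0 by default.
# Passing baseline = "Intercept" restores the previous behavior.
# 'explainer' is assumed to be created with explain().
sp_new <- single_prediction(explainer, observation = apartments[1, ])
sp_old <- single_prediction(explainer,
                            observation = apartments[1, ],
                            baseline = "Intercept")
```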
* New `yhat.*` functions help to handle additional parameters passed to different `predict()` functions.
* New dataset `HRTest`. The target variable is a factor with three levels. It is used in examples for classification.
* New `show_outliers` parameter. Set it to any value greater than 0 and the observations with the largest residuals will be presented in the plot. (#34)
* Fixes in `variable_response()` to better support `gbm` models.
* The `single_variable()` / `variable_response()` function uses a `data.frame` when specified as …
* New `explain.default()` should help when `dplyr` is loaded after `DALEX`. (#16)
* New names for some functions, including `prediction_breakdown()`. Old names are now deprecated but still working. (#12)
* New dataset `apartments` - will be used in examples.
* `variable_importance()` allows work on the full dataset if `n_sample` is negative.
* `plot_model_performance()` uses ecdf or boxplots (depending on the `geom` parameter).
* `single_variable()` supports factor variables as well (with the use of the `factorMerger` package). Remember to use `type = 'factor'` when playing with factors. (#10)
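A minimal sketch of the factor case; `explainer` is assumed to be an object created with `explain()`, and `district` is a factor column in the `apartments` dataset, used here only for illustration:

```r
# Sketch: factor variables require type = "factor" (uses factorMerger).
# 'explainer' is assumed to be created with explain() on the apartments data.
sv_factor <- single_variable(explainer, variable = "district", type = "factor")
plot(sv_factor)
```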
* Change in the function `explain()`. The old version has an argument `predict.function`; now it's `predict_function`. The new name is more consistent with other arguments. (#7)
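A hedged sketch of the renamed argument; the `lm()` model and the wrapper function are placeholders, and the `apartments` dataset is used only for illustration:

```r
# Sketch: the argument predict.function was renamed to predict_function.
library(DALEX)

model <- lm(m2.price ~ ., data = apartments)

# Old (deprecated): explain(model, ..., predict.function = f)
# New:
explainer <- explain(model,
                     data = apartments,
                     y = apartments$m2.price,
                     predict_function = function(m, x) predict(m, x))
```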