Cross-Validation for Model Selection

Cross-validate one or more regression and classification models and get relevant evaluation metrics in a tidy format. Validate the best model on a test set and compare it to a baseline evaluation. Alternatively, evaluate predictions from an external model. Currently supports regression and classification (binary and multiclass). Described in Chapter 5 of Jeyaraman, B. P., Olsen, L. R., & Wambugu, M. (2019, ISBN: 9781838550134).
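A minimal sketch of the typical workflow, based on the package's documented `cross_validate()` function and the `participant.scores` example dataset it ships with; the fold settings (`k = 4`, the grouping columns) are illustrative choices, not requirements:

```r
library(cvms)
library(groupdata2)  # suggested dependency; supplies fold() for creating fold columns

# Create a .folds column with 4 folds, balancing the outcome
# category across folds and keeping each participant in one fold
data <- fold(
  participant.scores,
  k = 4,
  cat_col = "diagnosis",
  id_col = "participant"
)

# Cross-validate a linear model; results come back as one tidy row
# per formula, with metrics such as RMSE and MAE as columns
cv <- cross_validate(
  data,
  formulas = "score ~ diagnosis",
  family = "gaussian"
)

cv$RMSE
```

Multiple formulas can be passed as a character vector to compare candidate models in a single tidy output, one row per model.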




Version 1.3.3, by Ludvig Renbo Olsen

Authors: Ludvig Renbo Olsen [aut, cre], Hugh Benjamin Zachariae [aut], Indrajeet Patil [ctb] (@patilindrajeets)

Documentation: PDF manual

License: MIT + file LICENSE

Imports: checkmate, data.table, dplyr, ggplot2, lifecycle, lme4, MuMIn, parameters, plyr, pROC, purrr, rearrr, recipes, rlang, stats, stringr, tibble, tidyr, utils

Suggests: AUC, covr, e1071, furrr, ggimage, ggnewscale, groupdata2, knitr, nnet, randomForest, rmarkdown, rsvg, testthat, xpectr
