Cross-Validation for Model Selection

Cross-validate one or more regression models and get relevant evaluation metrics in a tidy format. Validate the best model on a test set and compare it to a baseline evaluation. Alternatively, evaluate predictions from an external model. Currently supports linear regression, logistic regression, and (in some functions) multiclass classification. Described in Chapter 5 of Jeyaraman, B. P., Olsen, L. R., & Wambugu, M. (2019, ISBN: 9781838550134).
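A minimal sketch of the workflow described above, assuming the `cross_validate()` interface as of this release (where model formulas are passed via a `models` argument) and the `fold()` helper from the suggested groupdata2 package; the `participant.scores` example data and the column names (`diagnosis`, `score`, `.folds`) are illustrative:

```r
library(cvms)
library(groupdata2)  # suggested package; supplies fold() for creating fold columns

# Create balanced folds, keeping each participant's rows in the same fold
data <- fold(
  participant.scores,
  k = 4,
  cat_col = "diagnosis",
  id_col = "participant"
)

# Cross-validate a logistic regression model; returns metrics in a tidy tibble
cv_results <- cross_validate(
  data,
  models = "diagnosis ~ score",
  fold_cols = ".folds",
  family = "binomial"
)

cv_results
```

Multiple formulas can be passed as a character vector to compare candidate models in one call; the tidy output makes it straightforward to sort models by a metric of interest before validating the best one on a held-out test set.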


Reference manual



0.2.0 by Ludvig Renbo Olsen, 11 days ago


Authors: Ludvig Renbo Olsen [aut, cre], Benjamin Hugh Zachariae [aut]

Documentation: PDF Manual

License: MIT + file LICENSE

Imports data.table, dplyr, plyr, tidyr, ggplot2, purrr, tibble, caret, pROC, stats, lme4, MuMIn, AICcmodavg, broom, stringr, mltools, rlang, utils

Suggests knitr, groupdata2, e1071, rmarkdown, testthat, AUC, furrr, ModelMetrics, covr, nnet
