Evaluation Metrics for Implicit-Feedback Recommender Systems

Calculates evaluation metrics for implicit-feedback recommender systems based on low-rank matrix factorization models, given the fitted model matrices and data, thus allowing comparison of models from a variety of libraries. Metrics include P@K (precision at K, for top-K recommendations), R@K (recall at K), AP@K (average precision at K), NDCG@K (normalized discounted cumulative gain at K), Hit@K (from which the 'Hit Rate' is calculated), RR@K (reciprocal rank at K, from which the 'MRR' or 'mean reciprocal rank' is calculated), ROC-AUC (area under the receiver operating characteristic curve), and PR-AUC (area under the precision-recall curve). These are calculated on a per-user basis according to the ranking of items induced by the model, using efficient multi-threaded routines. Also provides functions for creating train-test splits for model fitting and evaluation.
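As a sketch of how these per-user ranking metrics are defined (illustrative Python, not the package's R API; the AP@K normalization by min(K, number of test items) is one common convention and may differ from the package's exact choice):

```python
import math

def precision_at_k(ranked, relevant, k):
    """P@K: fraction of the top-K recommended items found in the user's test set."""
    return sum(1 for item in ranked[:k] if item in relevant) / k

def average_precision_at_k(ranked, relevant, k):
    """AP@K: running precision accumulated at each position holding a relevant
    item, normalized by min(K, number of relevant items)."""
    if not relevant:
        return 0.0
    hits, score = 0, 0.0
    for i, item in enumerate(ranked[:k], start=1):
        if item in relevant:
            hits += 1
            score += hits / i
    return score / min(k, len(relevant))

def ndcg_at_k(ranked, relevant, k):
    """NDCG@K: discounted cumulative gain over the top-K (binary relevance,
    1/log2(position+1) discount), normalized by the ideal ordering's DCG."""
    dcg = sum(1.0 / math.log2(i + 1)
              for i, item in enumerate(ranked[:k], start=1) if item in relevant)
    ideal = sum(1.0 / math.log2(i + 1)
                for i in range(1, min(k, len(relevant)) + 1))
    return dcg / ideal if ideal > 0 else 0.0
```

For example, with a model ranking items as `[5, 1, 3, 2, 4]` and held-out test items `{1, 2}`, `precision_at_k(..., 3)` gives 1/3 (one of the top three is relevant). The library computes these per user and aggregates across users in multi-threaded C++ rather than interpreted code.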


Reference manual



0.1.6 by David Cortes, 2 days ago


Report a bug at https://github.com/david-cortes/recometrics/issues

Browse source code at https://github.com/cran/recometrics

Authors: David Cortes

Documentation:   PDF Manual  

BSD_2_clause + file LICENSE license

Imports Rcpp, Matrix, MatrixExtra, float, RhpcBLASctl, methods

Suggests recommenderlab, cmfrec, data.table, knitr, rmarkdown, kableExtra, testthat

Linking to Rcpp, float
