Evaluation Metrics for Machine Learning

An implementation, in R, of evaluation metrics commonly used in supervised machine learning. It covers regression, time series, binary classification, multiclass classification, and information retrieval problems. It has zero dependencies and a consistent, simple interface for all functions.
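As a sketch of that consistent interface (assuming the package is installed from CRAN), every metric takes a vector of actual values first and a vector of predictions second:

```r
# Install once if needed: install.packages("Metrics")
library(Metrics)

# Regression metrics: all take (actual, predicted)
actual    <- c(1.1, 1.9, 3.0, 4.4, 2.0)
predicted <- c(0.9, 1.8, 2.5, 4.5, 2.2)
rmse(actual, predicted)   # root mean squared error
mae(actual, predicted)    # mean absolute error

# Binary classification: actual is 0/1, predicted is a probability
labels <- c(1, 1, 0, 0, 1)
probs  <- c(0.9, 0.8, 0.3, 0.4, 0.7)
auc(labels, probs)        # area under the ROC curve
```

The example data above are illustrative, not from the package's documentation.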


Version 0.1.1 (2012-06-19)

  • initial release
  • 16 evaluation metrics implemented with test cases



Version 0.1.4, by Michael Frasco


Report a bug at https://github.com/mfrasco/Metrics/issues

Browse source code at https://github.com/cran/Metrics

Authors: Ben Hamner [aut, cph], Michael Frasco [aut, cre], Erin LeDell [ctb]

Documentation: PDF Manual

License: BSD 3-clause + file LICENSE

Suggests: testthat

Imported by ConsReg, MetaIntegrator, RSCAT, dblr, iml, immuneSIM, lilikoi, predtoolsTS, specmine, superml.

Depended on by PUPAIM, manymodelr.

Suggested by featurefinder, s2net, tfdatasets.
