Utilities for Scoring and Assessing Predictions

Combines a collection of metrics and proper scoring rules (Gneiting & Raftery, 2007) with an easy-to-use wrapper that automatically evaluates predictions. Beyond proper scoring rules, functions are provided to assess the bias, sharpness, and calibration (Funk et al., 2019) of forecasts. Several types of predictions can be evaluated: probabilistic forecasts (generally predictive samples generated by Markov chain Monte Carlo procedures), quantile forecasts, and point forecasts. Observed values and predictions can be continuous, integer, or binary. Users can either apply the scoring rules separately, in a vector / matrix format that can be used flexibly within other packages, or run an automatic evaluation of their forecasts. The latter is implemented with 'data.table' and provides a consistent and efficient framework for evaluating various types of predictions.
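As a minimal sketch of the sample-based workflow described above, the individual metric functions can be applied to a vector of observed values and a matrix of predictive samples. The function names `crps()` and `bias()` are assumptions inferred from the description and the imported 'scoringRules' package; they have not been verified against the 0.1.0 API and may differ between versions.

```r
# Sketch only: function names are assumed, not confirmed against v0.1.0.
library(scoringutils)

# Toy data: 30 observed counts and 500 predictive samples per observation,
# e.g. draws from an MCMC procedure (rows = observations, cols = samples).
set.seed(1)
true_values <- rpois(30, lambda = 1:30)
predictions <- replicate(500, rpois(30, lambda = 1:30))

# Continuous/integer proper scoring rule (lower is better) and bias,
# applied separately in the vector / matrix format mentioned above.
crps_scores <- crps(true_values, predictions)
bias_scores <- bias(true_values, predictions)

summary(crps_scores)
```

The same inputs could instead be passed to the automatic evaluation wrapper, which scores everything at once and returns a 'data.table' of results.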



Reference manual


install.packages("scoringutils")

0.1.0 by Nikos Bosse, 5 months ago


https://github.com/epiforecasts/scoringutils


Report a bug at https://github.com/epiforecasts/scoringutils/issues


Browse source code at https://github.com/cran/scoringutils


Authors: Nikos Bosse [aut, cre], Sam Abbott [aut], Joel Hellewell [ctb], Sophie Meakins [ctb], James Munday [ctb], Katharine Sherratt [ctb], Sebastian Funk [aut]


Documentation: PDF Manual


Task views: Time Series Analysis


License: MIT + file LICENSE


Imports data.table, goftest, graphics, scoringRules, stats

Suggests testthat, knitr, rmarkdown

