Algorithmic Fairness Metrics

Offers calculation, visualization and comparison of algorithmic fairness metrics. Fair machine learning is an emerging topic with the overarching aim of critically assessing whether ML algorithms reinforce existing social biases. Unfair algorithms can propagate such biases and produce predictions with a disparate impact on sensitive groups of individuals (defined by sex, gender, ethnicity, religion, income, socioeconomic status, or physical or mental disability). Fair algorithms rest on the principle that these groups should be treated similarly or receive similar prediction outcomes. The fairness R package offers the calculation and comparison of commonly and less commonly used fairness metrics across population subgroups. These metrics are described by Calders and Verwer (2010), Chouldechova (2017), Feldman et al. (2015), Friedler et al. (2018) and Zafar et al. (2017). The package also offers convenient visualizations to help interpret fairness metrics.
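A minimal sketch of how a fairness metric might be computed with the package, based on the package tutorial: it assumes the bundled compas dataset and the dem_parity() function with its documented arguments (outcome, group, probs, cutoff, base); argument names and dataset columns are taken from the tutorial and should be checked against the installed version.

```r
# Load the package and its bundled COMPAS example data
# (dataset name and columns assumed from the package tutorial)
library(fairness)
data("compas")

# Demographic parity: compare the number of positively classified
# individuals across ethnic groups, relative to a base group
res <- dem_parity(
  data    = compas,
  outcome = "Two_yr_Recidivism",  # binary outcome column
  group   = "ethnicity",          # sensitive attribute
  probs   = "probability",        # predicted probabilities
  cutoff  = 0.5,                  # classification threshold
  base    = "Caucasian"           # reference group (parity = 1)
)

# The result contains a metrics table and a ggplot2 visualization
res$Metric
res$Metric_plot
```

Other metric functions in the package (e.g. equal_odds(), prop_parity(), acc_parity()) follow the same calling convention, so swapping metrics typically only requires changing the function name.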




install.packages("fairness")

Version 1.2.2 by Nikita Kozodoi, 6 months ago


Tutorial: https://kozodoi.me/r/fairness/packages/2020/05/01/fairness-tutorial.html


Report a bug at https://github.com/kozodoi/fairness/issues


Browse source code at https://github.com/cran/fairness


Authors: Nikita Kozodoi [aut, cre], Tibor V. Varga [aut]


Documentation: PDF Manual


License: MIT + file LICENSE


Imports: caret, devtools, e1071, ggplot2, pROC

Suggests: testthat, knitr, rmarkdown

