Help Detect and Mitigate Bias in Machine Learning Models

The 'AI Fairness 360' toolkit is an open-source library to help detect and mitigate bias in machine learning models. The AI Fairness 360 R package includes a comprehensive set of metrics for datasets and models to test for biases, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
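Since the R package wraps the Python AIF360 library via reticulate, a typical session first loads the underlying backend and then builds a dataset object before computing fairness metrics. The sketch below is illustrative only: the helpers `load_aif360_lib()`, `binary_label_dataset()`, and `binary_label_dataset_metric()` and their argument names are assumptions based on the package's reticulate-based design, not confirmed signatures, and the column names are hypothetical.

```r
# Illustrative sketch only -- function and argument names are assumptions,
# and running this requires the Python aif360 backend installed via reticulate.
library(aif360)
load_aif360_lib()  # assumed helper that imports the Python aif360 backend

# Wrap tabular data that has a binary outcome column and a binary
# protected-attribute column (here "sex", hypothetical).
bld <- binary_label_dataset(
  data_path = my_data,                    # data frame or path, assumed
  favor_label = 1, unfavor_label = 0,     # favorable vs. unfavorable outcome
  unprivileged_protected_attribute = 0,
  privileged_protected_attribute = 1,
  target_column = "outcome",
  protected_attribute = "sex"
)

# Dataset-level group-fairness metrics for privileged vs. unprivileged groups.
metric <- binary_label_dataset_metric(
  bld,
  privileged   = list("sex", 1),
  unprivileged = list("sex", 0)
)
metric$disparate_impact()               # ratio of favorable-outcome rates
metric$statistical_parity_difference()  # difference of favorable-outcome rates
```

A disparate-impact ratio near 1 (or a statistical-parity difference near 0) suggests the dataset treats the two groups similarly on this metric; the toolkit's mitigation algorithms can then be applied when the values deviate substantially.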



0.1.0 by Saishruthi Swaminathan, a year ago


Authors: Gabriela de Queiroz [aut], Stacey Ronaghan [aut], Saishruthi Swaminathan [aut, cre]

Documentation: PDF Manual

License: Apache License (>= 2.0)

Imports reticulate, rstudioapi

Suggests testthat

See at CRAN