A comprehensive AI fairness exploration framework.
📈 Fairness reports and stamps
⚖️ Multi-value, multi-attribute fairness analysis
🧪 Backtrack computations to the building blocks of each measure
🖥️ ML compatible: numpy, pandas, torch, tensorflow, jax
FairBench strives to be compatible with the latest Python release, but because third-party ML libraries often lag behind new releases, only the previous release of the language (currently 3.12) is tested and stable.
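FairBench is distributed through PyPI, so it can typically be installed with pip install fairbench (assuming the package name matches the project name).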
import fairbench as fb

# load the COMPAS dataset and baseline predictions, holding out half for evaluation
x, y, yhat = fb.bench.tabular.compas(test_size=0.5)
# declare sensitive dimensions from categorical columns and analyse their intersections
sensitive = fb.Dimensions(fb.categories @ x["sex"], fb.categories @ x["race"])
sensitive = sensitive.intersectional().strict()
# compare all pairs of groups, keep only stamped findings, render as horizontal HTML
report = fb.reports.pairwise(predictions=yhat, labels=y, sensitive=sensitive)
report.filter(fb.investigate.Stamps).show(env=fb.export.Html(horizontal=True), depth=1)
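The quickstart above exercises most of the features listed earlier, except for backtracking and the advertised interoperability with numpy, pandas, torch, tensorflow, and jax. Below is a minimal sketch of both, assuming (this is not shown above) that the same report constructor accepts plain numpy arrays and Python lists, and that a larger depth in show expands reported values into their building blocks:

import numpy as np
import fairbench as fb

# toy inputs as raw arrays and lists (assumption: accepted directly)
yhat = np.array([1, 0, 1, 1, 0, 1])      # model predictions
y = np.array([1, 0, 0, 1, 0, 1])         # ground-truth labels
gender = ["m", "f", "f", "m", "f", "m"]  # a single sensitive attribute

sensitive = fb.Dimensions(fb.categories @ gender)
report = fb.reports.pairwise(predictions=yhat, labels=y, sensitive=sensitive)
# assumption: a larger depth backtracks each reported value into the
# intermediate quantities it was computed from
report.show(depth=2)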
@article{krasanakis2024standardizing,
  title={Towards Standardizing AI Bias Exploration},
  author={Emmanouil Krasanakis and Symeon Papadopoulos},
  year={2024},
  eprint={2405.19022},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}
Maintainer: Emmanouil (Manios) Krasanakis ([email protected])
License: Apache 2.0
Contributors: Giannis Sarridis
This project includes modified code originally licensed under the MIT License:
- ReBias (Copyright © 2020-present NAVER Corp.)
Modifications © 2024 Emmanouil Krasanakis.
See fairbench/bench/vision/datasets/mnist/ for details.