
FairBench


A comprehensive AI fairness exploration framework.
📈 Fairness reports and stamps
⚖️ Multivalue and multiattribute fairness analysis
🧪 Backtrack computations to measure building blocks
🖥️ ML compatible: numpy, pandas, torch, tensorflow, jax (see the sketch below)

FairBench strives to be compatible with the latest Python release, but because third-party ML libraries tend to lag behind, only the language's previous release (currently 3.12) is tested and considered stable.
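
As a minimal sketch of the ML-compatibility bullet above, predictions, labels, and sensitive attribute values can be passed as plain arrays. The toy data, the use of numpy arrays with fb.categories, and calling show without an explicit export environment are illustrative assumptions rather than documented behavior.

import numpy as np
import fairbench as fb

# Hypothetical toy data: binary labels and predictions, plus one sensitive attribute.
y = np.array([1, 0, 1, 1, 0, 0, 1, 0])
yhat = np.array([1, 0, 0, 1, 0, 1, 1, 0])
gender = np.array(["m", "f", "f", "m", "m", "f", "m", "f"])

# One fairness dimension per attribute value, as in the COMPAS example below.
sensitive = fb.Dimensions(fb.categories @ gender)

# Same pairwise report as in the example below; without an explicit env,
# a console view is assumed.
report = fb.reports.pairwise(predictions=yhat, labels=y, sensitive=sensitive)
report.show()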

Example

import fairbench as fb

# COMPAS benchmark data: test-split features, labels, and predictions.
x, y, yhat = fb.bench.tabular.compas(test_size=0.5)

# Sensitive dimensions from the "sex" and "race" columns, refined into intersectional subgroups.
sensitive = fb.Dimensions(fb.categories @ x["sex"], fb.categories @ x["race"])
sensitive = sensitive.intersectional().strict()

# Pairwise fairness report; keep only its stamps and render them as horizontal HTML.
report = fb.reports.pairwise(predictions=yhat, labels=y, sensitive=sensitive)
report.filter(fb.investigate.Stamps).show(env=fb.export.Html(horizontal=True), depth=1)

[Image: example of the generated HTML report of fairness stamps]
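
The HTML view above keeps only stamp values. The sketch below digs into the rest of the same report object, under the assumption that a console view is the default export environment and that larger depth values in show expand more of the backtracked computation blocks; the depth parameter appears in the example, but these exact semantics are not verified here.

# Continuing from the example above, with the same report object.
report.show(depth=2)  # assumed: no env falls back to a console view; higher depth shows more detail

# The stamp-filtered view from above, in the assumed console view instead of HTML.
report.filter(fb.investigate.Stamps).show(depth=1)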

Attributions

@misc{krasanakis2024standardizing,
  title={Towards Standardizing AI Bias Exploration},
  author={Emmanouil Krasanakis and Symeon Papadopoulos},
  year={2024},
  eprint={2405.19022},
  archivePrefix={arXiv},
  primaryClass={cs.LG}
}

Maintainer: Emmanouil (Manios) Krasanakis ([email protected])
License: Apache 2.0
Contributors: Giannis Sarridis

This project includes modified code originally licensed under the MIT License: