I noticed that the file fair-classification/utils.py implements the disparate-impact fairness constraint. How can we use this code to build a fair logistic regression with multi-group fairness, for example when race (White, Black, Asian, ...) is the protected attribute?
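One common way to extend the covariance-constraint approach behind that code from a binary to a multi-valued protected attribute is to one-hot encode each group and add one constraint per group, bounding the covariance between that group's indicator and the signed distance to the decision boundary. Below is a minimal, self-contained sketch of that idea using `scipy.optimize` directly; it does not call the repository's actual API, and the data, the bound `c`, and all variable names are illustrative assumptions:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic data: 3 racial groups encoded as 0, 1, 2; two features.
n = 300
race = rng.integers(0, 3, size=n)
X = rng.normal(size=(n, 2)) + race[:, None] * 0.5   # group-correlated features
y = (X[:, 0] + X[:, 1] + rng.normal(scale=0.5, size=n) > 1.0).astype(int)
X = np.hstack([X, np.ones((n, 1))])                 # append intercept column

def log_loss(w):
    z = X @ w
    # Numerically stable logistic loss for labels in {0, 1}:
    # log(1 + exp(z)) - y * z, rewritten to avoid overflow.
    return np.mean(np.maximum(z, 0) + np.log1p(np.exp(-np.abs(z))) - y * z)

# Multi-group fairness: for each group k, bound the covariance between
# the group indicator 1[race == k] and the decision value w.x by c.
# Each absolute-value bound becomes two SLSQP inequality constraints.
c = 0.1
constraints = []
for k in range(3):
    zc = (race == k).astype(float)
    zc = zc - zc.mean()          # center the indicator so the mean is covariance
    constraints.append({"type": "ineq",
                        "fun": lambda w, zc=zc: c - np.mean(zc * (X @ w))})
    constraints.append({"type": "ineq",
                        "fun": lambda w, zc=zc: c + np.mean(zc * (X @ w))})

res = minimize(log_loss, x0=np.zeros(3), method="SLSQP", constraints=constraints)
w_fair = res.x
```

Since the logistic loss is convex and the covariance constraints are linear in `w`, the problem stays convex, so a generic solver like SLSQP is adequate for a sketch; shrinking `c` toward 0 trades accuracy for fairness, as in the binary-attribute case.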