Note: This blog post and code would not have been possible without the fantastic research from our former research fellow Kweku Kwegyir-Aggrey and former machine learning researcher Jessica Dai. You can view their paper here.
Motivation
Many organizations use binary classifiers for a variety of purposes: helping loan providers decide who should get a loan, predicting whether an email is spam, or flagging transactions as potentially fraudulent. These use cases require specific classification thresholds. Imagine an algorithm predicting whether someone qualifies for a loan. One way to do this is to assign each applicant a probability; if that probability is above a certain threshold (say, 0.5), they get the loan. If not, they are rejected.
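For concreteness, here is a minimal sketch (not part of the Downstream Fairness package) of what thresholding a predicted probability looks like in code:

```python
# A minimal, illustrative sketch of turning a model's probability into a decision.
def approve_loan(probability: float, threshold: float = 0.5) -> bool:
    """Approve the loan if the predicted probability clears the threshold."""
    return probability >= threshold

print(approve_loan(0.73))                  # True with the default threshold of 0.5
print(approve_loan(0.73, threshold=0.8))   # False once the threshold is raised
```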
What is the proper threshold to use in these scenarios? Taking spam detection as an example, the threshold determines how often an email is classified as spam: a threshold of 0.8 flags far fewer emails than a threshold of 0.4. That is why many organizations define acceptable threshold ranges for their algorithms, which can complicate things.
Current bias mitigation techniques, such as the one we offer at Arthur, traditionally require you to change your classification threshold to meet some fairness definition. That new threshold could fall outside the range your company allows, raising the question of whether you can be fair at all. Things get even more complicated when a single model feeds many downstream applications, each with its own threshold range (and possibly its own fairness definition).
Downstream fairness solves this dilemma. It’s an algorithm that achieves various fairness definitions (equalized odds, equal opportunity, and demographic parity) in a threshold-agnostic way, meaning that a company won’t have to adjust its threshold. Instead, it operates on a binary classifier’s output probabilities to achieve a fairness definition. And this is all done with minimal accuracy loss! For the remainder of this blog post, we’ll dig deeper into this algorithm and how to use our new open source code.
Downstream Fairness
Leaving the mathematical details to the Geometric Repair paper, we will discuss the essence of how Downstream Fairness works and provide code snippets from our open source package. First off, downstream fairness is a post-processing algorithm that operates on the training dataset (or some representative dataset) for the model we are trying to make fair. The data needs to contain some key information: the prediction probability for each data point, the classification label, and a column containing the sensitive attribute you want to operate on.
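As a rough illustration, the input data might look something like the following. The column names here are hypothetical, chosen only to show the three required pieces of information; the package’s documentation tells you what it actually expects:

```python
import pandas as pd

# Hypothetical column names, for illustration only.
data = pd.DataFrame({
    "pred_proba": [0.81, 0.35, 0.67, 0.12],  # the model's predicted probabilities
    "label":      [1,    0,    1,    0],     # ground-truth classification labels
    "group":      ["A",  "B",  "A",  "B"],   # sensitive attribute (e.g., demographic group)
})
```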
The algorithm looks at the original model’s distribution of prediction probabilities for each group and then computes a repair of each of those distributions that achieves demographic parity. This works for demographic parity because its definition (equalizing selection rates across groups) only requires prediction probabilities and group information.
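To make “equalizing selection rates” concrete, here is a small sketch (again, not package code) that computes the selection rate per group at a given threshold; demographic parity asks for these rates to be (approximately) equal:

```python
def selection_rates(df: pd.DataFrame, threshold: float = 0.5) -> pd.Series:
    """Fraction of each group predicted positive at the given threshold."""
    flagged = df["pred_proba"] >= threshold
    return flagged.groupby(df["group"]).mean()

# Using the illustrative DataFrame from above:
print(selection_rates(data, threshold=0.5))
```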
On the implementation side, this process produces an adjustment table, which records how much each group’s prediction probabilities need to be shifted to achieve demographic parity. Below is an example of how that table might look:
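The exact format depends on the package, but as a purely hypothetical illustration (all values invented), an adjustment table pairs each group and prediction probability with the shift applied to it:

```
group    score    adjustment
A        0.62     -0.05
A        0.71     -0.07
B        0.62     +0.04
B        0.71     +0.06
```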

Luckily, this is all automated with our codebase! Here is an example of how to do this:
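The snippet below is a sketch of what the workflow might look like. The module, function, and parameter names (`downstream_fairness`, `get_adjustments`, `apply_adjustments`, and the column arguments) are placeholders rather than the package’s actual API; the demo notebook in the GitHub repo shows the real calls.

```python
import pandas as pd

# Placeholder names -- see the repo's demo notebook for the real API.
from downstream_fairness import get_adjustments, apply_adjustments

# Training (or representative) data containing prediction probabilities,
# true labels, and the sensitive attribute.
df = pd.read_csv("training_data.csv")

# 1. Learn the per-group adjustment table for the chosen fairness definition.
adjustment_table = get_adjustments(
    df,
    score_col="pred_proba",          # column of model probabilities
    label_col="label",               # column of ground-truth labels
    group_col="group",               # column of the sensitive attribute
    fairness="demographic_parity",   # or equalized odds / equal opportunity
)

# 2. At inference time, shift new prediction probabilities using the table.
#    The existing classification threshold stays untouched.
new_df = pd.read_csv("new_predictions.csv")
new_df["adjusted_proba"] = apply_adjustments(
    new_df, adjustment_table, score_col="pred_proba", group_col="group"
)
```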
And, unlike some other bias mitigation approaches, downstream fairness is a Pareto-optimal algorithm, meaning it achieves these fairness definitions with the minimum amount of accuracy loss.
Of course, there are some limitations. The dataset used to train downstream fairness must contain prediction probabilities for each class within each group, and there should be a good number of examples for each class within each group. If that is provided, the algorithm should work as expected.
Conclusion
We went through some of the algorithmic and implementation details of downstream fairness. If you want to explore the mathematical details further, please go read the paper. We at Arthur would love for you all to try out our work! Feel free to pip install our package and kick the tires a bit. If you find failure cases or think of new features, send your feedback to me at daniel.nissani@arthur.ai. Even better, please submit PRs or issues on our open source GitHub repo, which includes a demo notebook where you can try out all of the functionality described in this post.