ML Research

How We Are Modeling Our Human Values in Technology Is Inherently Flawed

Thank you to the original authors of Tensions Between the Proxies of Human Values in AI: Teresa Datta, Daniel Nissani, Max Cembalest, Akash Khanna, Haley Massa, and John Dickerson.

Intro

When we build automated machine learning systems, we believe those systems should not perpetuate or amplify harms. We want these algorithms to be fair to everyone, regardless of their gender, race, ethnicity, sexuality, and so on. Our information should be used and distributed only with our consent. And when these algorithms inevitably fail to work as expected, we deserve an explanation of why.

We can encapsulate these human values as the pillars of privacy, fairness, and explainability. Over the past generation, a great deal of work has focused on embedding these values into our technologies. However, as we try to proxy these pillars with technical definitions and algorithmic designs, tensions keep surfacing within the pillars, between the pillars, and with the real-world contexts in which the pillars’ proxies are deployed. So we must ask: Why do we continue to face these limiting tensions?

Tensions Within Pillars

It is well known in the AI community that popular definitions of fairness, such as demographic parity, equalized odds, and calibration, cannot all be satisfied by the same machine learning model [1]. Moreover, even when implemented correctly, these fairness definitions may cause more harm than good over time, given the feedback loops an algorithm can create [2].
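
To make these definitions concrete, here is a minimal sketch, using synthetic data and an arbitrary decision threshold that are purely our own illustrative choices, of how demographic parity and equalized odds gaps are typically measured. When base rates differ across groups, as in this toy data, [1] shows that a single model cannot drive all of these gaps to zero while remaining calibrated.

```python
import numpy as np

# Synthetic illustration: base rates differ across two groups, which is
# exactly the setting where the impossibility result in [1] bites.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=5000)                  # protected attribute
base_rate = np.where(group == 0, 0.3, 0.6)             # unequal base rates
y_true = (rng.random(5000) < base_rate).astype(int)    # true outcomes
scores = np.clip(0.5 * y_true + rng.normal(0.25, 0.2, 5000), 0, 1)
y_pred = (scores > 0.5).astype(int)                    # thresholded predictions

def demographic_parity_gap(y_pred, group):
    """Difference in positive-prediction rates between the two groups."""
    return abs(y_pred[group == 0].mean() - y_pred[group == 1].mean())

def equalized_odds_gap(y_true, y_pred, group):
    """Largest gap in true/false positive rates between the two groups."""
    gaps = []
    for label in (0, 1):  # label 0 -> FPR gap, label 1 -> TPR gap
        mask = y_true == label
        gaps.append(abs(y_pred[mask & (group == 0)].mean()
                        - y_pred[mask & (group == 1)].mean()))
    return max(gaps)

print("demographic parity gap:", round(demographic_parity_gap(y_pred, group), 3))
print("equalized odds gap:    ", round(equalized_odds_gap(y_true, y_pred, group), 3))
```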

Privacy discussions, on the other hand, are largely dominated by the notion of differential privacy [3][4]: a probabilistic guarantee that the output of a mechanism reveals almost nothing about whether any single data point has been removed or replaced in the underlying dataset. However, differential privacy has been shown to work poorly on outliers [5][6] and on models that have overfit to their training data [7].
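
For intuition, the sketch below implements the classic Laplace mechanism from [3], one standard way to make a counting query differentially private; the dataset, query, and epsilon value are illustrative choices of ours, not recommendations.

```python
import numpy as np

def laplace_count(data, predicate, epsilon, rng=None):
    """Differentially private count using the Laplace mechanism.

    Adding or removing one record changes a count by at most 1 (sensitivity 1),
    so noise drawn from Laplace(scale = 1/epsilon) yields epsilon-DP.
    """
    rng = rng or np.random.default_rng()
    true_count = sum(predicate(x) for x in data)
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

# Illustrative query over made-up data: how many incomes exceed 50k?
incomes = [32_000, 54_000, 71_000, 48_000, 90_000, 61_000]
noisy = laplace_count(incomes, lambda x: x > 50_000, epsilon=0.5)
print("noisy count:", round(noisy, 2))  # the true count is 4; answers fluctuate
```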

Explainability has become a pressing demand now that black-box, hard-to-interpret models have gained popularity. However, the methods used to produce explanations are generally just local approximations of the model [8]. Some critics of the field say it is akin to reading tea leaves [9].
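
The "local approximation" point can be seen in a minimal LIME-style sketch: perturb an input, query the black box, and fit a proximity-weighted linear surrogate whose coefficients serve as the explanation. The black-box function, kernel width, and sample count below are all toy choices of ours, not a faithful reproduction of any particular explanation library.

```python
import numpy as np

def black_box(X):
    """Stand-in for an opaque model: a nonlinear function of two features."""
    return np.sin(3 * X[:, 0]) + X[:, 0] * X[:, 1]

def local_linear_explanation(x, n_samples=500, kernel_width=0.5, seed=0):
    """LIME-style sketch: fit a proximity-weighted linear surrogate around x.

    The returned coefficients approximate the model only near x; they are not
    a faithful global description of its behavior.
    """
    rng = np.random.default_rng(seed)
    X = x + rng.normal(scale=0.3, size=(n_samples, x.size))      # local perturbations
    y = black_box(X)                                             # query the black box
    w = np.exp(-np.sum((X - x) ** 2, axis=1) / kernel_width**2)  # proximity weights
    A = np.hstack([X, np.ones((n_samples, 1))])                  # add an intercept
    Aw = A * w[:, None]
    # Weighted least squares: solve (A^T W A) beta = A^T W y
    beta, *_ = np.linalg.lstsq(Aw.T @ A, Aw.T @ y, rcond=None)
    return beta[:-1]                                             # per-feature weights

x0 = np.array([0.2, -0.1])
print("local feature weights near x0:", local_linear_explanation(x0))
```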

Tensions Between Pillars

A summary of the tensions identified within and between popular value proxies. Incorporating any one of these pillars is a challenge, and incorporating multiple requires handling competing priorities.

Much like how popular notions of fairness cannot be implemented in the same model [1], differential privacy and any popular fairness definition cannot be imposed on the same model [10]. In other words, the impossibility of having different notions of fairness in machine learning extends to having any notion of fairness in tandem with differential privacy.

Making models more transparent can also be in direct competition with keeping them private. Research has shown that popular explanation techniques, including newer ones such as counterfactual explanations, can make models more susceptible to membership inference attacks [11].
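
The work in [11] exploits explanations specifically (for example, distances to counterfactuals); as background on what membership inference means, here is a minimal sketch of the simpler loss-threshold variant, using an overfit scikit-learn model on synthetic data chosen purely for illustration.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

# Synthetic data: half is used for training ("members"), half is held out.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_mem, y_mem, X_non, y_non = X[:1000], y[:1000], X[1000:], y[1000:]

# An unregularized tree memorizes its training set, which is exactly the
# leakage that membership inference attacks exploit.
model = DecisionTreeClassifier(random_state=0).fit(X_mem, y_mem)

def per_sample_loss(model, X, y):
    """Cross-entropy of each example under the model's predicted probabilities."""
    p = model.predict_proba(X)[np.arange(len(y)), y]
    return -np.log(np.clip(p, 1e-12, 1.0))

member_loss = per_sample_loss(model, X_mem, y_mem)
non_member_loss = per_sample_loss(model, X_non, y_non)

# The simplest attack guesses "member" whenever an example's loss is tiny.
guesses = np.concatenate([member_loss, non_member_loss]) < 1e-6
truth = np.concatenate([np.ones(1000, dtype=bool), np.zeros(1000, dtype=bool)])
print("mean loss, members:    ", member_loss.mean())
print("mean loss, non-members:", non_member_loss.mean())
print("loss-threshold attack accuracy:", (guesses == truth).mean())
```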

Explainability should be a useful tool for identifying unfairness, but in practice it often is not, and explanations can at times hide a model’s unfairness [12]. The converse can happen as well: explanations of a model can actually amplify unfairness against certain subpopulations [13].

Tensions in the Real World: A Call for Context-Aware Machine Learning

The 2020 Census used differential privacy for the first time to meet the Privacy Impact Assessment’s requirements. As a result, reported population counts fluctuated, which protected the privacy of individuals but potentially lessened the federal funding that small, rural populations would receive. For example, Native American reservations of fewer than 5,000 people saw their reported populations decrease by 34% on average. This type of error (an inherent feature of differential privacy) could cost a community the funding for a road to a nearby town or a new school [14].
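
This disparity follows from how noise of a fixed scale interacts with counts of different sizes. The sketch below uses a plain Laplace mechanism with a made-up epsilon (the Census Bureau’s actual TopDown algorithm and privacy-loss budget are far more involved) just to show that noise that is negligible for a large city can be a meaningful fraction of a small community’s count.

```python
import numpy as np

rng = np.random.default_rng(0)
epsilon = 0.1                    # illustrative privacy budget, not the Census's
noise_scale = 1.0 / epsilon      # Laplace scale for a count with sensitivity 1

for true_population in (500, 5_000, 500_000):
    noisy = true_population + rng.laplace(scale=noise_scale, size=10_000)
    rel_error = np.abs(noisy - true_population) / true_population
    print(f"population {true_population:>7}: "
          f"median relative error {np.median(rel_error):.2%}")
```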

In 2021, The Markup found that people of color were denied loans 40-80% more often than white applicants with similar financial profiles [15]. However, the findings were heavily criticized by “[t]he American Bankers Association, The Mortgage Bankers Association, The Community Home Lenders Association, and The Credit Union National Association ... saying the public data is not complete enough to draw conclusions, but did not point to any flaws in our computations.” The “incompleteness” of the data stems from the Home Mortgage Disclosure Act, which requires lenders to report “debts as a percentage of income, how much of the property’s assessed worth the person is asking to borrow,” but not “the applicant’s credit score,” because of fears of re-identification attacks.

Both of these examples show how applying formulations of our human values without acknowledging context can lead to dire consequences. In the case of the Census, although differential privacy offers a guarantee of a certain kind of privacy, it adversely affects populations that need both privacy and more equitable funding. As a technical notion, differential privacy has no way of knowing the context in which it is being implemented, and without critical structures in place it can go awry, causing the consequences outlined above. Similarly, although not necessarily in an algorithmic sense, auditing the mortgage system requires transparency about the data used to grant mortgages. Decisions therefore need to be made with contextual knowledge, so that specific entities can have access to the required information.

In bioethics and related ethical fields that are more mature than responsible AI, context is incredibly important. Doctors have access to different information depending on their physical context, such as whether they are in a hospital or in their car on the way to the hospital. The ethical concerns around the collection of biometric information are affected by the specific device being used. Context informs how information flows, what information is collected and used, and why certain decisions are made. If we hope to have more ethical machine learning systems, the incorporation of context could be a viable avenue [16].

How Can We Address These Issues?

To alleviate the issues we’ve described, we believe the whole system should be considered when designing machine learning solutions. The techniques (and laws) described above primarily deal with the model: differential privacy inhibits extraction attacks on models, fairness definitions constrain models to output fairer predictions, and explainability techniques are used to explain model outputs. Rather than designing solutions only for the model, we can look at the entire system the model is deployed into, determine the values most appropriate to embed, and consider their associated consequences.

Because this is a nascent research area, we don’t want to prescribe solutions, but there are theories we can look to for inspiration. Contextual integrity for privacy [17] is a way to encode the context of a situation, allowing us to understand the privacy requirements of a technology. Substantive algorithmic fairness [18] asks us to analyze the structural inequalities present, identify the reforms that could mitigate them, and consider whether an algorithmic intervention could achieve such a mitigation. For explainability, designing around the needs of a situation and stakeholders’ understanding of transparency can lead to better techniques [19].

As we said above, this is not a solved problem. If anything, the idea of incorporating context in automated machine learning systems is itself very new. Thus, we want to leave the reader with questions we are interested in researching:

  1. How should information be collected by a contextual system? Data collection is a hot-button issue, so we have to collect data with intent. Contextual integrity for privacy offers inspiration for defining context as a set of parameters that govern what data is collected (see the sketch after this list).
  2. What types of tools need to be developed? Building out frameworks, evaluation suites, and more will be helpful, but we should consider what we need to build to make these systems effective and ethical.
  3. How should machine learning systems respond to context? Splitting this up, we should consider both what triggers a response to context and how the user experiences that response.
  4. What aspects of ethical responsibility does each stakeholder carry? The creator of the technology, the person implementing the technology, and the person making decisions with this technology each have a different ethical role to play.
  5. How can we design inclusively? We can lean on participatory design principles to help us build these systems for everyone impacted.
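
As a thought experiment, here is one way the contextual integrity parameters from [17] (sender, recipient, data subject, information type, and transmission principle) might be written down as a structure that a data-collection pipeline could check. The field names, the clinical example, and the policy set are our own hypothetical choices, not an established API or standard.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class InformationFlow:
    """Contextual-integrity parameters describing one flow of information [17]."""
    sender: str                  # who transmits the data
    recipient: str               # who receives it
    subject: str                 # whose data it is
    information_type: str        # what kind of information flows
    transmission_principle: str  # the condition under which the flow is appropriate

# Hypothetical norms: flows considered appropriate in a clinical context.
APPROPRIATE_FLOWS = {
    ("patient", "treating_physician", "patient", "biometrics", "for_treatment"),
}

def flow_is_appropriate(flow: InformationFlow) -> bool:
    """Check a proposed information flow against the declared context norms."""
    key = (flow.sender, flow.recipient, flow.subject,
           flow.information_type, flow.transmission_principle)
    return key in APPROPRIATE_FLOWS

# A flow that reuses clinical biometrics for marketing violates the declared norms.
ad_targeting = InformationFlow("patient", "ad_network", "patient",
                               "biometrics", "for_marketing")
print("ad targeting flow appropriate?", flow_is_appropriate(ad_targeting))  # False
```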

Conclusion

Technology is inherently value-laden and political [20][21]. It distributes information in specific ways, influencing how we make decisions. Moreover, although technology has the potential to help those in the most need, those who need it most are regularly ignored in the process of designing popular technologies.

Arthur’s research team believes that context-aware systems, those able to incorporate knowledge about the specific domain a machine learning model is situated in, are a potential path toward solving some of the issues above and evaluating the consequences of such a system before deployment. Context awareness is a hard problem because it will most likely involve collecting new information about a specific deployment at some point in the model-building or productionization process.

Citation Links

Below is a list of links to the citations in this blog post. Note that our paper goes much deeper and contains a fuller set of citations.

  1. https://arxiv.org/abs/1609.05807
  2. https://arxiv.org/abs/1803.04383
  3. https://people.csail.mit.edu/asmith/PS/sensitivity-tcc-final.pdf
  4. https://www.iacr.org/archive/eurocrypt2006/40040493/40040493.pdf
  5. https://arxiv.org/abs/1507.06763
  6. https://arxiv.org/abs/1910.13427
  7. https://arxiv.org/abs/1709.01604
  8. https://arxiv.org/abs/2206.01254
  9. https://docs.google.com/presentation/d/1bPUE2eD3NIYHYLm_D9njaVWgEgYXGpoEOjcnSaV7ccs/edit#slide=id.p
  10. https://crcs.seas.harvard.edu/files/crcs/files/ai4sg-21_paper_23.pdf
  11. https://arxiv.org/abs/1907.00164
  12. https://arxiv.org/abs/2205.03295
  13. https://arxiv.org/pdf/2106.13346.pdf
  14. https://www.nytimes.com/interactive/2020/02/06/opinion/census-algorithm-privacy.html
  15. https://themarkup.org/denied/2021/08/25/the-secret-bias-hidden-in-mortgage-approval-algorithms
  16. https://www.nature.com/articles/s41746-018-0075-8
  17. https://scholarlypublishingcollective.org/psup/information-policy/article/doi/10.5325/jinfopoli.1.2011.0149/314319/Privacy-in-Context-Technology-Policy-and-the
  18. https://arxiv.org/abs/2107.04642
  19. https://arxiv.org/abs/2101.09824
  20. https://web.cs.ucdavis.edu/~rogaway/papers/moral-fn.pdf
  21. https://arxiv.org/abs/1811.03435