Learnings from TTC Summit & Good Tech Fest

Earlier this month, I attended the Trust, Transparency, and Control Labs (TTC Labs) Summit on Trustworthy AI experiences and Good Tech Fest. Both conferences offered sessions centered on one core question: How do we make technology that serves humankind?

The sessions wrestled with tensions in regulatory practices that make it hard for governments to enforce AI protections, discussed the nuance and meaning required for good explanations in AI systems, and dissected the privilege embedded in AI ethics today and how we can change it by engaging the communities most likely to be harmed from the outset. The conferences left me thinking that transparent AI systems, standardized international regulatory mechanisms, and impact assessments can help AI be good for everyone. Read on to see how I arrived at these learnings.

Tensions Around Regulatory Practices

During the Transparency & Explainability in AI Regulation panel, several tensions in the AI regulatory space came up. EU laws like the GDPR and the forthcoming AI Act focus on citizen rights, whereas U.S. federal data laws, such as HIPAA, focus on sectoral regulation. This difference in the intent of the laws makes it hard to develop frameworks and guidance for organizations to follow, especially as more and more organizations operate internationally. And even where such frameworks offer good guidelines (as some do, such as the ICO's AI and data protection risk toolkit), building tooling to automate and manage compliance across an organization remains a challenging and cumbersome problem. My favorite part of the panel was the acknowledgement that until society writ large has an intuitive understanding of AI concepts, enforcing regulations is going to be extremely difficult.

Even something as simple as defining an AI system is quite challenging. According to one of the speakers on the AI & Society: Demonstrating Accountability panel, the AI Act actually provides some guidance on what counts as an AI system. Under this guidance, a manually coded decision tree, with its rules hard-coded, is not considered an AI system, but a trained decision tree that learns exactly the same decision rules from a training dataset is. This implies that the former would not need to abide by the rules the AI Act puts in place. Although definitions will always contain loopholes, acknowledging them is a first step toward refining definitions over time.
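
To make the distinction concrete, here is a minimal sketch of both cases. The loan-style rule, the thresholds, and the use of scikit-learn are my own illustration, not anything taken from the AI Act or the panel.

```python
from sklearn.tree import DecisionTreeClassifier

# Hand-written rule: under the definition discussed on the panel,
# hard-coded logic like this would not count as an AI system.
def approve_manually(income: float, debt: float) -> bool:
    return income > 50_000 and debt < 10_000

# The same rule learned from data: this would count as an AI system,
# even though it can encode an identical decision boundary.
X = [[60_000, 5_000], [40_000, 5_000], [60_000, 20_000], [30_000, 15_000]]
y = [approve_manually(income, debt) for income, debt in X]
learned_tree = DecisionTreeClassifier().fit(X, y)

# Both produce the same answer for the same applicant.
print(approve_manually(60_000, 5_000), learned_tree.predict([[60_000, 5_000]])[0])
```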

Going Past Explainable Models

Researchers from Google walked us through some of their explainability case studies, an exercise they use to think through the nuances behind explainability in AI systems. What fascinated me most was that the case study we did together pushed us past the conventional idea of explaining why a model made a certain prediction. We discussed how much information to present, how to frame it, and how the urgency or necessity of that information should prompt different types of responses from AI systems. These are fine-grained ideas to grapple with, and they start to move into the territory of design and HCI work on explainability. One of the panelists on the Where Control Happens panel emphasized on multiple occasions that explanations need to be meaningful. It isn't enough that an explanation happens; it needs to be useful to the user, ideally giving them agency they otherwise would not have had.
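
For a sense of the baseline being moved past, the "conventional" explanation is the familiar per-prediction feature attribution. The sketch below uses the shap library on a toy scikit-learn model purely as an illustration; neither the dataset nor the tooling comes from the summit.

```python
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

# Train a toy model and attribute a single prediction to its input features.
data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)
attributions = explainer.shap_values(data.data[:1])

# The attributions say which features pushed this prediction up or down, but
# nothing about how much to show a user, how to frame it, or what the user
# can do with it: the questions the case study actually focused on.
print(attributions)
```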

Tenets of AI Ethics

When trying to design ethical AI systems, we need to think about both who is involved in creating the system and who is impacted by it. At both conferences, this idea was brought up repeatedly as something that needs to be addressed. Specifically, we need to focus on communities that have no voice in the creation of AI systems, because they are usually the ones most susceptible to harm. During the Explainable to All: Designing AI Experiences panel, several panelists discussed how bringing in voices that are traditionally underrepresented, if represented at all, will allow us to at least mitigate some of the harm that AI systems may impose on a community. But this is much easier said than done. As raised on the panel, ideas of fairness, transparency, and consent in AI systems are primarily notions from the Global North. Even using frameworks like participatory design to address these ideas proves challenging, because the notion of participatory design itself comes from the Global North. As a result, impacted communities in the Global South may have no good way to give consent, or may not share the cultural framing needed to participate effectively in a participatory design process. Addressing the cultural assumptions behind our notions first will help us start to actively include these communities.

The same considerations apply when collecting data. As discussed at Good Tech Fest's session on Missing Voices in Natural Language Processing, the majority of the language data used in large language models comes from privileged communities with access to technology. We therefore need to be conscious of our language data collection mechanisms and of how we deploy and assess large language models. The presenter even cited one of our blog posts as a good guide to ethical natural language processing practices.

Takeaways and Resources

Overall, these conferences expanded my thinking on a lot of topics and reminded me of the nuance and complexity that fairness and explainability hold. Below are some takeaways, along with resources to help you continue your own journey with these concepts.

  1. Transparency is key to solving the challenge of trustworthy AI. By explicitly showing how systems work, giving individuals agency, and engaging with the communities being impacted, we can develop AI systems that benefit humanity. Some of my favorite papers on transparency are documentation-based, such as Datasheets for Datasets and Model Cards (a minimal sketch of a model card's structure follows this list).
  2. Creating standardized regulations for AI systems and consistent auditing frameworks will also help. Realizing these ideas is easier said than done, but being able to enforce regulations, while also providing tools that let organizations succeed under that enforcement, will lead to more ethical uses of AI.
  3. Starting with impact assessments will help organizations understand who will be harmed most by their AI systems. Once those groups are identified, one of the panelists on the Explainable to All: Designing AI Experiences panel suggested that inviting representatives from those communities and reaching consensus with them will help address harms while also providing an effective proxy for consent.
  4. Lastly, part of my attendance at Good Tech Fest was as a DataKind volunteer. During DataKind's session, we helped nonprofits start to think through their data-scienceable problems. DataKind offers a playbook for how to design and scope data-scienceable problems ethically.
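
As a reference point for the documentation-based transparency mentioned in the first takeaway, here is a minimal sketch of the sections proposed in the Model Cards paper; the section names follow the paper, while the example values are entirely hypothetical.

```python
# Illustrative sketch only: section names follow "Model Cards for Model
# Reporting" (Mitchell et al.); all values are hypothetical placeholders.
model_card = {
    "model_details": "Gradient-boosted classifier, v1.2, trained May 2022",
    "intended_use": "Rank support tickets by urgency; not for automated denials",
    "factors": "Performance reported across customer language and region",
    "metrics": "AUC and false-positive rate per subgroup",
    "evaluation_data": "Held-out sample of tickets from 2021",
    "training_data": "Internal ticket archive, 2018-2021, PII removed",
    "quantitative_analyses": "Disaggregated results for each factor above",
    "ethical_considerations": "Non-English tickets are under-represented",
    "caveats_and_recommendations": "Re-check subgroup metrics after retraining",
}
```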

Conclusion

Fairness and explainability are extremely nuanced and important fields because, done right, they can help make the world a better place. That is part of the reason I am so excited to be part of the Arthur team. The tools we are building help organizations operationalize the latest research in fairness and explainability, allowing them to realize their aspirations to build AI ethically. Looking forward to continuing the journey!

Interested in learning more about what Arthur can do for you? Get in touch.