Arthur AI

Introducing Arthur Research Fellow: Sahil


June 26, 2020

Can you tell us a little bit about your background?

I’m a first-year PhD student at the Paul G. Allen School of Computer Science & Engineering at UW, Seattle (I know that’s a long name!). Before joining my grad program I worked on a couple of things as an undergrad, and one of those areas was fairness in machine learning. I published my first paper in the area in 2018 (“Fairness Definitions Explained”), expounding on the definitions of fairness for the community. That paper required an extensive literature survey, which got me interested in the area.

The paper gained momentum and has garnered more than 100 citations in two years. I kept thinking about the next problem: given that we have detected bias in a system, how can we mitigate it? So when I joined my PhD program I proposed my first project, which was about removing bias from data to make the overall machine learning system less biased, and we submitted that work to NeurIPS three weeks ago.

In the meantime I kept reading blogs, and that’s where I learned about Arthur back in January, before the pandemic. Everything got paralyzed so quickly that I’m glad we were able to move forward so fast. I’m grateful that the time between sending my email to Arthur and receiving the offer was very short!

What are you going to work on at Arthur?

My work at Arthur is in explainability rather than fairness. Many people think of the two as connected; I’m not sure I agree, because an extremely fair model can be explainable or not, and vice versa. I’ll be working on many facets of explainability while I’m here. One of these facets is called “counterfactual explanations”: given a datapoint and a trained ML model, you get a prediction, and you want to find a new datapoint that is close to the original one but for which the model predicts a different class.

This may sound similar to adversarial examples, but the main difference between the two is the actionability and feasibility of the counterfactual datapoint. For instance, if a person goes to a bank and their loan request gets rejected, a counterfactual explanation might suggest that they need a higher educational degree. But if you, as a real person, get an advanced degree, this will also affect your age, since the degree takes a few years to finish. An adversarial example won’t take these real-world constraints into account and might suggest “decrease your age by 17 months,” which is, of course, impossible. Counterfactuals are grounded in real-world relations and real-world feasibility. They can also take an individual’s personal preferences into account: some people might be perfectly happy to get an advanced degree, while others might find it easier to increase their earnings by a certain amount.
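The idea above can be sketched in a few lines of code. This is a minimal toy illustration, not any real counterfactual method: the loan model, its weights and threshold, the candidate actions, and their costs are all invented. The key point it demonstrates is the real-world coupling the interview mentions: taking the “earn another degree” action also advances the applicant’s age, which an unconstrained adversarial perturbation would ignore.

```python
# Toy sketch of a counterfactual search under feasibility constraints.
# The model, features, thresholds, and costs are hypothetical.

def approve(x):
    """Toy loan model over (income in $k, years of education, age)."""
    income, edu, age = x
    score = 0.04 * income + 0.3 * edu - 0.05 * age
    return score >= 6.0

def counterfactual(x, actions, max_steps=20):
    """Greedy search: repeatedly apply the cheapest feasible action
    until the model's decision flips, or give up after max_steps."""
    x = list(x)
    path = []
    for _ in range(max_steps):
        if approve(x):
            return x, path            # decision flipped: done
        name, apply_fn, cost = min(actions, key=lambda a: a[2])
        x = apply_fn(x)               # actions encode real-world couplings
        path.append(name)
    return None, path                 # no feasible counterfactual found

# Feasible actions as (name, transition, cost). Note that earning a
# degree also advances age; a raw adversarial perturbation would not
# respect that coupling (e.g. "decrease your age").
actions = [
    ("raise income by 10k", lambda x: [x[0] + 10, x[1], x[2]], 1.0),
    ("earn another degree (2 yrs)", lambda x: [x[0], x[1] + 2, x[2] + 2], 3.0),
]

applicant = [50, 16, 30]              # rejected under the toy model
cf, steps = counterfactual(applicant, actions)
```

Here the applicant starts below the approval threshold, and the search recommends the cheapest feasible path to approval (two income raises). Real counterfactual methods optimize a distance-plus-validity objective rather than this greedy loop, but the feasibility constraints play the same role.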

Why are counterfactual explanations important in ML today?

Because they’re actionable! Most people, when they talk about explainability, are asking how and why an ML system made a prediction. Why was this dog classified as a wolf, for example? Counterfactuals may not say why or how, but they help the person receiving a treatment via an ML prediction change their behavior (captured as features by the model) in a way that earns them the favorable outcome; in my earlier example, that means receiving the loan. This method doesn’t care about model internals, which can be very complex. Many believe there is a tradeoff between explainability and accuracy: the more complex and accurate the model, as we often see with deep neural networks, the harder it is to explain. Counterfactuals don’t require any such tradeoff. You can have super complex models and still get actionable counterfactual explanations.

What is the most exciting field in ML right now?

Explainability and fairness! Of course!!

What inspired you to pursue this field of research?

When I started in this field, I wasn’t very aware of the impacts of ML in the real world until I started reading the classic articles, like ProPublica’s exposé on racial bias in criminal risk scores. That got me kind of infuriated: why should a person who stole a bicycle get five times as long in jail as someone who committed a much more heinous crime, just because of a difference in their races? It sounded very unfair to me. I could be in that position someday. We need to correct this, because it can happen to any of us; just because it’s not happening to me right now doesn’t mean I can turn a blind eye to the issue. Along with that, the challenging aspect of the problem is exciting as well, and fairness can even help a model generalize.

Where do you think your work can have the most influence?

Two areas: the first is criminal justice, and the second is credit card and loan applications. Most of my work involves tabular data rather than high-dimensional data like audio or images. I am also working with a professor who studies fairness in recommender systems. Making recommendations fair is a much more intricate problem than classification, but it’s equally important, given that we get most of our information from recommendations and searches on platforms like Google, Spotify, and Amazon.

For example, if you search for “absentee ballots” on a search engine, you shouldn’t only get what one side of the discussion thinks or says. You need both sides in your results, but search engines are stuck in a pattern of sensationalizing things: sensational content gets more clicks, is served to more people as a result, and gets locked into a vicious cycle of ever-higher ranking. You can tie a lot of our current societal trouble to these engines controlling what people see and read online. Fairness would help, because it would give people a balanced overview drawn from more than one type of content.

Why did you choose to work with Arthur?

I learned about Arthur from a blog post and thought it would be an interesting place to work for the summer. When I met Keegan, I learned it was a small team, and I wanted to be able to interact with the whole team and form personal relationships, not just professional ones. I also wanted to learn how to work in a startup environment, because one day I might be interested in starting my own company, and this gives me a chance to work in a less structured, more freeform way. Finally, I wanted to have a real impact. Just yesterday I proposed an idea in a meeting, and today it’s on Arthur’s platform! Having a direct influence on the company’s product is very exciting to me.