Arthur AI
Our top takeaways from NeurIPS 2020 on Responsible Machine Learning

January 8, 2021

Responsible AI and ML fairness are no longer niche topics in the AI research community, as evidenced by all of the amazing research presented in these areas at NeurIPS 2020. Here are our top takeaways from the conference.

An Overview of Counterfactual Explainability

December 10, 2020

Counterfactual explainability (CFE) is an emerging technique for local, example-based, post-hoc explanation methods. Here's a summary of the key findings from a survey paper on CFE that we are presenting at NeurIPS 2020.

We’ve Just Raised Our Series A, and the Journey is Just Beginning

December 9, 2020

We're excited to announce our next phase of growth with a $15M Series A funding round led by Index Ventures, with participation from aCrew Capital, Homebrew, Work-Bench Ventures, AME Ventures, and Plexo Capital.

Product Update - Bias Monitoring v2.1

August 13, 2020

Arthur is thrilled to announce the official release of Bias Monitoring v2.1, previously in beta: a new way to visualize and programmatically combine sensitive attributes into subpopulations that can then be compared to one another in real time.

Recommendation Engines Need Fairness Too!

July 16, 2020

How does fairness apply when we think about model monitoring in recommender systems that use supervised and unsupervised learning to provide insight into user behavior? Most fairness exercises focus on binary classifiers, but what about other cases where it may not be as straightforward?

ArthurAI Fintech Innovation Lab: Class of 2020 Recap

July 1, 2020

The team at ArthurAI recently wrapped up 12 whirlwind weeks in the Fintech Innovation Lab, culminating in a Demo Day recording you can catch here. Enjoy!

Introducing Arthur Research Fellow: Sahil

June 26, 2020

We recently sat down with Arthur's latest research fellow, Sahil Verma, to talk about his work both inside and outside the company.

How To: Build a Production-Ready Model Monitoring System for your Enterprise

June 15, 2020

Model monitoring is becoming an essential component of any responsible AI strategy in the enterprise, but how would you go about building a solution on your own? Our VP of Machine Learning, Keegan Hines, explains.

AI During Black Swan Events

June 8, 2020

April and May look absolutely nothing like January or February, and with the situation still unfolding, June and July will be completely different as well. One of the less obvious impacts of this period of rapid change is on the behavior of AI models that play critical roles in our society.

How Explainable AI and Bias are Interconnected

April 24, 2020

Explainability doesn't seek to slow down advancements in AI; it seeks to make that advancement fairer and safer, both for everyday people and for the businesses implementing the AI. Explainability also goes hand in hand with decreasing bias in AI.

3 Reasons Model Monitoring is Vital for Strong AI Performance

April 16, 2020

Model monitoring is key to keeping artificial intelligence models performing well over time. To illustrate how model monitoring can be put to use, we've outlined three of the top ways data issues can cause AI performance loss below.

Fairness in Machine Learning is Tricky

April 8, 2020

Non-experts and experts alike have trouble even understanding popular definitions of fairness in machine learning — let alone agreeing on which definitions, if any, should be used in practice.

CB Insights AI 100

March 5, 2020

Today we are honored to announce that we've been included in the CB Insights AI 100!

Team Arthur at NeurIPS-19: A Retrospective

December 17, 2019

Arthur is fresh off the plane returning from NeurIPS, AI's largest — and somewhat infamous — research conference. While there, Arthur announced its seed round and hosted a 50-person model monitoring meetup next to the convention center. Beyond that, the full NeurIPS program was seven packed days of new advances and directional changes in the machine learning community. Here's what the experience was like for us, as told by Arthur Chief Scientist John Dickerson.