Together, we can shape the future of model operations while optimizing ML models for accuracy, explainability, and fairness to ensure compliance in highly regulated industries.
From the lab to the boardroom, we partner with data scientists, ML directors, and AI Center of Excellence leadership around the globe to launch real-world solutions. As enterprises progress on their AI maturity journey, we share research insights, advance whiteboard ideas, promote best practices, benchmark industry metrics, and inspire thought leadership.
John is co-founder and Chief Scientist at Arthur, the AI performance monitoring company, as well as a tenured professor of Computer Science at the University of Maryland. His research centers on solving practical economic problems using techniques from computer science, stochastic optimization, and machine learning. He received his PhD in computer science from Carnegie Mellon University (SCS CSD PhD '16).
Keegan is the Vice President of Machine Learning at ArthurAI and an Adjunct Professor at Georgetown University. His PhD work was at the University of Texas in the lab of Rick Aldrich, with a focus on bringing powerful statistical and computational methods to bear on the study of protein biophysics. He is generally interested in how we can use machine learning in a reliable and trustworthy way.
Max is a researcher at Arthur focused on simplifying and explaining machine learning models. Previously, he received an M.S. in Data Science from Harvard University, where he concentrated on interpretability and graph-based models. He is particularly excited about recent advances in applying abstract algebra, topology, and category theory to neural network design.
Jessica is a first-year PhD student in Computer Science at UC Berkeley, co-advised by Nika Haghtalab and Ben Recht. She previously spent two years at Arthur in engineering, research, and miscellaneous other roles, and received an Sc.B. in Computer Science from Brown University.
Teresa is a researcher at Arthur interested in the transparency and social impact of algorithmic systems through a human-centered lens. Her work focuses on use-case evaluations of tools for AI transparency and on context-based mechanisms for accountability. Previously, she worked on XAI and HCI projects while completing her M.S. in Data Science at Harvard University.
Valentine is a researcher at Arthur currently interested in data-centric approaches to improving model performance, as well as algorithmic and design approaches to making AI broadly usable. She comes from a data science background: she recently completed a master's in Computer Science at Columbia University and holds an undergraduate degree in Physics from UPenn.
Daniel is a researcher at Arthur interested in the ethical design and implementation of machine learning systems. Previously, he worked on synthetic data generation, specifically around unstructured text, at Gretel. He received a dual master's from Cornell Tech in Information Systems and Applied Information Sciences and a B.S. in Mathematics and Secondary Education from Northwestern University.
Avi is a research fellow at Arthur and a fifth-year PhD student in the Applied Math and Scientific Computation program at the University of Maryland. His work at Arthur focuses on explainability tools for neural networks. At the University of Maryland, he is advised by Tom Goldstein on his work in deep learning. His general interests range from security to generalization and interpretability, and he aims to expand our understanding of when and why neural networks work.
Arthur offers enterprise-grade monitoring of models. Some aspects of monitoring are well understood, industry standard, and “from the book.” Yet much of what we do necessitates deep interaction with the academic and policy communities: scalable, very-high-dimensional drift detection; understanding the context in which fair machine learning should be offered (if at all); explainability for novel model types and input data types; understanding what robustness means; interaction with existing and future legal frameworks; and more. Toward that end, since our inception, our Research Fellows program has recruited and cultivated relationships with top junior researchers in AI, ML, policy, and law, who spend a summer or semester with Arthur working toward the joint goal of publicly disseminating a research result. If you are a strong junior researcher interested in shaping the trustworthy and performant AI space, get in touch at firstname.lastname@example.org.
Michelle conducts research on interdisciplinary AI ethics theory and practical tools for fairness, and hopes to better understand how one might inform the other. In addition to her time at Arthur, she has enjoyed doing research with organizations including the Stanford NLP Group, the ACLU, and the Stanford ML Group, and teaching and designing curricula for CS classes at Stanford.
Naveen is an undergraduate at UC Berkeley. His research interests lie broadly at the intersection of theoretical computer science, machine learning, and economics. In particular, he's excited about applications of learning to mechanism design and new economic paradigms for data exchange. Naveen has worked on projects with applications in kidney exchange, e-commerce, matching theory, theoretical statistics, fairness, and machine learning operations. He has collaborated with researchers at the University of Maryland, UC Berkeley, and Harvard University.
Lizzie Kumar is a Ph.D. candidate in Computer Science at Brown University. Her research analyzes computational and regulatory strategies for evaluating machine learning models from an interdisciplinary perspective. Previously, she developed actuarial risk models on the Data Science team at MassMutual. Lizzie holds an M.S. in Computer Science from the University of Massachusetts at Amherst and a B.A. in Mathematics from Scripps College.
Kweku is broadly interested in machine learning and statistics, with a specific focus on the design of algorithms that audit machine learning models for fairness and robustness. He is also interested in questions that rigorously examine and critique data-driven technological solutionism. He is a PhD candidate in the Brown University Department of Computer Science and received his bachelor’s degree in Computer Science & Mathematics from the University of Maryland.
Sahil is a PhD student in the Department of Computer Science and Engineering at the University of Washington, Seattle. He is interested in answering questions related to explainability and fairness in ML models. In the past, Sahil has worked on developing novel techniques to generate counterfactual explanations for ML classifiers and also spearheaded a team that wrote a large and comprehensive survey paper on counterfactual explanations. Currently, Sahil is interested in problems of explainability in recommender systems and fairness in LLMs.
From academic publishing to real-world practice, learn how Humana and Arthur worked together to transform the third-largest health insurance provider in the nation.