Machine Learning Explained: Evidence on the Explainability and Fairness of Machine Learning Credit Models

Virtual Conference April 28, 2022

FinRegLab hosted a virtual conference on April 28, 2022, featuring research conducted by FinRegLab and Professors Laura Blattner and Jann Spiess of the Stanford Graduate School of Business on the use of machine learning in credit underwriting, with a particular focus on the potential implications of these models for explainability and fairness.

Machine learning models are drawing increased attention in credit underwriting because of their potential for greater accuracy and, particularly when combined with new data sources, greater financial inclusion. But because the models can be more complicated to analyze and manage, model transparency has become a critical threshold question for both lenders and regulators. The research empirically evaluates the performance and capabilities of currently available tools designed to help lenders develop, monitor, and manage machine learning underwriting models.

Additionally, the conference featured dedicated panels with subject matter experts and academics on fairness and explainability. The panels explored how tools designed to help lenders explain and manage ML underwriting models can foster responsible use of complex models in decisions with high stakes for consumers, firms, and communities.

Welcoming Remarks

Speaker

Melissa Koide, CEO & Director, FinRegLab

Presentation of Research & Q&A

The initial panel included a presentation summarizing the research team’s recent white paper, “Machine Learning Explainability & Fairness: Insights from Consumer Lending,” which evaluates seven proprietary tools as well as several open-source diagnostic methods to understand how their outputs could help lenders generate individualized consumer disclosures and manage fair lending compliance. The panel also included discussion by respondents and questions from audience members.

Laura Blattner, Assistant Professor of Finance, Stanford Graduate School of Business

Patrick Hall, Principal Scientist, bnh.ai

John Morgan, Managing Vice President, Capital One

P-R Stark, Director of Machine Learning Research, FinRegLab

Learn More About the Paper Here

Panel 1 – Explainability

The complexity of models derived by AI/ML algorithms poses fundamental challenges for oversight: How can model users and their regulators gain sufficient insight into how a model makes predictions to enable effective oversight and governance? The panel discussed debates over the use of inherently interpretable underwriting models as compared to post hoc diagnostic tools. The panel also considered which explainability challenges and limitations are most likely to be the focus for practitioners over the next decade.
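As background for the post hoc diagnostic tools the panel discussed, the sketch below shows one widely used open-source approach: generating per-applicant feature attributions with the shap package for a gradient-boosted scikit-learn model. It is illustrative only; the model, data, and feature names are synthetic assumptions, not drawn from the research or the tools evaluated in the white paper.

```python
# Illustrative sketch: post hoc feature attributions for a single applicant.
# Data, feature names, and model are synthetic; not from the FinRegLab research.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["credit_utilization", "months_since_delinquency", "income", "loan_amount"]
X = rng.normal(size=(1000, len(features)))
# Synthetic default outcome loosely tied to the first two features.
y = (X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Post hoc attribution: how much each feature pushed this applicant's score
# away from the model's average prediction.
explainer = shap.TreeExplainer(model)
applicant = X[:1]
attributions = explainer.shap_values(applicant)[0]
for name, value in sorted(zip(features, attributions), key=lambda t: -abs(t[1])):
    print(f"{name:>25s}: {value:+.3f}")
```

In practice, lenders would map the top negative attributions to standardized reason codes for adverse action notices; the panel's debate centered on whether such after-the-fact explanations are sufficient or whether inherently interpretable models are needed.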

Speakers

Molham Aref, CEO, Relational.ai

Krishnaram Kenthapadi, Chief Scientist, Fiddler

Laura Kornhauser, Co-founder and CEO, Stratyfy

Jann Spiess, Assistant Professor of Operations, Information & Technology, Stanford Graduate School of Business

Adam Wenchel, CEO, Arthur

Scott Zoldi, Chief Analytics Officer, FICO

Panel 2 – Fairness

Serious questions exist about the ability of lenders to deploy machine learning underwriting models that meet anti-discrimination and equity expectations. The panel discussed concerns that the use of machine learning underwriting models will increase fair lending risks. Panel members also discussed specific approaches to improving the fairness of underwriting models, such as frameworks for making fairness-performance tradeoffs and the prospects for adversarial debiasing.
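To make the tradeoff discussion concrete, the sketch below computes an adverse impact ratio, a common starting point when weighing a model's fairness against its performance. It assumes synthetic approval decisions and hypothetical group labels; it is not drawn from the research or any panelist's methodology.

```python
# Illustrative sketch: adverse impact ratio on synthetic approval decisions.
import numpy as np

rng = np.random.default_rng(1)
approved = rng.random(5000) < 0.55          # model approval decisions (synthetic)
group = rng.choice(["A", "B"], size=5000)   # hypothetical protected-class labels

def approval_rate(decisions: np.ndarray, groups: np.ndarray, label: str) -> float:
    """Share of applicants in the given group who were approved."""
    mask = groups == label
    return float(decisions[mask].mean())

rate_a = approval_rate(approved, group, "A")
rate_b = approval_rate(approved, group, "B")

# Adverse impact ratio: the lower approval rate relative to the higher one.
# Values well below 1.0 flag potential disparate impact and typically prompt
# a search for less discriminatory alternative models.
air = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"Approval rates: A={rate_a:.2%}, B={rate_b:.2%}, AIR={air:.2f}")
```

Metrics like this are only one input; the panel's focus was on how lenders decide how much predictive performance, if any, to trade for improvements in such measures, and whether techniques like adversarial debiasing can reduce that tradeoff.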

Speakers

Michael Akinwumi, Chief Tech Equity Officer, National Fair Housing Alliance

Sri Satish Ambati, CEO and Co-Founder, H2O.ai

Jay Budzik, CTO, Zest AI

Nick Schmidt, CEO, SolasAI

P-R Stark, Director of Machine Learning Research, FinRegLab

Related Publications

Machine Learning Explainability & Fairness: Insights from Consumer Lending
(April 2022)

Read More

The Use of Machine Learning for Credit Underwriting: Market & Data Science Context
(Sept. 2021)

Read More