UX of Explainable AI

Overview / Product Design @Unit21
Role / Designer (me!), Product manager, 2 ML engineers and Customer Success
Timeline / 1 month
Tools / Figma
Status / Delivered & in use

Problem

Unit21 helps banks and fintechs monitor fraudulent transactions through an alerting and rule-based system. We launched Alert Scores - ML-based probabilistic scores that show the likelihood of an alert being fraudulent. Because the end users were non-technical, there was confusion around how the score was produced, for example which factors were considered. Agents needed to be empowered to use the score correctly.

How do you build trust in ML-based scores so agents can make better decisions? What level of explainability is right for a non-technical audience?

Approach

I worked alongside the ML and product teams to define a visualization experience that supports explainability. The project started with 5 design partners, including banks and credit unions.

Within a few months of launching the visualization, we grew to 47 design partners and paying customers.

Solution

Designing an AI solution requires its own set of design principles. We optimized for building user trust through model explainability; I used SHAP visualizations to explain how the ML model arrives at a score.

#1

Introducing the right level of automation

Since this was the first release of Alert Scores, it was important to position the score as just one more piece of data to review during an investigation.

The score should not be the sole deciding factor when actioning an alert, and it shouldn’t be used in rules that action alerts automatically.

#2

Accompany the score with in-context explanation

Yes, customers are trained on alert scores before using them. However, we also ensured they had an in-context explanation to reference while working with the feature.

#3

Build trust through visually displaying model confidence

Users should be able to easily understand which factors and features led to the final score, so deciding how best to display the model's confidence was important.

In this case, visually showing how each feature increased or decreased the score was crucial, as it helps agents avoid over-trusting the score. I used SHAP (SHapley Additive exPlanations) visualizations, a game-theoretic approach to explaining the output of any ML model.
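To make the idea concrete, here is a minimal sketch of how a per-alert SHAP waterfall plot shows each feature pushing the score up or down from a baseline. The model, feature names, and data below are hypothetical placeholders, not Unit21's production setup; only the SHAP library calls are real.

```python
# Minimal SHAP waterfall sketch with a hypothetical fraud-alert model.
import pandas as pd
import shap
import xgboost

# Hypothetical alert features, used only for illustration.
X = pd.DataFrame({
    "transaction_amount": [120.0, 9800.0, 45.0, 310.0],
    "account_age_days": [900, 12, 2400, 30],
    "num_prior_alerts": [0, 4, 1, 2],
})
y = [0, 1, 0, 1]  # 1 = alert turned out to be fraudulent

model = xgboost.XGBClassifier().fit(X, y)

# Explain the model's predictions with SHAP.
explainer = shap.Explainer(model, X)
shap_values = explainer(X)

# Waterfall plot for a single alert: each bar shows how one feature
# increased or decreased the score relative to the baseline value.
shap.plots.waterfall(shap_values[1])
```

The waterfall layout maps directly to the design goal: agents see which features drove the score for this specific alert, rather than a single unexplained number.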

#4

Optimize for understanding

Using partial explanations, we helped users understand what each feature means. This meant presenting features in a human-readable form, with explanations that make sense in the context of the investigation.
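As a rough sketch of that translation layer, raw model feature names can be mapped to plain-language labels before they are shown next to the score breakdown. The feature names and copy here are hypothetical examples, not the actual Unit21 feature set.

```python
# Hypothetical mapping from raw feature names to human-readable labels.
FEATURE_LABELS = {
    "txn_amt_zscore_30d": "Transaction amount vs. the account's 30-day average",
    "acct_age_days": "How long the account has been open",
    "prior_alert_count_90d": "Alerts on this entity in the last 90 days",
}

def humanize(feature_name: str) -> str:
    """Return a friendly label, falling back to a cleaned-up raw name."""
    return FEATURE_LABELS.get(feature_name, feature_name.replace("_", " "))
```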
