Overview / Product Design @Unit21
Role / Designer (me!), working with a Product Manager, Engineering Manager, 4 Engineers, and Customer Success
Timeline / 2 months
Tools / Figma
Status / In Beta
Check fraud is a rampant problem: banks and credit unions have lost up to $24 billion to it as of 2023. We took a first step toward fighting check fraud by improving the highly manual alert investigation process agents go through, while also laying the foundation for AI-driven investigation.
I discovered gaps in the check image comparison flow and championed an image comparison tool that improved investigation efficiency by 80%.
Banks and credit unions hire agents to monitor alerted transactions and conduct investigations where necessary. Today, a single investigation can take 2+ hours, largely because information is scattered across multiple tools and it takes an unnecessary number of clicks to reach the relevant data.
One of the biggest UX issues lies in the check comparison feature: agents compare the current check image against previous images to spot any signs of fraudulent activity.
Though other projects sprouted from the research phase, I will drill down into just the image comparison portion.
Each bank defines its own process guidelines for what to investigate in a check transaction, e.g. reviewing transaction activity, the entities involved and their relationships, and comparing the check against past checks.
To better understand the investigation process and discover pain points, I conducted user research with 10 users across 5 banks, with roles ranging from Fraud Specialist to Fraud Manager to VP of Fraud.
Findings: One of the biggest issues identified was agents having to jump between platforms to find the information they needed.
To better understand what agents' day-to-day work looks like, I helped put some numbers to it.
According to Agents...
According to Fraud Managers...
As part of user research, I conducted user walkthroughs, competitive analysis, and ethnographic research. The following user requirements emerged as a result.
Comparing check images visually is a tedious task that seemingly should be automated. I considered whether we could automate the comparison and surface any discrepancies on the alert itself, which could save at least 5-10 minutes of investigation time.
However, AI needs to be built with context in mind. I went back to the Fraud Managers and discussed where automation fit into their investigation process. The biggest issues identified were:
The main takeaway was that agents first need the ability to view and compare check images within Unit21. Once agents are comfortable with the software, automation can be introduced in stages. I therefore designed the feature with eventual automation in mind.
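To make the staged-automation idea concrete, here is a minimal sketch (my illustration, not Unit21's implementation) of what a first stage could look like: a perceptual-hash pre-check that flags prior check images differing sharply from the current one, so obvious mismatches surface first while agents still make the call. It assumes Python with the Pillow and imagehash libraries, and the threshold is a hypothetical tuning parameter.

```python
# Sketch only: flag prior check images whose perceptual hash differs
# sharply from the current check, so agents review mismatches first.
from PIL import Image   # Pillow
import imagehash        # pip install imagehash

# Hypothetical Hamming-distance threshold; would need tuning on real data.
DISCREPANCY_THRESHOLD = 12

def flag_discrepancies(current_path: str, prior_paths: list[str]) -> list[tuple[str, int]]:
    """Return (prior image path, distance) pairs that exceed the threshold."""
    current_hash = imagehash.phash(Image.open(current_path))
    flagged = []
    for path in prior_paths:
        # Subtracting two image hashes yields their Hamming distance.
        distance = current_hash - imagehash.phash(Image.open(path))
        if distance > DISCREPANCY_THRESHOLD:
            flagged.append((path, distance))
    return flagged

# Example usage with hypothetical file names:
# flag_discrepancies("check_latest.png", ["check_2023_01.png", "check_2023_02.png"])
```

A pre-check like this only ranks or highlights candidates; the final judgment stays with the agent, which matches the staged introduction of automation the Fraud Managers asked for.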
Designing a good check image comparison tool is an interaction design challenge. It must be discoverable and integrated into agents' primary workflow to reduce unnecessary clicks. I iterated on a few versions and discussed their pros and cons with the engineering and product teams.
When agents are viewing a given check image, they have the option to compare it against a list of pre-selected check images. That list can include the latest images along with any that were marked as fraudulent.
Agents can use the Transaction Analysis functionality on the main page to filter and drill down into transactions they are interested in. From there, they can select the corresponding check images and compare them.
When agents are viewing a given check image, they can use a stepper widget to select, from a list of transactions, which check image they'd like to dig into. From there, they can compare the images.
I conducted usability testing with 7 users, ranging from power users who use the Unit21 platform for their day job to agents who conduct alert investigations on other platforms. The two major concerns we discovered were:
Usability testing revealed that the initial assumption of prioritizing flexibility at the expense of implementation time could lead to UX issues later. I therefore re-introduced option #1 and designed an end-to-end flow that
1) works for novice users who just want to compare against a list of pre-selected images, and
2) lets expert users pick their own images.
Additionally, I updated the UI to show a side-by-side image comparison instead of a scrolling pattern.
Check image comparison was swiftly developed in close collaboration with the development team and tested internally by the QA and Solutions Engineering teams. We worked closely with the data integration team to simplify the complex queries that load the most relevant check images. The project is now fully developed and ready to be rolled out to beta customers.
Designing SaaS software for enterprises can be tricky: sometimes the people you talk to are not the people who will eventually use the product. Through this project, I learned the importance of prioritizing conversations with agents over management, even when that meant finding users through unconventional means such as LinkedIn outreach. It is still imperative to balance user needs with business goals, so I made sure to collaborate with both management and the end users.