The rapid advancement of artificial intelligence, particularly generative AI tools like ChatGPT, is ushering in a new era of financial fraud. Malicious actors are leveraging these technologies to craft more convincing and sophisticated scams, putting consumers and institutions at greater risk than ever.
But it's not just phishing. Fraudsters are exploiting AI across multiple fronts, from cloned voices to synthetic identity documents, as the sections below illustrate.
Large FIs and Vendors Fight Back with AI
Financial institutions aren't sitting idle. Over the past year, incumbents like Mastercard, Jack Henry, FIS, PSCU, and others made considerable investments in AI and partnered with or acquired startups to bolster their defenses.
Large players like JPMorgan Chase are tuning custom generative AI models to analyze unstructured data such as emails and wire instructions for fraud signals. The models are trained on the bank's own data to learn to flag suspicious activity.
These models are additive, integrating into existing fraud prevention frameworks. AI is woven in at various points in the payment lifecycle – validating account details upfront, flagging anomalies in-flight, and scoring transactions for fraud risk after the fact. Layering cutting-edge AI on top of proven rule-based engines and other controls creates a formidable, end-to-end defense.
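To make that layering concrete, here is a minimal sketch of how a deterministic rule engine and a model-based risk score might be combined for a single payment. The field names, thresholds, and the `score_with_model` stub are illustrative assumptions, not any institution's or vendor's actual implementation.

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    beneficiary_is_new: bool     # first payment to this account?
    account_prevalidated: bool   # upfront account-detail validation passed?

def rule_checks(p: Payment) -> list[str]:
    """Deterministic, auditable rules from the existing engine (illustrative)."""
    flags = []
    if not p.account_prevalidated:
        flags.append("unvalidated_beneficiary_account")
    if p.beneficiary_is_new and p.amount > 10_000:
        flags.append("large_first_payment_to_new_beneficiary")
    return flags

def score_with_model(p: Payment) -> float:
    """Placeholder for an ML/AI risk model returning a 0-1 fraud score.
    A real deployment would call the institution's trained model here."""
    risk = 0.05
    if p.beneficiary_is_new:
        risk += 0.3
    if p.amount > 10_000:
        risk += 0.2
    return min(risk, 1.0)

def decide(p: Payment) -> str:
    """Layered decision: rules can hold a payment outright;
    the model score adds a probabilistic second opinion."""
    flags = rule_checks(p)
    risk = score_with_model(p)
    if flags and risk > 0.5:
        return f"HOLD for review (flags={flags}, risk={risk:.2f})"
    if flags or risk > 0.8:
        return f"STEP-UP verification (flags={flags}, risk={risk:.2f})"
    return f"RELEASE (risk={risk:.2f})"

print(decide(Payment(amount=25_000, beneficiary_is_new=True, account_prevalidated=False)))
```

The point of the sketch is the division of labor: rules stay transparent and auditable, while the model supplies a risk signal that can escalate or de-escalate the final action.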
Payment leaders like Visa, Mastercard, Verafin, and others are taking the same approach, incorporating new AI capabilities into their fraud offerings. As a result, most FIs benefit from AI-enhanced fraud protection without needing to undertake custom AI fraud projects themselves.
The Biometric Authentication Arms Race
Biometrics vendors broadly agree that in the current arms race between voice synthesis and synthetic voice detection, biometrics alone are inadequate authentication for something as high-risk as an ACH transaction.
Additional layers of probabilistic scoring, which factor in behavioral data such as time of call, phone number, and device/connection information, are vital to selecting the appropriate level of challenge (e.g., password, MFA, out-of-wallet questions).
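As a rough illustration of that step-up logic, the sketch below blends a hypothetical voice-biometric match score with the behavioral and device signals mentioned above to pick a challenge level. The weights, signal names, and cutoffs are assumptions chosen for readability, not any vendor's scoring model.

```python
def challenge_level(voice_match: float, call_hour: int,
                    known_phone_number: bool, known_device: bool) -> str:
    """Pick an authentication challenge from a blended risk score.

    voice_match: 0-1 confidence from the biometric engine (hypothetical scale).
    The other arguments are the behavioral/device factors discussed above.
    Weights and cutoffs are illustrative only.
    """
    risk = 1.0 - voice_match               # weaker voice match -> more risk
    if call_hour < 6 or call_hour > 22:    # unusual time of call
        risk += 0.2
    if not known_phone_number:
        risk += 0.3
    if not known_device:
        risk += 0.3

    if risk < 0.3:
        return "password"
    if risk < 0.7:
        return "MFA"
    return "out-of-wallet questions"

# Example: a strong voice match, but an unknown number and device at 2 a.m.
print(challenge_level(voice_match=0.9, call_hour=2,
                      known_phone_number=False, known_device=False))
```

Even a confident biometric match gets challenged harder when the surrounding signals look wrong, which is the practical meaning of "biometrics alone are inadequate."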
No biometric solution will ever be perfect: accuracy in identifying synthetic audio varies substantially, from below 80% to over 98%, depending on how well the anti-spoofing model's training matches the live audio it has to evaluate.
It will be up to individual institutions to determine their risk appetite and set requirements for accuracy and specificity. As with automation broadly, the performance target for adoption often isn't perfection; it just needs to beat human performance.
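For institutions setting those requirements, the terms reduce to familiar detection metrics. The sketch below computes sensitivity (share of synthetic audio caught) and specificity (share of genuine callers passed) for a hypothetical detector at a chosen threshold; the scores and labels are made up purely for illustration.

```python
# Hypothetical detector scores: higher = more likely synthetic.
# Labels: True = synthetic audio, False = genuine caller. Data is made up.
samples = [
    (0.92, True), (0.81, True), (0.40, True),    # one deepfake slips through
    (0.10, False), (0.22, False), (0.65, False)  # one genuine caller flagged
]
THRESHOLD = 0.5  # an institution's chosen operating point

tp = sum(1 for s, synthetic in samples if synthetic and s >= THRESHOLD)
fn = sum(1 for s, synthetic in samples if synthetic and s < THRESHOLD)
tn = sum(1 for s, synthetic in samples if not synthetic and s < THRESHOLD)
fp = sum(1 for s, synthetic in samples if not synthetic and s >= THRESHOLD)

sensitivity = tp / (tp + fn)  # synthetic audio correctly caught
specificity = tn / (tn + fp)  # genuine callers correctly passed

print(f"sensitivity={sensitivity:.0%}, specificity={specificity:.0%}")
# Moving THRESHOLD trades one metric against the other; where to set it
# depends on the institution's risk appetite.
```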
Know Your Customer (KYC) AI Fraud Is Already Here
Digital identity proofing, authentication, and Know Your Customer (KYC) checks are getting more difficult due to the democratization of advanced AI image and video generation and editing capabilities.
Generative AI hobbyists online are pioneering prompts, fine-tuned models, and sophisticated combinations of open- and closed-source technologies to produce highly plausible KYC-style verification photos and videos. Neither of the people in the images below is real. Source: Reddit; u/tabula_rasa22, u/Bizzyguy
Moreover, these results come from part-time enthusiasts working with consumer-attainable technical setups; we should expect companies, hacker groups, organized criminals, and state actors with far deeper pockets to fool KYC checks far more quickly and reliably.
There's already plenty of evidence of this sort of synthetic fraud – "on at least 20 occasions, AI deepfakes had been used to trick facial recognition programs by imitating the people pictured on [stolen] identity cards."
Having any significant number of photos and videos of your face publicly available on the internet, as most of us do, is an increasingly serious liability. Consumers, so far, show no signs of changing their behavior, putting the onus on businesses to effectively identify and block increasingly sophisticated identity theft.
The Bottom Line
The challenges of reliably verifying digital identities online are here to stay until a comprehensive, nationwide solution emerges – an effort likely to span a decade or more. In the meantime, we find ourselves firmly in an era of questioning the humanity and intentions behind every digital interaction. While effective, current AI-proof identity verification methods are highly onerous for users and businesses. As fraudsters ramp up their use of generative AI, this friction will only intensify, fueling demand for more streamlined and holistic identity solutions.
Proposed remedies like Worldcoin, a startup co-founded by OpenAI's Sam Altman that aims to become the global choice for zero-knowledge-proof biometric authentication, may or may not ultimately prove adequate. Regardless, one thing is clear: governments and businesses must unite around a practical framework well before 2035 to avert a full-blown crisis of trust in online identity.
Until then, financial institutions must double down on maximizing the efficiency of their fraud operations and on educating clients, at every high-risk point of contact, about the fraud modalities they may encounter. Continuous monitoring and improvement are what will keep FIs one step ahead in the escalating battle against AI-powered fraud.