The rapid advancement of artificial intelligence, particularly generative AI tools like ChatGPT, is ushering in a new era of financial fraud. Malicious actors are leveraging these technologies to craft more convincing and sophisticated scams, putting consumers and institutions at greater risk than ever.
Tools like ChatGPT and WormGPT let fraudsters craft more convincing phishing messages, overcome language barriers, and personalize attacks at scale. One security group recorded a 135% increase in novel social engineering attacks bearing the hallmarks of AI tools in the first two months of 2023. Another report estimates a 3,000% increase in deepfake-based fraud between 2022 and 2023. Taken together, these indicators suggest that AI enables threat actors to craft sophisticated, targeted attacks at speed and scale.
But it's not just phishing. Fraudsters are exploiting AI across multiple fronts:
- Deepfakes and Identity Fraud: In February 2024, a finance worker was tricked into transferring $25 million to scammers who used a deepfake of the company's CFO on a video call. On at least 20 other occasions, AI-generated deepfakes were used to fool facial recognition systems with fake identity cards. Some documented cases even predate the current wave of generative AI.
- Website and App Spoofing: AI tools dramatically accelerate website and app development, and attackers are using that speed to spin up spoofed banking websites and mobile apps that are nearly indistinguishable from the real thing. 50% of phishing links now lead to these fake data-harvesting sites. Consumers continue to be targeted at a slightly higher rate than corporations, underscoring the importance of educating users and helping them protect themselves.
- Corporate Fraud – False Vendor Invoicing: Corporate bank accounts have also become attractive targets for a new breed of sophisticated social engineering scams, with fraudsters exploiting the complex structures and communication gaps within large organizations. The attack pattern is straightforward but effective: fraudsters spoof an email or website to impersonate a trusted supplier and request payment to a fraudulent account. Even when banks flag these transactions and call clients to confirm, clients often approve them because they believe the request is legitimate. The fraud frequently isn't discovered until the real vendor inquires about the missed payment.
- FakeGPT and Data Theft: Even AI tools themselves aren't safe – a wave of nearly 1,000 fake ChatGPT websites created to harvest user data has been identified and blocked by Meta from being shared on its platforms.
Large FIs and Vendors Fight Back with AI
Financial institutions aren't sitting idle. Over the past year, incumbents like Mastercard, Jack Henry, FIS, PSCU, and others made considerable investments in AI and partnered with or acquired startups to bolster their defenses.
Large players like JPMorgan Chase are tuning custom generative AI models to analyze unstructured data, such as emails and wire instructions, for fraud signals. These models are trained on the bank's own data to learn to flag suspicious activity.
These models are additive, integrating into existing fraud prevention frameworks. AI is woven in at various points in the payment lifecycle – validating account details upfront, flagging anomalies in-flight, and scoring transactions for fraud risk after the fact. Layering cutting-edge AI on top of proven rule-based engines and other controls creates a formidable, end-to-end defense.
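As a rough illustration of that layered approach – not any particular vendor's implementation; the fields, rules, weights, and thresholds below are entirely hypothetical – a payment might first pass through deterministic rules and then receive a model-based risk score that decides its disposition:

```python
from dataclasses import dataclass

@dataclass
class Payment:
    amount: float
    payee_account: str
    payee_is_new: bool
    country: str
    hour_of_day: int

# Layer 1: deterministic rules (the proven, existing controls)
def rule_flags(p: Payment) -> list:
    flags = []
    if p.payee_is_new and p.amount > 10_000:
        flags.append("large_payment_to_new_payee")
    if p.country not in {"US", "CA"}:
        flags.append("unusual_destination_country")
    if p.hour_of_day < 6 or p.hour_of_day > 22:
        flags.append("off_hours_initiation")
    return flags

# Layer 2: a model-based risk score (placeholder for a trained ML/AI model)
def model_risk_score(p: Payment) -> float:
    score = 0.1
    if p.payee_is_new:
        score += 0.3
    if p.amount > 50_000:
        score += 0.4
    return min(score, 1.0)

# Combine layers: rules can force review outright; the model score
# decides what happens to everything the rules don't catch.
def disposition(p: Payment) -> str:
    flags = rule_flags(p)
    score = model_risk_score(p)
    if "large_payment_to_new_payee" in flags or score > 0.8:
        return "hold_for_manual_review"
    if flags or score > 0.5:
        return "step_up_verification"
    return "approve"

print(disposition(Payment(75_000, "ACME-9912", True, "US", 23)))
```

In production, the rule layer would be the institution's existing controls and the scoring function would call a trained model; the point is that the AI layer augments rather than replaces what's already in place.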
Payment leaders like Visa, Mastercard, Verafin, and others are taking the same approach, folding new AI capabilities into their fraud offerings. As a result, most FIs benefit from AI-enhanced fraud protection without needing to undertake custom AI fraud projects themselves.
The Biometric Authentication Arms Race
Biometrics vendors broadly agree that in the current arms race between voice synthesis and synthetic voice detection, biometrics alone are inadequate authentication for something as high-risk as an ACH transaction.
Additional layers of probabilistic scoring that factor in behavioral and contextual data, such as time of call, phone number, and device/connection information, are vital to selecting the appropriate level of challenge (e.g., password, MFA, out-of-wallet questions).
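A minimal sketch of what such risk-based challenge selection could look like follows; the signals, weights, and thresholds are illustrative assumptions, not any vendor's actual logic:

```python
from dataclasses import dataclass

@dataclass
class CallContext:
    voice_match_score: float      # biometric match score, 0-1
    synthetic_voice_score: float  # anti-spoofing output, 0-1 (higher = more likely synthetic)
    known_device: bool
    ani_matches_profile: bool     # caller ID matches the number on file
    off_hours: bool

def combined_risk(ctx: CallContext) -> float:
    """Blend biometric outputs with behavioral/contextual signals (weights are illustrative)."""
    risk = ctx.synthetic_voice_score * 0.5 + (1 - ctx.voice_match_score) * 0.2
    risk += 0.1 if not ctx.known_device else 0.0
    risk += 0.1 if not ctx.ani_matches_profile else 0.0
    risk += 0.1 if ctx.off_hours else 0.0
    return min(risk, 1.0)

def select_challenge(ctx: CallContext) -> str:
    """Map the blended risk score to an escalating challenge level."""
    risk = combined_risk(ctx)
    if risk < 0.2:
        return "none"                 # biometrics plus context is enough
    if risk < 0.5:
        return "password_or_pin"
    if risk < 0.8:
        return "mfa_push"
    return "out_of_wallet_questions"  # highest-friction challenge

print(select_challenge(CallContext(0.91, 0.35, known_device=False,
                                   ani_matches_profile=True, off_hours=False)))
```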
No biometric solution will ever be perfect – accuracy in identifying synthetic audio can vary substantially, from below 80% to over 98%, depending on how well the anti-spoofing model's training data matches the live audio it encounters in production.
It will be up to individual institutions to determine their risk appetite and set their own requirements for accuracy and specificity. As with automation broadly, the performance target for adoption often isn't perfection – it's simply doing better than humans.
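For concreteness, here is how those metrics are computed from an evaluation set of labeled calls; the counts below are made-up numbers purely for illustration:

```python
# "Positive" = synthetic audio. Hypothetical evaluation counts:
true_positives = 460     # synthetic calls correctly flagged
false_negatives = 40     # synthetic calls missed
true_negatives = 9_700   # genuine calls correctly passed
false_positives = 300    # genuine callers wrongly challenged

accuracy = (true_positives + true_negatives) / (
    true_positives + false_negatives + true_negatives + false_positives
)
specificity = true_negatives / (true_negatives + false_positives)  # genuine callers not falsely flagged
sensitivity = true_positives / (true_positives + false_negatives)  # synthetic calls caught

print(f"accuracy={accuracy:.3f} specificity={specificity:.3f} sensitivity={sensitivity:.3f}")
# An institution's risk appetite sets the bar, e.g. demanding high sensitivity
# on high-risk calls while keeping false positives (friction) tolerable.
```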
Know Your Customer (KYC) AI Fraud Is Already Here
Digital identity proofing, authentication, and Know Your Customer (KYC) checks are becoming harder as advanced AI image and video generation and editing capabilities are democratized.
Generative AI hobbyists online are pioneering prompts, fine-tuned models, and sophisticated combinations of open- and closed-source technologies to produce highly plausible KYC-style verification photos and videos. [Image: AI-generated verification photos – neither person pictured is real. Source: Reddit, u/tabula_rasa22 and u/Bizzyguy]
Moreover, these results come from part-time enthusiasts working with consumer-attainable technical setups; we should expect companies, hacker groups, organized criminals, and state actors with far deeper pockets to fool KYC checks far more quickly and reliably.
There's already plenty of evidence of this sort of synthetic fraud – as noted above, "on at least 20 occasions, AI deepfakes had been used to trick facial recognition programs by imitating the people pictured on [stolen] identity cards."
Having any significant number of photos and videos of your face publicly available on the internet, as most of us do, is an increasingly serious liability. Consumers, so far, show no signs of changing their behavior, putting the onus on businesses to effectively identify and block ever more sophisticated identity theft.
The Bottom Line
The challenges of reliably verifying digital identities online are here to stay until a comprehensive, nationwide solution emerges – an effort likely to span a decade or more. In the meantime, we find ourselves firmly in an era of questioning the humanity and intentions behind every digital interaction. While effective, current AI-proof identity verification methods are highly onerous for users and businesses. As fraudsters ramp up their use of generative AI, this friction will only intensify, fueling demand for more streamlined and holistic identity solutions.
Proposed remedies like Worldcoin, a startup co-founded by OpenAI's Sam Altman that aims to become the global choice for zero-knowledge-proof biometric authentication, may or may not ultimately prove adequate. Regardless, one thing is clear: governments and businesses must unite around a practical framework well before 2035 to avert a full-blown crisis of trust in online identity.
Until then, financial institutions must double down on maximizing the efficiency of their fraud operations and on educating clients, at every high-risk point of contact, about the fraud modalities they may encounter. Staying one step ahead in the escalating battle against AI-powered fraud will require continuous monitoring and improvement.