SRM Blog - The Bottom Line

Examining the EU’s AI Act & Its Broader Implications

Written by Simon Rose | Apr 30, 2024 4:09:00 PM


Aspects of artificial intelligence have been gradually introduced into business settings for years, starting with robotic process automation and continuing into fraud detection systems, CRM tools, and other varieties of knowledge management.

Given the exponential advancements of large language models and widely available generative AI tools like ChatGPT over the past 18 months, however, policy concerns over the potential misuse of AI technology have intensified just as rapidly.

In March, the European Parliament approved the AI Act – the most substantive legislative attempt to date to establish guardrails and rules of engagement for the technology.

The EU has a track record of taking regulatory action on technology developments before the United States and other North American countries (think data privacy and open banking). And while the AI Act does not explicitly address financial services, it’s not much of a stretch to see how its broad strokes could become the model for the financial sector in the US and beyond.

As we consult with clients globally on the risks and rewards of artificial intelligence, we felt it worthwhile to share some of the finer points of the EU legislation with our audience.

Looking for Clues

Regulators invariably find themselves playing catch-up. Cast into a reactive role by nature, they rarely engage with industry players early enough to shape efficient, constructive frameworks around objectives that, at their core, enjoy consensus support. This dynamic can be frustrating for financial institutions, but it doesn’t alter their need to operate within its constraints.

It’s reasonable to assume the AI Act will serve as an EU “umbrella policy,” with the agencies overseeing each sector applying its themes to add specificity for their focus areas. If history is any guide, the US rulebook won’t land far afield from the EU’s.

For example, the AI Act’s ban on “social scoring systems that could lead to discrimination” certainly sounds applicable to credit scoring models – and, less directly, to concerns about bias being inadvertently programmed into algorithms or “learned” from large data sets. Likewise, its controls on remote biometric identification of individuals in public settings, using techniques like facial recognition, may impose boundaries on envisioned enhancements to some fraud detection systems.
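To make the “learned bias” concern concrete, here is a minimal sketch of the kind of disparate-impact check a model risk team might run on a credit model’s decisions. The data, column names, and the four-fifths threshold (a common US fair-lending heuristic, not an AI Act requirement) are all illustrative assumptions.

```python
# Hypothetical illustration only: a minimal disparate-impact check on a
# credit model's approve/deny decisions, using the "four-fifths rule"
# heuristic from US fair-lending analysis. Data and names are invented.
import pandas as pd

# Toy data standing in for a model's output: one row per applicant,
# with a protected-class attribute and the model's decision.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   1,   0,   1,   0,   0,   0],
})

# Approval rate per group.
rates = decisions.groupby("group")["approved"].mean()

# Disparate impact ratio: lowest group approval rate over the highest.
ratio = rates.min() / rates.max()
print(f"approval rates:\n{rates}\nimpact ratio: {ratio:.2f}")

# A ratio below 0.8 is a common red flag that the model may have
# "learned" a discriminatory pattern from its training data.
if ratio < 0.8:
    print("potential adverse impact - review model features and data")
```

A check like this only surfaces a symptom; diagnosing whether the disparity comes from the training data, a proxy variable, or the model itself requires deeper review.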

The Inescapable Human Factor

Fundamentally, the AI Act’s declaration that “AI should be a human-centric technology” signals a premise where innovators and regulators can hopefully find common ground. For centuries, it has been human nature to automate, generating efficiencies in routine (and increasingly complex) tasks. It’s hard to escape the human factor at the center, whether in deciding how to develop and apply these advancements or in managing the behaviors and temptations they invite. Look no further than fraud, where AI opens a new front on which perpetrators and defenders scramble to stay ahead of each other.

One of the most significant risks of AI is how rapidly it can destroy a firm’s reputation. That danger alone should be cause enough to maintain a human-centric approach to implementing these tools.

The Bottom Line

Business adoption of artificial intelligence has been more gradual and long-running than recent headlines suggest. Nonetheless, the rapid ascent of generative AI and widely available tools like ChatGPT has created a clear inflection point, with regulators taking notice and stepping into action.

Financial institutions should continue to explore opportunities to deploy AI technology judiciously while staying abreast of legislative and regulatory developments. A partner like SRM can help decode the implications of a watershed measure like the EU’s AI Act. In a future blog, we’ll elaborate on the critical interplay between human and machine factors in AI business processes.