
How the EU AI Act will impact financial institutions


The EU AI Act, which came into force on 1 August 2024, is a significant legislative development for financial institutions (FIs). Its provisions start to apply over a period of six to 36 months, and most of the Act’s rules will be enforceable after two years. The Act has major implications, especially for FIs using high-risk AI systems such as credit scoring models and risk assessment tools, writes David Salloum, Group Legal Director at Eastnets.

AI’s potential in trade finance, risk management and in the fight against financial crime is vast. However, effective regulation is needed to ensure trust and safeguard fundamental rights. The EU AI Act is a key step in this direction, ensuring AI evolves safely and ethically.

The key elements

The Act applies to AI systems marketed, used, or impacting individuals within the EU, including systems from non-EU countries. It takes a risk-based approach, categorising AI applications by risk level and imposing stringent requirements on high-risk use cases drawn from a list of eight areas, including critical infrastructure and law enforcement.

High-risk AI systems must meet strict data quality and governance standards, ensuring software isn’t trained on poor or incomplete data that could lead to adverse outcomes. Documentation, traceability and transparency are required so that AI can be audited and its decisions explained. Human oversight is mandated to allow for intervention in AI decisions, which is crucial for applications such as medical diagnoses or legal cases.
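To make the traceability requirement concrete, the sketch below shows one way an FI might record each automated decision alongside the human review step. It is a minimal illustration, not a prescription from the Act: the score_application function, the model version string and the field names are all hypothetical.

import json
import time
import uuid

# Hypothetical identifier for the deployed model; real systems would tie
# this to their model registry.
MODEL_VERSION = "credit-risk-v2.3"

def score_application(features: dict) -> float:
    """Placeholder standing in for a real model inference call."""
    return 0.72  # dummy score for illustration

def audited_decision(features: dict, reviewer: str) -> dict:
    """Score an application and write an append-only audit record."""
    score = score_application(features)
    record = {
        "decision_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "model_version": MODEL_VERSION,
        "inputs": features,
        "model_output": score,
        "human_reviewer": reviewer,  # the person empowered to intervene
        "final_decision": "approve" if score >= 0.5 else "refer",
    }
    # Append-only log so every automated decision can be audited later.
    with open("decision_audit.log", "a") as log:
        log.write(json.dumps(record) + "\n")
    return record

if __name__ == "__main__":
    print(audited_decision({"income": 52000, "debt_ratio": 0.31}, reviewer="j.doe"))

The point is not the specific fields but the pattern: every automated output is stored with enough context for a human to reconstruct, explain and, if necessary, overturn it.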

Additionally, the Act mandates the use of synthetic or anonymised data when training AI models to detect and correct bias. Synthetic data, artificially created rather than collected from real-world events, helps ensure AI systems are fair and accurate. Where these methods reduce bias, AI system providers must use them.
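As a rough illustration of what synthetic data can mean in practice, the sketch below balances an under-represented group by generating jittered copies of its records before training. The group labels, field names and jitter range are invented for illustration; production pipelines would use purpose-built synthetic data generators.

import random

random.seed(42)

# Imbalanced training set: group "B" is heavily under-represented.
records = (
    [{"group": "A", "income": random.gauss(50_000, 8_000)} for _ in range(900)]
    + [{"group": "B", "income": random.gauss(48_000, 8_000)} for _ in range(100)]
)

def synthesise(minority: list[dict], target: int) -> list[dict]:
    """Create synthetic minority records by lightly perturbing real ones."""
    synthetic = []
    while len(synthetic) < target:
        base = random.choice(minority)
        synthetic.append({
            "group": base["group"],
            "income": base["income"] * random.uniform(0.95, 1.05),  # small jitter
        })
    return synthetic

minority = [r for r in records if r["group"] == "B"]
majority = [r for r in records if r["group"] == "A"]
balanced = majority + minority + synthesise(minority, len(majority) - len(minority))

print(len(majority), len(minority), "->", len(balanced), "balanced records")

A model trained on the balanced set is less likely to learn the original imbalance as if it were a genuine signal about the minority group.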

Consequences for financial institutions


For FIs, adhering to these requirements is crucial, and doing so involves some important actions. Firstly, if one doesn’t already exist, an AI governance body must be created. This body needs to be accountable for all use of AI throughout the business and should carefully evaluate systems to determine their classification under the Act.

For example, AI systems used for credit scoring or creditworthiness evaluation will likely be classified as high-risk given their significant impact on individuals’ access to financial resources. Conversely, the European Parliament proposes that AI systems deployed for detecting fraud in financial services should not be considered high-risk under the Act. This nuance allows FIs to continue innovating in fraud detection while ensuring compliance with the Act’s overarching principles.

Secondly, this group should assure human oversight of all systems to ensure AI is being used transparently, ethically, fairly and securely, and with privacy controls. For example, when the software interacts with humans, it must disclose that it is AI. Furthermore, AI-generated content must be clearly labelled to avoid misrepresentation. Users must also be informed when AI uses emotion recognition or biometric categorisation.

Finally, any governance body should also ensure the high quality of input data, to minimise bias and prevent discriminatory outcomes. It must also be fully transparent in its own work, recording its decisions and making them available when needed.
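In practice, ensuring high-quality input data usually starts with automated checks before any record reaches a model. The sketch below is one possible shape for such checks; the field names and plausibility thresholds are assumptions made for illustration and are not drawn from the Act.

def check_data_quality(rows: list[dict], required: list[str]) -> list[str]:
    """Return human-readable findings; an empty list means checks passed."""
    findings = []
    for i, row in enumerate(rows):
        # Flag records with missing mandatory fields.
        missing = [f for f in required if row.get(f) is None]
        if missing:
            findings.append(f"row {i}: missing fields {missing}")
        # Flag values outside a plausible range (threshold is illustrative).
        income = row.get("income")
        if income is not None and not (0 <= income <= 10_000_000):
            findings.append(f"row {i}: income {income} out of plausible range")
    return findings

sample = [
    {"income": 52_000, "age": 41},
    {"income": None, "age": 35},   # should be flagged: missing value
    {"income": -5, "age": 29},     # should be flagged: implausible value
]

for finding in check_data_quality(sample, required=["income", "age"]):
    print(finding)

Recording such findings, and the governance body’s response to them, is itself part of the audit trail the Act envisages.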

Failure to meet the Act’s requirements could result in substantial fines, reaching up to €35 million or 7 per cent of global annual turnover for the most serious infringements. The European Artificial Intelligence Board will oversee implementation, with national authorities responsible for supervision and enforcement. There will also be a public database of high-risk AI systems to ensure transparency and oversight.

Long-term implications

General-purpose AI systems, such as the large models behind image and speech recognition, are subject to their own tier of obligations, with further detail to come in codes of practice and implementing measures. The AI Act’s flexible framework allows it to evolve with AI technology, ensuring that new developments are adequately regulated.

FIs must stay informed about these changes, as they will affect how AI can be used for efficiency and innovation in areas like document processing, risk management, and compliance.

Balancing innovation and regulation

While over-regulation can hinder innovation, the EU AI Act provides a sorely needed set of guardrails for a rapidly developing technology. On one hand, it aims to ensure AI systems are safe, protecting people’s rights. On the other, it’ll enhance innovation and investment thanks to the transparency and openness the Act demands. Developers will be better able to collaborate and improve the software.

However, the regulation must remain flexible and move at the same speed as the technology. Policymakers must consider feedback and collaborate with industry, especially sectors such as financial services where the rules have the greatest impact. And while ethical considerations must always be at the heart of the regulation, this must be balanced with positive encouragement for the technology.

As the countdown to compliance begins, FIs should be positive. There’s undoubtedly work to do in terms of compliance, but the outcome will be better for everyone: developers, businesses and consumers.

* The information in this blog post is for general informational purposes only and does not constitute legal advice. The content is based on the author’s interpretation and may not reflect the most current legal developments. Readers should seek professional legal advice for specific concerns. All rights to referenced third-party content remain with their respective authors.


