FINMA Issues Guidance on AI Use in Financial Institutions
On 18 December 2024, FINMA released Guidance 08/2024 on governance and risk management for artificial intelligence (AI) applications in supervised financial institutions. This new guidance outlines best practices for identifying and managing risks associated with AI. It builds on FINMA's earlier guidance in its 2023 Risk Monitor, which briefly addressed the use of AI in the financial sector.
Published: 19 December 2024
Authors: Ariel Ben Hattar (Partner), Valérie Menoud (Partner, Co-head of Investigations, Head of ESG), Isy Isaac Sakkal (Associate)
Expertise: Banking and Finance; Financial Services and Fintech
In 2023, the Swiss Financial Market Supervisory Authority ("FINMA") included brief guidance on the use of AI in the financial sector in its Risk Monitor report. The 2023 guidance focused on common-sense principles, such as ensuring the reliability of AI tools and the understandability of their outputs.
The new FINMA AI Guidance builds on this to address two overarching themes:
- Properly identifying the specific risks associated with AI applications;
- Implementing adequate measures to effectively manage those risks.
While the new AI Guidance carries forward the principles outlined in earlier materials, FINMA now offers greater granularity regarding its supervisory expectations.
Scope of the AI Guidance
Although banks appear to be the primary focus of the AI Guidance, FINMA does not explicitly specify the types of financial institutions to which it applies. As a result, the AI Guidance can be understood to apply broadly to all supervised financial institutions.
The AI Guidance does not differentiate between AI applications, appearing to cover both business-critical applications (that may have been developed specifically for an institution) and those integrated into widely available software or devices. However, FINMA's expectations are likely to be most relevant for applications that are core to the business.
AI-specific risk identification
In general, FINMA does not see AI as an inherently high-risk application. Instead, the nature and extent of the risks depend on the specific AI applications used and how they are integrated into the institution's activities and processes.
The AI Guidance highlights the general risks associated with the use of AI tools, which can be summarized as follows:
- Quality, Accuracy, and Bias: The main risks of AI applications relate to the quality, accuracy, and potential bias of their outputs. It is now widely recognized that AI may generate responses that are inaccurate, incomplete, or even fabricated.
- Explainability: AI outputs often lack explainability, as users generally do not know the underlying logic or mechanisms behind AI-generated answers. This creates challenges when, for instance, institutions need to explain these outcomes to third parties, such as clients, auditors, or FINMA itself.
- Data Protection: Data protection remains a concern, particularly as users may input confidential information into tools that are not designed for secure handling of such data. However, FINMA observes that supervised institutions are generally well aware of data protection risks, and may even overemphasize them compared to the other risks.
- Operational risks: FINMA also identifies operational risks as a key concern, particularly model risks (such as lack of robustness or stability), as well as IT and cyber risks.
Managing AI risks
Financial institutions are permitted to use AI in their activities and processes. They must, however, effectively and adequately manage the associated risks, focusing on three key points:
- Governance: Supervised institutions should maintain an inventory of AI tools used within the organization, classify their risks, and set out the measures taken to address and mitigate them (a minimal illustrative sketch of such an inventory follows this list). The governance framework should also assign clear responsibilities and accountabilities for the development, implementation, monitoring, and use of AI.
- Training and Documentation: Staff should receive adequate training on the proper use of AI applications, supported by readily available documentation to guide their actions.
- Testing and Monitoring: Supervised financial institutions must regularly test, evaluate, and continuously monitor the performance and accuracy of their AI tools. This includes conducting routine checks on the quality and reliability of outputs, documenting processes and findings, and, where necessary, commissioning an independent review.
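To make these expectations more concrete, the sketch below shows one possible way to record an AI application inventory with risk classifications, assigned responsibilities, mitigation measures, and review dates. It is purely illustrative: FINMA does not prescribe any particular format, tooling, or thresholds, and all names, fields, and values here are hypothetical.

```python
# Purely illustrative sketch of an AI application inventory with risk
# classification, along the lines of what FINMA's AI Guidance expects
# supervised institutions to maintain. All names, categories, and the
# review interval are hypothetical assumptions, not FINMA requirements.
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class RiskLevel(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIApplication:
    name: str                       # e.g. an internal drafting assistant
    business_use: str               # activity or process the tool supports
    risk_level: RiskLevel           # the institution's own classification
    mitigation_measures: list[str]  # controls addressing identified risks
    responsible_owner: str          # clear accountability, per the Guidance
    last_reviewed: date             # supports regular testing and monitoring
    findings: list[str] = field(default_factory=list)


# Example inventory with a single hypothetical entry.
inventory = [
    AIApplication(
        name="client-email-drafting-assistant",
        business_use="Drafting client correspondence",
        risk_level=RiskLevel.MEDIUM,
        mitigation_measures=[
            "Human review of every output before sending",
            "No client-identifying data entered into prompts",
        ],
        responsible_owner="Head of Client Services",
        last_reviewed=date(2024, 12, 1),
    ),
]

# Routine check: flag entries whose last review is older than a chosen
# interval (180 days here is an arbitrary, illustrative threshold).
REVIEW_INTERVAL_DAYS = 180
for app in inventory:
    if (date.today() - app.last_reviewed).days > REVIEW_INTERVAL_DAYS:
        print(f"Review overdue: {app.name} (owner: {app.responsible_owner})")
```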
Focus on third-party service providers
As stressed in the AI Guidance, the principles outlined by FINMA should also be applied when dealing with third-party service providers, since supervised financial institutions often rely on external providers for AI tools, data processing, or infrastructure. Applying the principles in this context can be achieved through clear contractual obligations, regular monitoring, and formal acknowledgment by such providers of their responsibilities regarding data protection, AI output accuracy, and confidentiality.
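As one way of operationalizing these third-party oversight points, the following sketch records whether a provider has formally acknowledged each responsibility named in the Guidance and when it was last monitored. Again, this is a minimal illustration under assumed field names; neither the structure nor the provider shown is prescribed or real.

```python
# Purely illustrative: tracking FINMA-relevant oversight points for an
# external AI provider. All field names and the provider are hypothetical.
from dataclasses import dataclass
from datetime import date


@dataclass
class ProviderOversight:
    provider: str
    service: str                          # e.g. hosted model, data processing
    contract_covers_data_protection: bool
    contract_covers_output_accuracy: bool
    contract_covers_confidentiality: bool
    last_monitoring_review: date

    def open_gaps(self) -> list[str]:
        """Return contractual points not yet formally acknowledged."""
        gaps = []
        if not self.contract_covers_data_protection:
            gaps.append("data protection")
        if not self.contract_covers_output_accuracy:
            gaps.append("AI output accuracy")
        if not self.contract_covers_confidentiality:
            gaps.append("confidentiality")
        return gaps


vendor = ProviderOversight(
    provider="ExampleCloud AG",           # hypothetical provider
    service="Hosted language model",
    contract_covers_data_protection=True,
    contract_covers_output_accuracy=False,
    contract_covers_confidentiality=True,
    last_monitoring_review=date(2024, 11, 15),
)
print("Gaps to address:", vendor.open_gaps())
```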
Outlook
The AI Guidance offers valuable insight into FINMA's evolving expectations and practices. As AI becomes increasingly integrated into everyday software and applications, the limits of the AI Guidance are likely to become evident, and financial institutions will need to strike a balance between meeting regulatory expectations, striving for increased efficiency, and ensuring business continuity.
Please do not hesitate to contact us if you have any further questions on this subject.
Legal Note: The information contained in this Smart Insight newsletter is of general nature and does not constitute legal advice.
Contact us
Ariel Ben Hattar
Partner, Geneva | ariel.benhattar@lenzstaehelin.com | Tel: +41 58 450 70 00

Valérie Menoud
Partner, Co-head of Investigations, Head of ESG, Geneva | valerie.menoud@lenzstaehelin.com | Tel: +41 58 450 70 00

Isy Isaac Sakkal
Associate, Geneva | isy.sakkal@lenzstaehelin.com | Tel: +41 58 450 70 00

Fedor Poskriakov
Deputy Managing Partner, Head of Fintech, Geneva | fedor.poskriakov@lenzstaehelin.com | Tel: +41 58 450 70 00

Lukas Morscher
Partner, Head of Technology and Outsourcing, Zurich | lukas.morscher@lenzstaehelin.com | Tel: +41 58 450 80 00

Patrick Schärli
Partner, Zurich | patrick.schaerli@lenzstaehelin.com | Tel: +41 58 450 80 00