Emerging Risks and Opportunities of Generative AI for Banks: A Singapore Perspective



Project MindForge is a collaboration among financial industry participants, including the Monetary Authority of Singapore (MAS), Citi, DBS, HSBC, OCBC, Standard Chartered, The Association of Banks in Singapore (ABS) and UOB, and technology partners Accenture, Google and Microsoft. The project builds on the work of the Veritas initiative to examine the impact and potential risks of Generative artificial intelligence (AI) on financial services. The MindForge consortium developed this whitepaper, which sets out a private sector perspective on the responsible use of Generative AI. The consortium also experimented with potential industry use cases and will conduct further work to explore their value and viability.

Generative AI encompasses diverse techniques for creating content, spanning text, images, and other audio-visual elements. It is driven by large machine learning models known as foundation models (FMs), a subset of which, called large language models (LLMs), are trained on trillions of words for various natural language tasks. The adoption of Generative AI across industries, including the banking sector, has significant potential to improve customer satisfaction, enhance employee experience while augmenting productivity, reduce costs, enhance decision-making, and mitigate risks. This paper draws primarily on consortium members’ experience with language-based Generative AI systems (supported by LLMs), the earliest form of Generative AI to gain widespread adoption among financial institutions (FIs).

The advancement of Generative AI has opened up new commercial, social, and technological opportunities. However, this advancement is clearly double-edged. The whitepaper aims to examine risks posed by Generative AI systems that go beyond those of predictive, “traditional” AI and how such risks extend beyond the scope of current Fairness, Ethics, Accountability and Transparency (FEAT) Principles, published in 2018.

This paper enumerates these risks across seven dimensions: Fairness and Bias, Ethics and Impact, Accountability and Governance, Transparency and Explainability, Legal and Regulatory, Monitoring and Stability, and Cyber and Data Security.

The technological considerations for adopting Generative AI effectively, securely, and responsibly are also crucial. To support this goal, the paper introduces a platform-agnostic reference architecture. It highlights principal components and underlines the significance of guardrails, continuous monitoring, and human involvement throughout the development and deployment lifecycle.

Developing industry use cases can help the industry better understand this technology’s impact. The use cases are intended to provide examples of how risk assessment can be conducted for Generative AI solutions. As the industry’s use of Generative AI solutions evolves, these solutions can better position FIs to thrive in a rapidly changing environment.

Generative AI is a relatively new technology, and its full range of risks and impacts is not yet fully understood, nor is it yet clear how it can be fully controlled or deployed responsibly. But it is a promising technology that, when used correctly, can improve commercial, social, and governance outcomes for FIs around the world.


Download the report here.