
Bank of England warns against AI ‘complacency’

by Wire Tech

UK financial services regulator wants to increase understanding of AI’s benefits and how firms are managing the risks it poses, as take-up grows and use cases increase

The Bank of England is launching a consortium through which private sector finance organisations and artificial intelligence (AI) experts can share knowledge on the technology’s benefits to the sector and on how its risks can be managed.

Speaking at an international financial conference in Hong Kong, Sarah Breeden, deputy governor for financial stability at the Bank of England, said regulation must stay ahead of AI take-up.

She said the consortium will “help us understand more deeply not only AI’s potential benefits, but also the different approaches firms are taking to managing those risks which could amount to financial stability risks.”

The regulator will then try to spread best practices, and decide when regulatory guidelines and guardrails are needed. “The power and use of AI is growing fast, and we mustn’t be complacent,” said Breeden. “We know from past experience with technological innovation in other sectors of the economy that it’s hard to retrospectively address risks once usage reaches systemic scale.”

The Bank of England and the Financial Conduct Authority (FCA) have been tracking how financial services firms in the UK are using AI and machine learning. The results of their latest survey, cited by Breeden, covered 120 firms. It found that three-quarters are already using some form of AI in their operations.

She said this included all the large UK and international banks, insurers and asset managers that responded, and represented a 53% increase on the same survey in 2022.

Breeden said 17% of all use cases involve foundation models, including large language models such as OpenAI’s GPT-4. She added that some of the most common early use cases for AI have been “fairly low-risk from a financial stability standpoint”. For example, the Bank of England survey found that 41% of firms are using AI to optimise internal processes, while 26% are using it to enhance customer support.

Next phase of take-up

But Breeden said that many firms are now using AI to mitigate the external risks they face from cyber attacks (37%), fraud (33%) and money laundering (20%).
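
To make the fraud and money-laundering use cases more concrete, the sketch below shows one simplified way a firm might flag unusual transactions with an unsupervised anomaly detector. It is purely illustrative and not drawn from the survey or any firm’s actual systems: the feature names, synthetic data and thresholds are assumptions for this example, and real screening pipelines rely on far richer data, model governance and human review.

```python
# Illustrative only: a toy anomaly detector for transaction screening.
# Feature names and data are invented for this sketch; real fraud and
# AML systems use far richer features, governance and analyst review.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic "normal" transactions: [amount_gbp, hour_of_day, txns_last_24h]
normal = np.column_stack([
    rng.lognormal(mean=3.5, sigma=0.6, size=1000),  # typical amounts
    rng.integers(8, 22, size=1000),                 # daytime activity
    rng.poisson(3, size=1000),                      # modest frequency
])

# Fit an unsupervised model on historical activity
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal)

# Score new transactions; -1 marks candidates for analyst review
new_txns = np.array([
    [45.0, 14, 2],     # ordinary purchase
    [9500.0, 3, 40],   # large amount at 3am, unusually frequent
])
print(model.predict(new_txns))  # e.g. [ 1 -1 ]
```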

According to Breeden, a significant evolution from a financial stability perspective is the emergence of new use cases. For example, she said the survey revealed that 16% of respondents are using AI for credit risk assessment, and 19% are planning to do so over the next three years. A total of 11% are using it for algorithmic trading, with a further 9% planning to do so in the next three years.
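
As a rough illustration of what an AI-based credit risk assessment might look like in its simplest form, the sketch below fits a logistic regression to synthetic borrower data and outputs a default probability. The features, data and applicant values are assumptions made for this example, not anything described by Breeden or the survey; production credit models are subject to far more data, validation and regulatory scrutiny.

```python
# Illustrative only: a toy credit-risk scorer trained on synthetic data.
# Features, labels and the example applicant are invented for this sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic borrowers: [income_k, debt_to_income, missed_payments]
X = np.column_stack([
    rng.normal(40, 12, size=2000),
    rng.uniform(0.05, 0.8, size=2000),
    rng.poisson(0.5, size=2000),
])

# Synthetic default labels, loosely tied to the features
logits = -2.0 - 0.03 * X[:, 0] + 4.0 * X[:, 1] + 0.8 * X[:, 2]
y = (rng.uniform(size=2000) < 1 / (1 + np.exp(-logits))).astype(int)

# Fit a simple, interpretable scoring model
model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

# Probability of default for a hypothetical applicant
applicant = np.array([[35.0, 0.6, 2]])
p_default = model.predict_proba(applicant)[0, 1]
print(f"Estimated default probability: {p_default:.2f}")
```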

“AI is expected to bring considerable potential benefits for productivity and growth in the financial sector and the rest of the economy,” said Breeden. “But for the financial sector to harness those benefits, we, as financial regulators, must have policy frameworks that are designed to manage any risks to financial stability that come with them. Economic stability underpins growth and prosperity. It would be self-defeating to allow AI to undermine it.

“We don’t want to be left in the position of choosing between, on the one hand, letting a powerful new technology threaten financial stability, and on the other, preventing its use and losing out on growth and innovation – simply because we don’t have the policy frameworks to enable its safe adoption,” she said.

Read more about GenAI

  • Many organisations are testing out uses for generative AI, but how are they getting on? We speak to five early adopters to find out the lessons learned so far.
  • Employees are using GenAI primarily for spelling and grammar checking, while business chiefs are using it for data analysis.
  • Deloitte survey shows business and IT leaders are optimistic about GenAI, but academic researchers warn of an AI training time bomb.

Breeden pointed to two issues to keep “a watchful eye on” with regard to generative AI in financial services.

She said that at a micro level, which concerns the safety and soundness of individual firms, regulators should continue to assure themselves that technology-agnostic regulatory frameworks are sufficient to mitigate the financial stability risks from AI as models become ever more powerful and adoption increases.

“We need to be focused in particular on ensuring that managers of financial firms are able to understand and manage what their AI models are doing as they evolve autonomously beneath their feet,” said Breeden.

“We should keep our regulatory perimeters under review, should the financial system become more dependent on shared AI technology and infrastructure systems,” she added.

Originally published at ECT News
