Conflicting with compliance: How the finance sector is struggling to implement GenAI

By James Sherlow, Systems Engineering Director, EMEA, at Cequence Security

Generative AI has multiple applications in the finance sector, from product development to customer relations to marketing and sales. In fact, McKinsey estimates that GenAI has the potential to improve operating profits in the finance sector by between 9 and 15%, while in the banking sector productivity gains could be worth between 3 and 5% of annual revenues. It suggests AI tools could be used to boost customer liaison, with AI integrated through APIs to give real-time recommendations either autonomously or via customer service representatives (CSRs); to inform decision making and expedite day-to-day tasks for employees; and to decrease risk by monitoring for fraud or elevated instances of risk.
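
To make the customer-liaison use case concrete, here is a minimal sketch of how a real-time recommendation might be fetched from a GenAI API and surfaced to a CSR. The OpenAI client, the model name and the prompt are illustrative assumptions rather than a vendor recommendation, and the human-in-the-loop caveat anticipates the compliance constraints discussed below.

```python
# Illustrative sketch only: a CSR-facing helper that asks a GenAI API
# for a next-best-action suggestion. The client library, model choice
# and prompt are assumptions, not part of any cited report or product.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def recommend_next_action(customer_summary: str) -> str:
    """Return a short, human-reviewable suggestion for the CSR."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical choice for the sketch
        messages=[
            {"role": "system",
             "content": "Suggest one next-best action for a bank CSR. "
                        "Keep it short; a human makes the final decision."},
            {"role": "user", "content": customer_summary},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(recommend_next_action(
        "Customer holds a current account, travels monthly, and asked "
        "about foreign transaction fees twice this quarter."
    ))
```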

However, McKinsey also warns of inhibitors to adoption in the sector. These include the level of regulation applicable to different processes, which is fairly low with respect to customer relations but high for credit risk scoring, for example, and the data used, some of which is in the public domain but some of which comprises highly sensitive personally identifiable information (PII). If these issues can be overcome, the analyst estimates GenAI could more than double the application of expertise to decision making, planning and creative tasks, from 25% of tasks without the technology to 56% with it.

Hamstrung by regulations

Clearly the business use cases are there but, unlike other sectors, finance is currently being hamstrung by regulations that have yet to catch up with the AI revolution. Unlike the EU, which approved the AI Act in March, the UK has no plans to regulate the technology; instead, it intends to promote guidelines. The UK financial authorities, comprising the Bank of England, the Prudential Regulation Authority (PRA) and the Financial Conduct Authority (FCA), have been canvassing the market on what these should look like since October 2022, publishing the results (FS2/23 – AI and Machine Learning) a year later. These showed strong demand for harmonisation with the likes of the AI Act and NIST's AI Risk Management Framework.

Right now, this means financial providers find themselves in regulatory limbo. If we look at cyber security, for instance, firms are being presented with GenAI-enabled solutions that can assist them with incident detection and response, but they're not able to utilise that functionality because it contravenes compliance requirements. Decision-making processes are a key example: decisions must be made by a human, tracked and audited and, while the decision-making capabilities of GenAI may be on a par, accountability remains a grey area. Consequently, many firms are erring on the side of caution and choosing to deactivate AI functionality within their security solutions.

In fact, a recent EY report found one in five financial services leaders did not think their organisation was well positioned to take advantage of the potential benefits. Much will depend on how easily the technology can be integrated into existing frameworks, although the Banking on AI: Financial Services Harnesses Generative AI for Security and Service report cautions this may take three to five years. That's a long time in the world of GenAI, which has already come a long way since it burst onto the market 18 months ago.

Malicious AI

The danger is that while the sector drags its heels, threat actors will show no such qualms and will be quick to capitalise on the technology to launch attacks. FS2/23 makes the point that GenAI could drive an increase in money laundering and fraud through the use of deepfakes and sophisticated phishing campaigns, for instance. We're still in the learning phase, but as the months tick by the expectation is that we will see high-volume, self-learning attacks by the end of the year. These will be on an unprecedented scale because GenAI will lower the technological barrier to entry, enabling new threat actors to enter the fray.

Simply blocking attacks will no longer be a sufficient form of defence, because GenAI will quickly regroup or pivot the attack automatically without the need to employ additional resources. If we look at how APIs, which are intrinsic to customer services and open banking, are currently protected, the emphasis has been on detection and blocking, but going forward we can expect deceptive response to play a far greater role. Rather than alerting the attacker with a block, a deceptive response frustrates and exhausts their resources, making the attack cost-prohibitive to sustain.
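
As a rough illustration of the idea, the sketch below shows a deceptive response at the API layer: rather than returning a block that tells the attacker to pivot, a suspicious caller is served slow, plausible-looking fake data. Flask, the is_suspicious() heuristic and the fabricated payload are all assumptions for the sake of the sketch, not any particular vendor's implementation.

```python
# A minimal sketch of deceptive response for suspicious API callers.
# Everything here is illustrative: real deployments would score traffic
# behaviourally, not via a demo header.
import random
import time

from flask import Flask, jsonify, request

app = Flask(__name__)

def is_suspicious(req) -> bool:
    # Placeholder heuristic standing in for a behavioural score
    # (client fingerprint, request rate, call-sequence analysis).
    return req.headers.get("X-Demo-Suspicious") == "1"

def deceptive_payload() -> dict:
    # Plausible but fabricated account data: the attacker's tooling
    # keeps "succeeding", so it keeps burning resources here.
    return {
        "account_id": str(random.randint(10**9, 10**10 - 1)),
        "balance": round(random.uniform(10, 5000), 2),
        "currency": "GBP",
    }

@app.route("/api/v1/accounts/<account_id>")
def get_account(account_id: str):
    if is_suspicious(request):
        # Tarpit: add latency to exhaust the attacker's time budget,
        # then return fake data instead of a revealing 403 block.
        time.sleep(random.uniform(2, 8))
        return jsonify(deceptive_payload()), 200
    # Normal path would fetch the real record; elided in this sketch.
    return jsonify({"account_id": account_id, "balance": 0.0,
                    "currency": "GBP"}), 200

if __name__ == "__main__":
    app.run()
```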

So how should the sector look to embrace AI given the current state of regulatory flux? As with any digital transformation project, there needs to be oversight of how AI will be used within the business, with a working group tasked to develop an AI framework. In addition to NIST, there are a number of standards that can help here, such as ISO/IEC 22989, ISO/IEC 23053, ISO/IEC 23894 and ISO/IEC 42001, as well as the oversight framework for third-party providers set out in DORA (the Digital Operational Resilience Act). The framework should encompass the tools the firm has with AI functionality, their possible application in terms of use cases, and the risks associated with these, as well as how the firm will mitigate any areas of high risk.

Taking a proactive approach makes far more sense than suspending the use of AI, which effectively places firms at the mercy of adversaries who will be quick to take advantage of the technology. These are tumultuous times, and we can certainly expect AI to rewrite the rulebook when it comes to attack and defence. But firms must get to grips with how they can integrate the technology rather than electing to switch it off and continue as usual.
