Conflicting with compliance: How the finance sector is struggling to implement GenAI

By James Sherlow, Systems Engineering Director, EMEA, for Cequence Security

Generative AI has multiple applications in the finance sector, from product development to customer relations to marketing and sales. In fact, McKinsey estimates that GenAI has the potential to improve operating profits in the finance sector by 9-15%, while in banking, productivity gains could amount to 3-5% of annual revenues. It suggests AI tools could be used to boost customer liaison, with AI integrated through APIs to give real-time recommendations either autonomously or via customer service representatives (CSRs); to inform decision making and expedite day-to-day tasks for employees; and to decrease risk by monitoring for fraud or elevated instances of risk.

However, McKinsey also warns of inhibitors to adoption in the sector. These include the level of regulation applicable to different processes, which is fairly low with respect to customer relations but high for credit risk scoring, for example, and the data used, some of which is in the public domain but some of which comprises personally identifiable information (PII), which is highly sensitive. If these issues can be overcome, the analyst estimates GenAI could more than double the application of expertise to decision making, planning and creative tasks, from 25% to 56%.

Hamstrung by regulations

Clearly the business use cases are there, but unlike other sectors, finance is currently hamstrung by regulations that have yet to catch up with the AI revolution. Unlike the EU, which approved the AI Act in March, the UK has no plans to regulate the technology; instead, it intends to promote guidelines. The UK Financial Authorities, comprising the Bank of England, the PRA and the FCA, have been canvassing the market on what these should look like since October 2022, publishing the results (FS2/23 – AI and Machine Learning) a year later. These showed a strong demand for harmonisation with the likes of the AI Act as well as NIST's AI Risk Management Framework.

Right now, this means financial providers find themselves in regulatory limbo. If we look at cyber security, for instance, firms are being presented with GenAI-enabled solutions that can assist them with incident detection and response, but they are not able to utilise that functionality because it contravenes compliance requirements. Decision-making processes are a key example: these must be made by a human, tracked and audited and, while the decision-making capabilities of GenAI may be on a par, accountability remains a grey area. Consequently, many firms are erring on the side of caution and choosing to deactivate AI functionality within their security solutions.

In fact, a recent EY report found one in five financial services leaders did not think their organisation was well-positioned to take advantage of the potential benefits. Much will depend on how easily the technology can be integrated into existing frameworks, although the Banking on AI: Financial Services Harnesses Generative AI for Security and Service report cautions this may take three to five years. That's a long time in the world of GenAI, which has already come a long way since it burst onto the market 18 months ago.

Malicious AI

The danger is that while the sector drags its heels, threat actors will show no such qualms and will be quick to capitalise on the technology to launch attacks. FS2/23 makes the point that GenAI could see an increase in money laundering and fraud through the use of deepfakes, for instance, and sophisticated phishing campaigns. We are still in the learning phase, but the expectation is that high-volume, self-learning attacks will emerge by the end of the year. These will be on an unprecedented scale because GenAI will lower the technological barrier to entry, enabling new threat actors to enter the fray.

Simply blocking attacks will no longer be a sufficient defence, because GenAI will quickly regroup or pivot the attack automatically without the need for additional resources. If we look at how APIs, which are intrinsic to customer services and open banking, are currently protected, the emphasis has been on detection and blocking, but going forward we can expect deceptive response to play a far greater role. This frustrates and exhausts the resources of the attacker, making attacks cost-prohibitive to sustain.
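To make the idea concrete, here is a minimal, purely illustrative sketch of the difference between blocking and deceptive response. All names (is_suspicious, fake_record, the account IDs) are invented for this example and do not reflect any specific API security product: a flagged request receives plausible but fabricated data instead of an error, so the attacker keeps spending resources without learning they have been detected.

```python
import random

# Hypothetical set of real accounts; a request probing any other ID is
# treated as an enumeration attempt in this toy detector.
KNOWN_ACCOUNTS = {1001, 1002, 1003}

def is_suspicious(request: dict) -> bool:
    # Placeholder detection logic: flag probes for unknown account IDs.
    return request.get("account_id") not in KNOWN_ACCOUNTS

def fake_record(account_id: int) -> dict:
    # Fabricated data, seeded by account ID so repeat probes get
    # consistent answers and the deception is harder to spot.
    rng = random.Random(account_id)
    return {"account_id": account_id,
            "balance": round(rng.uniform(10, 5000), 2),
            "status": "active"}

def handle(request: dict, real_lookup) -> dict:
    if is_suspicious(request):
        # Deceptive response: serve plausible fake data instead of a 403,
        # exhausting the attacker rather than tipping them off.
        return fake_record(request["account_id"])
    return real_lookup(request["account_id"])
```

A real deployment would sit this logic in an API gateway and use far richer detection signals; the sketch only shows the response strategy itself.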

So how should the sector look to embrace AI given the current state of regulatory flux? As with any digital transformation project, there needs to be oversight of how AI will be used within the business, with a working group tasked to develop an AI framework. In addition to NIST, there are a number of security standards that can help here, such as ISO 22989, ISO 23053, ISO/IEC 23894 and ISO 42001, and the oversight framework set out in DORA (the Digital Operational Resilience Act) for third-party providers. The framework should encompass the tools the firm has with AI functionality, their possible application in terms of use cases, and the risks associated with these, as well as how it will mitigate any areas of high risk.

Taking a proactive approach makes far more sense than suspending the use of AI which effectively places firms at the mercy of adversaries who will be quick to take advantage of the technology. These are tumultuous times and we can certainly expect AI to rewrite the rulebook when it comes to attack and defence. But firms must get to grips with how they can integrate the technology rather than electing to switch it off and continue as usual.



Navigating the risks of return-to-office mandates

Luke Shipley, CEO and co-founder at Zinc

Is this the year we say goodbye to hybrid work and return to the office full time? Some major companies certainly seem to think so. In recent months, organisations like Santander, PwC, Amazon, Lloyds Bank, JP Morgan, and WPP have all rolled out Return-to-Office (RTO) mandates.

However, the debate over RTO is far from settled, with supporters emphasising workplace culture and critics arguing that it has minimal impact on productivity and puts increased costs on workers. What’s clear, however, is that shifting back to full-time office work after years of hybrid arrangements will bring challenges for businesses. Employees accustomed to flexible work may face stress, long commutes, and rigid schedules, potentially affecting job satisfaction and productivity. Meanwhile, HR teams could struggle with talent retention and recruitment, as workplace expectations evolve.

Striking the right balance between the benefits of in-person collaboration and the risks of employee dissatisfaction will be key for companies navigating this transition.

Impact of RTO policies on the modern workplace

The transition to RTO mandates is already facing considerable backlash. At WPP, the requirement for four days in the office sparked widespread resistance, with thousands of employees signing a petition urging leadership to reverse the policy. Elsewhere, blanket RTO mandates are fuelling resentment among employees globally. Such rigid policies risk damaging morale and could drive employees toward more flexible opportunities elsewhere.

In today’s workplace, employee satisfaction is closely tied to flexibility. The level of risk executives are willing to take with RTO policies shapes company culture, influencing recruitment, retention, and overall workplace dynamics. Leaders who remain adaptable and responsive to employee needs foster a more positive environment, improving both hiring and long-term engagement. In contrast, strict mandates can create barriers, deterring potential candidates and limiting accessibility.

Business leaders must adapt their approach to risk, recognising that sustainable growth depends not on avoiding risk altogether but on taking the right risks. Executives who prioritise transparency and adaptability in hiring and workforce management lay the groundwork for innovation and resilience. This mindset is crucial not just for growth but for navigating digital transformation and future challenges.

The shift toward flexibility isn’t just a market-driven trend—it’s also a response to evolving regulations and an increasingly hybrid, global workforce. As the nature of work changes, organisations must prioritise their most valuable asset: their people.

Growing strain on HR teams

For HR teams, managing the transition back to the office means addressing employee resistance, differing preferences, and varying levels of readiness for in-person work. This challenge extends beyond existing employees—it also impacts talent acquisition. In a competitive job market, companies may struggle to attract candidates willing to commit to full-time office work. To remain competitive, HR will need to highlight the benefits of in-office culture, career growth opportunities, and a supportive work environment.

Additionally, HR professionals must navigate logistical challenges, including office space management, safety protocols, and hybrid work structures, all while preserving company culture and employee engagement. Striking the right balance between business objectives and employee satisfaction will require a mix of empathy, flexibility, and strategic planning.

Leveraging technology to drive the future of work

Smaller businesses naturally foster a sense of shared success and trust, but for larger organisations, clearly communicating RTO expectations from the outset of the hiring process is essential. Finding the right candidates has become one of the biggest challenges for large enterprises, making strong hiring practices and the right technology critical. By integrating these elements into recruitment systems, hiring managers can ensure that candidates are aligned with RTO mandates from day one.

As hybrid and remote work models evolve, HR teams must also navigate the growing influence of AI in job applications, which can make it harder to assess a candidate’s true fit and transparency. To address these challenges, businesses should leverage technology and automation to streamline hiring. Automating background checks, for example, can cut manual workload in half, improving both efficiency and accuracy.

By embracing automation, larger organisations can replicate the trust and loyalty often seen in smaller companies while building a committed workforce aligned with long-term business goals. This approach allows HR teams to assess candidates with confidence, maintain efficiency, and reduce disruptions for both employers and job seekers.



Cultivating an Intuitive and Effective Security Culture

John Trest, Chief Learning Officer, VIPRE Security Group, explains how businesses can cultivate a security culture by overcoming security training barriers.

Research shows that human behaviour remains the leading driver of data breaches, whether through stolen credentials, phishing attacks, misuse, or simple, inadvertent mistakes by well-meaning individuals. Under pressure, employees become susceptible to manipulation, and when confronted with the complexity of day-to-day work, human vulnerability becomes evident. Bad actors actively seek out and exploit that vulnerability.

Cybersecurity culture

According to behavioural science, employees’ behaviour in the workplace is greatly influenced by the organisation’s existing culture. Whether it’s the successful implementation of technical controls, the likelihood of individuals reporting security incidents, or instances of accidental or malicious insider activity – they are all intricately linked to the cybersecurity culture.

Cybersecurity awareness is the first step to strengthening the human firewall

Good cybersecurity awareness training helps to embed a cybersecurity-conscious culture and security-first attitude in the workplace. Employees and organisations can establish stronger protective measures by enhancing cybersecurity consciousness. Rather than being seen as the “weakest link”, the human should be regarded as the critical defensive barrier for organisations.

With organisations facing increasing risks from social engineering attacks that manipulate behaviour and exploit human error, cybersecurity awareness and training equip employees with the ability to protect digital data from unauthorised access, and respond effectively to threats, countering intentional and unintentional security compromises.

Barriers to effective cybersecurity awareness training

Some key barriers typically impede the successful delivery of cybersecurity awareness training programs, jeopardising organisations’ security posture.

Poor employee engagement – When employees see training as boring or disconnected from their work, engagement suffers. Many security awareness programs compound this issue through complexity, excessive length, lack of relevant scenarios and imagery, and poor accessibility, creating barriers to participation and knowledge retention.

Lack of knowledge retention – Studies demonstrate that significant portions of newly learned information fade from memory rapidly, particularly when cybersecurity training occurs only annually. Such long gaps between training sessions create dangerous knowledge gaps that expose organisations to security vulnerabilities.

Poor motivation – Cybersecurity training must inform and inspire employees to become active security defenders. Explaining the "why" effectively helps drive behavioural change through extrinsic motivation. However, addressing the "why me" question is crucial for developing more compelling intrinsic motivation. This personal context helps employees understand not just cybersecurity's general importance, but its specific relevance to their workplace roles, themselves, and their loved ones. Intrinsic motivation is essential for lasting behavioural change and cultivating a truly security-conscious organisational culture. When employees personally connect with security practices, they transform from passive rule-followers to engaged protectors of company assets.

Content obsolescence – The dynamic evolution of security threats challenges cybersecurity awareness efforts, as today's effective training may prove inadequate against tomorrow's threats. When content becomes outdated, employees remain vulnerable to new attack techniques. Organisations must embrace continuous learning by implementing dynamic training programs that integrate seamlessly into employee workflows, incorporating emerging threats. By maintaining current, relevant training materials, organisations can ensure employees remain prepared to recognise and respond to evolving cybersecurity threats, ultimately preserving a robust security posture.

Undue focus on regulatory compliance – While regulatory compliance matters, it shouldn't be the primary metric for cybersecurity awareness and training. Instead, programs should be evaluated by quantifiable improvements: reduced phishing clicks, increased reporting rates, fewer intrusions and breaches, decreased damage, and lower overall cyber risk.

Overcoming security awareness and training barriers

Adopting a more positive approach to security awareness, and viewing employees as positive assets that contribute to a cybersafe workplace, must be the goal. It helps to foster a positive culture in which employees feel more confident about their own actions when handling potential threats.

The cornerstone of engaging security training is twofold: convenience and relevance. When employees can easily access content that directly applies to their roles, they’re naturally more inclined to participate fully and retain critical information. This approach transforms security awareness from an obligatory task into a valuable, integrated part of the workday.

Some thoughts to help overcome the security awareness and training barriers:

Regular reinforcement and knowledge retention focus – To address retention challenges, implement training solutions featuring current, applicable, and engaging content. Incorporate evidence-based learning techniques, including interactive elements, straightforward messaging, and real-life scenarios, to enhance retention of information and best practices.

Critical to long-term knowledge retention is the adoption of microlearning approaches. These methodologies divide security education into brief, compelling modules delivered frequently throughout the year. This short-form content helps keep the focus and maintains the attention of employees. By reinforcing key concepts shortly after initial exposure, microlearning creates multiple touchpoints that combat natural memory decay. In doing so, organisations transform cybersecurity awareness from an annual chore into an ongoing, sustainable practice that strengthens organisational security posture.

Gamification – Gamification can be a very useful tool in motivating learners to pay attention to and engage with a learning experience. Because of how the human brain is stimulated by rewards in games, knowledge gained from this type of learning experience is retained for longer.

Though even simple gamification, such as points or leaderboards, can have a positive impact on learners, gamification elements work best when they are interwoven with and inherent to the content. For instance, in a gamified cybersecurity scenario, players could assume the role of a white-hat hacker tasked with crafting convincing scam emails to fool unsuspecting staff. Players learn how cybercriminals operate and how to protect themselves by spending time in a hacker's shoes, and the narrative built around the game's mechanics makes the interactivities more relevant and compelling.

Role-specific training – Far too often, a broad-brush approach to cybersecurity training is used, making it less relevant for some staff. Targeted training designed for different workplace roles is more effective. For example, a company's risk and compliance team needs cyber training that takes into account the demands of regulatory bodies, finance teams need to know about business email compromise, security teams must be trained on advances in threat detection, and end users must understand how to spot a phishing email or deepfake. Training tailored specifically for business leaders is equally important.

Quality training – The quality of the training experience can make all the difference. Security awareness training is a specialist discipline, drawing on adult learning trends, technology, and best practices. Specialist security trainers and instructional designers know how to get employees to engage with a program, based on an appreciation of employees' intrinsic and extrinsic motivations alongside their role-specific requirements.

Cybersecurity culture refers to the collective mindset and behaviours of an organisation’s employees toward protecting information assets. It involves integrating security practices into daily activities, fostering awareness and vigilance, and encouraging proactive reporting of incidents. It also reflects the unspoken beliefs towards security in the organisation. A strong cybersecurity culture is important to help reduce risks by making security a shared responsibility.



Empowering banks to protect consumers: The impact of the APP Fraud mandate

Source: Finance Derivative

Thara Brooks, Market Specialist, Fraud, Financial Crime & Compliance at FIS

On 7 October last year, the APP (Authorised Push Payment) fraud reimbursement mandate came into effect in the UK. The mandate aims to protect consumers, but it has already come under immense scrutiny, receiving both support and criticism across market sectors. But what does it mean for banks and their customers?

Fraud has become a growing concern for the UK banking system and its consumers. According to the ICAEW, the total value of UK fraud stood at £2.3bn in 2023, a 104% increase since 2022, with estimates that the evolution of AI will lead to even bigger challenges. As the IMF points out, greater digitalisation brings greater vulnerabilities, at a time when half of UK consumers are already “obsessed” with checking their banking apps and balances.

These concerns have contributed to the implementation of the PSR’s (Payment Systems Regulator) APP fraud mandate, which was implemented to reimburse the victims of APP fraud. APP fraud occurs when somebody is tricked into authorising a payment from their own bank account. Unlike more traditional fraud, such as payments made from a stolen bank card, APP fraud previously fell outside the scope of conventional fraud protection, as the transaction is technically “authorised” by the victim.

The £85,000 Debate: A controversial adjustment

The regulatory framework for the APP fraud mandate was initially introduced in May 2022, with the maximum level of mandatory reimbursement originally set at £415,000 per claim. However, when the mandate came into effect, the PSR significantly reduced the maximum reimbursement value to £85,000, causing widespread controversy.

According to the PSR, the updated cap will see over 99% of claims (by volume) being covered, with an October review highlighting just 18 instances of people being scammed for more than £415,000, and 411 instances of more than £85,000, from a total of over 250,000 cases throughout 2023. “Almost all high value scams are made up of multiple smaller transactions,” the PSR explains, “reducing the effectiveness of transaction limits as a tool to manage exposure.”
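The PSR's "over 99%" figure can be sanity-checked with simple arithmetic from the numbers it cites: 411 claims above £85,000 and 18 above £415,000, out of roughly 250,000 cases in 2023.

```python
# Figures as cited from the PSR's October review.
total_cases = 250_000   # total APP fraud cases in 2023 (approximate)
over_85k = 411          # claims exceeding the £85,000 cap
over_415k = 18          # claims exceeding the original £415,000 cap

share_covered_85k = 1 - over_85k / total_cases
share_covered_415k = 1 - over_415k / total_cases

print(f"Fully covered at the £85,000 cap:  {share_covered_85k:.2%}")
print(f"Fully covered at the £415,000 cap: {share_covered_415k:.2%}")
```

The £85,000 cap still covers about 99.84% of claims by volume, so the practical difference between the two caps affects only a few hundred (mostly very high-value) cases a year.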

The reduced cap makes a big difference on multiple levels. For financial institutions and payment service providers (PSPs), the lower limit means they are less exposed to high-value claims, which has the potential to lower compliance and operational costs, while the £85,000 cap aligns with the Financial Services Compensation Scheme (FSCS) threshold, creating broader consistency across financial redress schemes.

There are naturally downsides to the lower limit, with critics highlighting significant financial shortfalls for victims of high-value fraud. The lower cap may reduce public confidence in the financial system’s ability to protect against fraud, particularly for those handling large sums of money, while small businesses, many of which often deal with large transaction amounts, may find the cap insufficient to cover losses.

The impact on PSPs and their customers

With PSPs responsible for APP fraud reimbursement, institutions need to take the next step when it comes to fraud detection and prevention to minimise exposure to claims within the £85,000 cap. Customers of all types are likely to benefit from more robust security as a result.

The Financial Conduct Authority's (FCA's) recommendations include strengthening controls during onboarding, improving transaction monitoring to detect suspicious activity, and optimising reporting mechanisms to enable swift action. Such controls are largely in line with the PSR's own recommendations, with the regulator setting out a number of steps in its final policy statement in December 2023 to mitigate APP scam risks.

These include setting appropriate transaction limits, improving ‘know your customer’ controls, strengthening transaction-monitoring systems and stopping or freezing payments that PSPs consider to be suspicious for further investigation.
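As an illustration only, the kind of transaction-limit and velocity monitoring described above might be sketched as follows. The thresholds and rule names here are invented for the example; the PSR does not prescribe specific values, and real systems combine many more signals.

```python
from datetime import datetime, timedelta

# Illustrative thresholds, not regulatory values.
SINGLE_PAYMENT_LIMIT = 10_000        # flag any single payment above this (in £)
VELOCITY_WINDOW = timedelta(hours=24)
VELOCITY_LIMIT = 15_000              # flag if 24h outgoing total would exceed this (in £)

def assess_payment(amount, history, now):
    """Return the list of rules a proposed payment trips; empty means allow.

    history: list of (timestamp, amount) pairs for the account's recent
    outgoing payments.
    """
    flags = []
    if amount > SINGLE_PAYMENT_LIMIT:
        flags.append("single_payment_limit")
    recent_total = sum(a for t, a in history if now - t <= VELOCITY_WINDOW)
    if recent_total + amount > VELOCITY_LIMIT:
        flags.append("velocity_limit")
    return flags
```

In the regime the PSR describes, a flagged payment would be held for further investigation rather than silently rejected, giving the bank time to contact the customer before funds leave the account.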

All these measures will invariably improve consumer experience, increasing customers’ confidence to transact online safely, as well as giving them peace of mind with quicker reimbursement in case things go awry.

Going beyond the APP fraud mandate

If the PSR’s mandate can steer financial institutions towards implementing more robust security practices, it can only be a good thing. It’s not the only tool that’s shaping the financial security landscape, however.

In October 2024, the UK government introduced new legislation granting banks enhanced powers to combat fraud. An optional £100 excess on fraud claims has been introduced to encourage customer caution and combat moral hazard, while the Treasury has strengthened prevention by giving high street banks new powers to delay and investigate payments suspected of being fraudulent by three days. The extended processing time for suspicious payments may delay legitimate transactions, making transparent communication and robust safeguards essential to maintaining consumer trust.

Further collaborative efforts, such as Meta’s partnership with UK banks through the Fraud Intelligence Reciprocal Exchange (FIRE) program, can also aid the fight against fraud. Thanks to direct intelligence sharing between financial institutions and the world’s biggest social media platform, FIRE enhances the detection and removal of fraudulent accounts across platforms such as Facebook and Instagram, not only disrupting scam operations, but also fostering a safer digital environment for users. The early stages of the pilot have led to action against thousands of scammer-operated accounts, with approximately 20,000 accounts removed based on shared data.

Additionally, education and awareness are crucial measures to protect consumers against APP fraud. Several high street banks have upgraded their banking channels to share timely content about the signs of potential scams, with increased public awareness helping consumers identify and avoid fraudulent schemes.

Improvements in policing strategies are also significantly contributing to the mitigation of APP fraud. Specialised fraud units within police forces have enhanced the precision and efficiency of investigations. The City of London Police and the National Fraud Intelligence Bureau are upgrading the technology behind Action Fraud, providing victims with a more accessible and customer-friendly service. Collaborative efforts among police, banks, and telecommunications firms, exemplified by the work of the Dedicated Card and Payment Crime Unit (DCPCU), have enabled the swift exchange of information, facilitating the prompt apprehension of scammers.

How AI is expected to change the landscape

The coming months will be critical in assessing these changes, as institutions, businesses and the UK government work together to shape security against fraud in the ever-changing world of finance.

While fraud is a terrifyingly big business, it’s only likely to increase with the evolution of AI, making it even more critical that such changes are effective. According to PwC, “There is a real risk that hard-fought improvements in fraud defences could be undone if the right measures are not put in place to defend against fraud in an AI-enabled world.”

Chatbots can be used as part of phishing scams, for example, and AI systems can already read text and reproduce sampled voices, making it possible to send messages from “relatives” whose voices have been spoofed in a similar manner to deepfakes.

Along with other innovations, tools and collaborations, however, the APP fraud mandate, UK legislation and FIRE can all contribute towards redressing such technological advances. Together, this can give financial institutions a much-needed boost in the fight against fraud, providing a more secure future for customers.


Copyright © 2021 Futures Parity.