
The pros, cons, and best practices when it comes to using generative AI

Colin Redbond, SVP Product Management, SS&C Blue Prism

We’ve all heard about the growing interest in generative AI, especially following the release of ChatGPT. This artificial intelligence (AI) large language model, and others like it, have the potential to revolutionize business operations and accelerate digital transformation journeys. Organizations that get ahead of this trend stand to gain a significant competitive advantage, and generative AI shows no signs of disappearing anytime soon.

While AI projects have historically been long, expensive, and complex, generative AI has the potential to reduce time to value for digital transformation initiatives and, thanks to its ease of use and learning capabilities, to make advanced technologies accessible to a greater cross section of people.

How generative AI supports digital transformation

Digital transformation initiatives went into hyperdrive following the pandemic, but most organizations have yet to maximize business outcomes with their current automation plans. Intelligent automation combines technologies such as generative AI, robotic process automation (RPA), and business process management to reengineer processes and drive business outcomes.

So, what’s the value of generative AI? Its range of capabilities and its accessibility are unprecedented, marking an exciting time for the automation space and for any sector standing to benefit from advancements in natural language processing, including healthcare, finance, and customer service. However, generative AI is still limited to its own domain knowledge: it can only draw on what it has already learned.

When you combine these unique capabilities with the power of intelligent automation, the impact on digitalization is extraordinary. Generative AI can automate tasks that previously only humans could perform, such as generating new marketing copy, designing product prototypes, or creating personalized content for each customer. Thanks to its ease of use, it can also suggest automations and enable a greater cross section of workers to initiate their development. Automations can then be designed within designated governance parameters and best practices.

By automating these tasks, employees can reduce their workload, supporting work-life balance, while also increasing their efficiency, reducing company-wide costs, and improving the accuracy and quality of their output. Employees work with generative AI to deliver superior results.

When it comes to creative work, humans add color and empathy, which technology can only try to mimic. Generative AI gives them a starting point and helps with idea generation. Human workers contribute their uniquely human abilities – reading between the lines and emotional empathy – for which AI is no substitute.

For many people, when they think “generative AI,” they think about written content or even AI art, but the use cases for generative AI relate to the day-to-day operations of most office workers.

For example, automated emails can exhibit a greater degree of personalization and improve resolution times. For more complex or high-level emails, generative AI can draft an adequate, personalized email with all the needed information, which a human can then review and tweak if needed.
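As a minimal sketch of that workflow – assuming the OpenAI Python client purely for illustration, with a hypothetical function name and prompt – the snippet below drafts a reply that lands in a review queue rather than going straight to the customer. Any comparable model provider would work the same way.

```python
# Illustrative sketch: drafting a personalized reply for human review.
# The OpenAI client is an assumption here, not the author's chosen stack.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def draft_reply(customer_name: str, issue_summary: str, account_notes: str) -> str:
    """Generate a draft email that a human agent reviews before sending."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "You draft polite, concise customer-service emails. "
                        "Flag anything you are unsure about for human review."},
            {"role": "user",
             "content": f"Customer: {customer_name}\n"
                        f"Issue: {issue_summary}\n"
                        f"Account notes: {account_notes}\n"
                        "Draft a personalized reply."},
        ],
    )
    return response.choices[0].message.content

# The draft goes into a review queue, not straight to the customer.
```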

A similar process can unfold in contact centers: generative AI bots can progress customer communications significantly before needing to loop in a human employee – if one needs to be looped in at all for simple inquiries. This ensures human employees’ time is used effectively and as many customers as possible are serviced, especially since generative AI bots can work around the clock. Error handling also improves, with error messages providing enough context to enable immediate resolution.

Intelligent document processing (IDP) solutions, which use a combination of optical character recognition and AI to extract information that is locked away in documents, are enhanced by generative AI capabilities. This is especially important for financial and healthcare services. Generative AI’s understanding and learning features better equip it to contend with unstructured data, an area that has been a weak point for IDP solutions, which have been confined to structured and semi-structured documents at best.
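To make that pattern concrete, here is a minimal sketch under stated assumptions: pytesseract and Pillow handle the OCR step, and a large language model turns the raw text into structured fields. The field names and model choice are illustrative, not any particular product’s API.

```python
# Minimal IDP sketch: OCR the page, then ask an LLM to pull structured
# fields out of the unstructured text. Field names are hypothetical.
import json
import pytesseract
from PIL import Image
from openai import OpenAI

client = OpenAI()

def extract_invoice_fields(scan_path: str) -> dict:
    raw_text = pytesseract.image_to_string(Image.open(scan_path))
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{
            "role": "user",
            "content": "Return JSON with keys vendor, invoice_number, "
                       "total_amount, and due_date from this document text. "
                       "Use null for anything you cannot find.\n\n" + raw_text,
        }],
        response_format={"type": "json_object"},  # ask for parseable output
    )
    return json.loads(response.choices[0].message.content)
```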

Generative AI can also improve the overall performance of intelligent automation systems by allowing them to adapt and learn over time: they can analyze the results of previous tasks and use that data to generate better content or output.

The need for governance and risk management to unlock the potential of AI

However, businesses need to weigh certain considerations before adding generative AI to their toolkit to accelerate digital transformation, since its outputs can have a significant impact on a company’s reputation, revenue, and legal liabilities. A clearly defined corporate governance and risk management strategy, along with a set of operating principles, needs to be developed. Done right, generative AI can support an automation strategy that is more innovative, cost-effective, and productive than anything we have seen before.

Governance and risk management considerations are important when using generative AI for several reasons:

  • Help ensure AI-generated content does not violate intellectual property, privacy, or other laws
  • Make sure use of generative AI aligns with your organization’s ethical principles
  • Maintain your organization’s quality standards and confirm outputs are consistent with expectations
  • Ensure the right information is used for the right purposes to protect sensitive information and privacy

How to develop a governance and risk management strategy

A clearly defined strategy and concomitant operating principles maximize the benefits of generative AI while mitigating any fallout. Developing a strategy involves several key steps:

  1. Define the scope: This includes the types of content you will be generating, the data you will be using, and the intended use cases for the content. This helps with identifying the specific risks and governance requirements that apply to your initiatives.
  2. Identify risks: These may include legal risks such as infringing on intellectual property, ethical risks such as bias in generated content, and security risks such as the potential for data breaches. You may need to engage with legal and compliance experts to identify all potential risks.
  3. Establish governance requirements: Based on the risks you’ve identified, establish governance requirements that will mitigate those risks. These may include policies and procedures for data handling, content review, and compliance with regulations.
  4. Develop a risk management plan: Outline how your organization will mitigate and manage risks. This may include risk assessments, monitoring, and regular reviews of governance practices, as well as processes for identifying and addressing any issues that arise.
  5. Train employees: It’s important to train employees on governance and risk management practices. This may include training on data handling, content review, and compliance with regulations. Make sure all employees who will be working with generative AI understand the risks and their responsibilities for mitigating those risks.
  6. Monitor and review: Monitor and review your governance and risk management practices on an ongoing basis. This will help you identify any gaps or issues that need to be addressed and ensure that your practices remain effective over time.

Like all advanced technologies, generative AI’s impact is positive – so long as you take the steps necessary to ensure you’re using it the right way. There’s no turning back the train: generative AI is here to stay – full steam ahead. The best approach is to embrace it with care and work with providers on decisions around implementation. The possibilities behind generative AI are exciting – so let’s work to get it right and make it a force for good.


Adapting compliance in a fragmented regulatory world

Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.

As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.

But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.

Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.

The new regulatory compliance reality

By October of this year, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply to over 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.

Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VOP), which adds a crucial layer of fraud protection for instant payments. This directly aligns with the regulator’s focus on instant payment security and compliance.

The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.

But why is it so difficult for teams to adapt?

The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.

At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage is leaving compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.

In this changing environment, then, the question becomes: how can institutions adapt?

Staying compliant in a shifting landscape

The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.

This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.
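As a rough sketch of what surfacing risks as they emerge can look like, the snippet below evaluates each event against a set of rules the moment it arrives, instead of waiting for a periodic batch review. The event shape, rules, and thresholds are all illustrative assumptions, not any vendor’s schema.

```python
# Sketch of continuous monitoring: score each event on arrival and raise
# an alert immediately, rather than in a weekly or monthly review cycle.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, Iterable, Iterator

@dataclass
class Event:
    timestamp: datetime
    entity_id: str
    kind: str        # e.g. "payment", "login", "report_filing"
    amount: float

@dataclass
class Alert:
    rule: str
    event: Event

# Hypothetical rules; real programs would load these from configuration.
RULES: dict[str, Callable[[Event], bool]] = {
    "large_payment": lambda e: e.kind == "payment" and e.amount > 10_000,
    "after_hours": lambda e: not (8 <= e.timestamp.hour < 18),
}

def monitor(stream: Iterable[Event]) -> Iterator[Alert]:
    """Yield alerts as events arrive instead of batching them for later."""
    for event in stream:
        for name, rule in RULES.items():
            if rule(event):
                yield Alert(rule=name, event=event)
```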

But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.
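A minimal sketch of that consolidation, with invented field names, might normalize alerts arriving from siloed systems in different shapes into one case record, so reporting and governance can run over a single store:

```python
# Sketch of interoperability: adapters map each silo's alert format into
# one shared Case shape. All field names here are assumptions.
from dataclasses import dataclass, field

@dataclass
class Case:
    case_id: str
    source_system: str
    entity_id: str
    jurisdiction: str
    risk_score: float
    notes: list[str] = field(default_factory=list)

def from_sanctions_hit(hit: dict) -> Case:
    """Adapter for a hypothetical sanctions-screening feed."""
    return Case(case_id=hit["id"], source_system="sanctions",
                entity_id=hit["party"], jurisdiction=hit["country"],
                risk_score=0.9)

def from_tx_monitor(alert: dict) -> Case:
    """Adapter for a hypothetical transaction-monitoring feed."""
    return Case(case_id=alert["alert_id"], source_system="tx_monitoring",
                entity_id=alert["account"], jurisdiction=alert["region"],
                risk_score=alert["score"])
```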

To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.

When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
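The sketch below illustrates the triage idea under stated assumptions, using scikit-learn’s IsolationForest: transactions are scored against historical activity, and only those anomalous enough reach an investigator. The features and threshold are placeholders; a production system would draw on far more signal.

```python
# Triage sketch: train on historical behaviour, score new activity, and
# suppress low-risk alerts so investigators see only the top tier.
import numpy as np
from sklearn.ensemble import IsolationForest

# Hypothetical feature vectors: [amount, hour_of_day, txns_last_24h]
history = np.array([[120.0, 14, 3], [80.0, 10, 2], [200.0, 16, 4],
                    [95.0, 11, 1], [150.0, 15, 5]])

model = IsolationForest(contamination=0.05, random_state=0).fit(history)

def triage(batch: np.ndarray, threshold: float = -0.1) -> np.ndarray:
    """Return only the rows anomalous enough to deserve a human look."""
    scores = model.decision_function(batch)  # lower = more anomalous
    return batch[scores < threshold]

# A large small-hours payment surfaces; routine daytime activity does not.
suspicious = triage(np.array([[9500.0, 3, 40], [110.0, 13, 2]]))
print(suspicious)
```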

Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.

Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanction screening tools and watchlist screening, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.
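As an illustration of the screening step, here is a deliberately simple fuzzy-match sketch using only the Python standard library. Real screening engines add transliteration, alias lists, and date-of-birth matching; the names and threshold here are hypothetical.

```python
# Toy watchlist screening: flag payment parties whose names nearly match
# a list entry, catching misspellings that exact matching would miss.
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Shell Holdings", "Maria Gonzales"]

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, score))
    return hits

print(screen("Ivan Petrof"))  # the near-miss spelling still triggers a hit
```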

Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.
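The benefit of that structured format is easy to see in code: party names sit in well-defined XML elements rather than in free text. The fragment below is a heavily simplified pacs.008-style message, not a complete or validated one.

```python
# Sketch: pull party names out of a simplified ISO 20022 credit-transfer
# fragment so each one can be fed into sanctions screening.
import xml.etree.ElementTree as ET

NS = {"p": "urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08"}

SAMPLE = """<Document xmlns="urn:iso:std:iso:20022:tech:xsd:pacs.008.001.08">
  <FIToFICstmrCdtTrf><CdtTrfTxInf>
    <Dbtr><Nm>Acme Shell Holdings</Nm></Dbtr>
    <Cdtr><Nm>Jane Smith</Nm></Cdtr>
  </CdtTrfTxInf></FIToFICstmrCdtTrf>
</Document>"""

root = ET.fromstring(SAMPLE)
for role in ("Dbtr", "Cdtr"):  # debtor and creditor parties
    for name in root.findall(f".//p:{role}/p:Nm", NS):
        print(role, "->", name.text)  # each name goes to screening
```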

This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.

The future of compliance

As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.

To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.


Cultivating an Intuitive and Effective Security Culture

John Trest, Chief Learning Officer, VIPRE Security Group, explains how businesses can cultivate a security culture by overcoming security training barriers.

Research shows that human behaviour remains the leading driver of data breaches – whether through stolen credentials, phishing attacks, misuse, or simple, inadvertent mistakes by well-meaning individuals. Under pressure, employees become susceptible to manipulation, and the complexity of day-to-day work makes this human vulnerability evident – a weakness that bad actors actively look out for and exploit.

Cybersecurity culture

According to behavioural science, employees’ behaviour in the workplace is greatly influenced by the organisation’s existing culture. Whether it’s the successful implementation of technical controls, the likelihood of individuals reporting security incidents, or instances of accidental or malicious insider activity – they are all intricately linked to the cybersecurity culture.

Cybersecurity awareness is the first step to strengthening the human firewall

Good cybersecurity awareness training helps to embed a cybersecurity-conscious culture and security-first attitude in the workplace. Employees and organisations can establish stronger protective measures by enhancing cybersecurity consciousness. Rather than being seen as the “weakest link”, the human should be regarded as the critical defensive barrier for organisations.

With organisations facing increasing risks from social engineering attacks that manipulate behaviour and exploit human error, cybersecurity awareness and training equip employees with the ability to protect digital data from unauthorised access, and respond effectively to threats, countering intentional and unintentional security compromises.

Barriers to effective cybersecurity awareness training

Some key barriers typically impede the successful delivery of cybersecurity awareness training programs, jeopardising organisations’ security posture.

Poor employee engagement – When employees see training as boring or disconnected from their work, engagement suffers. Many security awareness programs compound this issue through complexity, excessive length, lack of relevant scenarios and imagery, and poor accessibility, creating barriers to participation and knowledge retention.

Lack of knowledge retention – Studies demonstrate that significant portions of newly learned information fade from memory rapidly, particularly when cybersecurity training occurs only annually. Such large breaks in training frequency create dangerous knowledge gaps that expose organisations to various security vulnerabilities.

Poor motivation – Cybersecurity training must inform and inspire employees to become active security defenders. Explaining the “why” effectively helps drive behavioural change through extrinsic motivation. However, addressing the “why me” question is crucial for developing more compelling intrinsic motivation. This personal context helps employees understand not just cybersecurity’s general importance, but its specific relevance to their workplace roles, themselves, and their loved ones. Intrinsic motivation is essential for lasting behavioural change and cultivating a truly security-conscious organisational culture. When employees personally connect with security practices, they transform from passive rule-followers to engaged protectors of company assets.

Content obsolescence – The dynamic evolution of security threats challenges cybersecurity awareness efforts, as today’s effective training may prove inadequate against tomorrow’s threats. When content becomes outdated, employees remain vulnerable to new attack techniques. Organisations must embrace continuous learning by implementing dynamic training programs that integrate seamlessly into employee workflows, incorporating emerging threats. By maintaining current, relevant training materials, organisations can ensure employees remain prepared to recognise and respond to evolving cybersecurity threats, ultimately preserving a robust security posture.

Undue focus on regulatory compliance – While regulatory compliance matters, it shouldn’t be the primary metric for cybersecurity awareness and training. Instead, programs should be evaluated by quantifiable improvements: reduced phishing clicks, increased reporting rates, fewer intrusions and breaches, decreased damage, and lower overall cyber risk.

Overcoming security awareness and training barriers

The goal must be to adopt a more positive approach to security awareness and to view employees as assets that contribute to a cybersafe workplace. This fosters a positive culture in which employees feel more confident about their own actions when handling potential threats.

The cornerstone of engaging security training is twofold: convenience and relevance. When employees can easily access content that directly applies to their roles, they’re naturally more inclined to participate fully and retain critical information. This approach transforms security awareness from an obligatory task into a valuable, integrated part of the workday.

Some thoughts to help overcome the security awareness and training barriers:

Regular reinforcement and knowledge retention focus – To address retention challenges, implement training solutions featuring current, applicable, and engaging content. Incorporate evidence-based learning techniques, including interactive elements, straightforward messaging, and real-life scenarios, to enhance retention of information and best practices.

Critical to long-term knowledge retention is the adoption of microlearning approaches. These methodologies divide security education into brief, compelling modules delivered frequently throughout the year. This short-form content helps keep the focus and maintains the attention of employees. By reinforcing key concepts shortly after initial exposure, microlearning creates multiple touchpoints that combat natural memory decay. In doing so, organisations transform cybersecurity awareness from an annual chore into an ongoing, sustainable practice that strengthens organisational security posture.

Gamification – Gamification can be a very useful tool for motivating learners to pay attention to and engage with a learning experience. Because the human brain is stimulated by rewards in games, the knowledge gained from this type of learning experience is retained for longer.

Though even simple gamification, such as points or leaderboards, can have a positive impact on learners, the best way to leverage gamification elements in a learning experience is for them to be interwoven and inherent to the content. For instance, in a gamified cybersecurity scenario, players could assume the role of a white-hat hacker tasked with crafting convincing scam emails to fool unsuspecting staff. By spending time in a hacker’s shoes, players learn how cybercriminals operate and how to protect themselves. And the narrative built around the game’s mechanics makes the interactivities more relevant and compelling.

Role-specific training – Far too often, a broad-brush approach to cybersecurity training is used, making it less relevant for some staff. Targeted training designed for different workplace roles is more effective. For example, a company’s risk and compliance team needs cyber training that takes into account the demands of regulatory bodies, finance teams need to know about business email compromise, security teams must be trained on advances in threat detection, end users must understand how to spot a phishing email or deepfake, and so forth. Training tailored specifically for business leaders is equally important.

Quality training – The quality of the training experience can make all the difference. Security awareness training is a specialist discipline, drawing on adult learning trends, technology, and best practices. Specialist security trainers and instructional designers know how to get employees to engage with the program, based on an appreciation of employees’ intrinsic and extrinsic motivations alongside their role-specific requirements.

Cybersecurity culture refers to the collective mindset and behaviours of an organisation’s employees toward protecting information assets. It involves integrating security practices into daily activities, fostering awareness and vigilance, and encouraging proactive reporting of incidents. It also reflects the unspoken beliefs towards security in the organisation. A strong cybersecurity culture is important to help reduce risks by making security a shared responsibility.


Empowering banks to protect consumers: The impact of the APP Fraud mandate

Source: Finance Derivative

Thara Brooks, Market Specialist, Fraud, Financial Crime & Compliance at FIS

On the 7th October last year, the APP (Authorised Push Payment) fraud reimbursement mandate came into effect in the UK. The mandate aims to protect consumers, but it has already come under immense scrutiny, receiving both support and criticism from all market sectors. But what does it mean for banks and their customers?

Fraud has become a growing concern for the UK banking system and its consumers. According to the ICAEW, the total value of UK fraud stood at £2.3bn in 2023, a 104% increase since 2022, with estimates that the evolution of AI will lead to even bigger challenges. As the IMF points out, greater digitalisation brings greater vulnerabilities, at a time when half of UK consumers are already “obsessed” with checking their banking apps and balances.

These concerns have contributed to the implementation of the PSR’s (Payment Systems Regulator’s) APP fraud mandate, which requires victims of APP fraud to be reimbursed. APP fraud occurs when somebody is tricked into authorising a payment from their own bank account. Unlike more traditional fraud, such as payments made from a stolen bank card, APP fraud previously fell outside the scope of conventional fraud protection, as the transaction is technically “authorised” by the victim.

The £85,000 Debate: A controversial adjustment

The regulatory framework for the APP fraud mandate was initially introduced in May 2022, with the maximum level of mandatory reimbursement originally set at £415,000 per claim. However, the PSR significantly reduced the maximum reimbursement value to £85,000 when the mandate came into effect, causing widespread controversy.

According to the PSR, the updated cap will see over 99% of claims (by volume) being covered, with an October review highlighting just 18 instances of people being scammed for more than £415,000, and 411 instances of more than £85,000, from a total of over 250,000 cases throughout 2023 – 411 out of 250,000 is less than 0.2%, leaving well over 99% of claims fully within the cap. “Almost all high value scams are made up of multiple smaller transactions,” the PSR explains, “reducing the effectiveness of transaction limits as a tool to manage exposure.”

The reduced cap makes a big difference on multiple levels. For financial institutions and payment service providers (PSPs), the lower limit means they’re less exposed to high-value claims. The reduced exposure to unlimited high-value claims has the potential to lower compliance and operational costs, while the £85,000 cap aligns with the Financial Services Compensation Scheme (FSCS) threshold, creating broader consistency across financial redress schemes.

There are naturally downsides to the lower limit, with critics highlighting significant financial shortfalls for victims of high-value fraud. The lower cap may reduce public confidence in the financial system’s ability to protect against fraud, particularly for those handling large sums of money, while small businesses, many of which often deal with large transaction amounts, may find the cap insufficient to cover losses.

The impact on PSPs and their customers

With PSPs responsible for APP fraud reimbursement, institutions need to take the next step when it comes to fraud detection and prevention to minimise exposure to claims within the £85,000 cap. Customers of all types are likely to benefit from more robust security as a result.

The Financial Conduct Authority’s (FCA’s) recommendations include strengthening controls during onboarding, improving transaction monitoring to detect suspicious activity, and optimising reporting mechanisms to enable swift action. Such controls are largely in line with the PSR’s own recommendations, with the institution setting out a number of steps in its final policy statement in December 2023 to mitigate APP scam risks.

These include setting appropriate transaction limits, improving ‘know your customer’ controls, strengthening transaction-monitoring systems and stopping or freezing payments that PSPs consider to be suspicious for further investigation.
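As a purely illustrative sketch of how such controls might compose, the snippet below holds a payment that would breach a daily limit and freezes one whose payee carries a fraud flag. The limit, statuses, and field names are assumptions, not values prescribed by the PSR or FCA.

```python
# Toy payment-control pipeline: screen the payee, apply limits, release.
from dataclasses import dataclass

DAILY_LIMIT = 25_000.0  # hypothetical per-customer daily limit

@dataclass
class Payment:
    payer: str
    payee: str
    amount: float

def process(payment: Payment, sent_today: float, payee_flagged: bool) -> str:
    if payee_flagged:
        return "FROZEN: payee matched a fraud intelligence flag"
    if sent_today + payment.amount > DAILY_LIMIT:
        return "HELD: daily limit exceeded, pending investigation"
    return "RELEASED"

print(process(Payment("alice", "bob", 30_000.0),
              sent_today=0.0, payee_flagged=False))
# -> HELD: daily limit exceeded, pending investigation
```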

All these measures will invariably improve consumer experience, increasing customers’ confidence to transact online safely, as well as giving them peace of mind with quicker reimbursement in case things go awry.

Going beyond the APP fraud mandate

If the PSR’s mandate can steer financial institutions towards implementing more robust security practices, it can only be a good thing. It’s not the only tool that’s shaping the financial security landscape, however.

In October 2024, the UK government introduced new legislation granting banks enhanced powers to combat fraud. An optional £100 excess on fraud claims has been introduced to encourage customer caution and combat moral hazard, while the Treasury has strengthened prevention by giving high street banks new powers to delay payments suspected of being fraudulent by up to three days while they investigate. The extended processing time for suspicious payments may lead to delays in legitimate transactions, making transparent communication and robust safeguards essential to maintain consumer trust.

Further collaborative efforts, such as Meta’s partnership with UK banks through the Fraud Intelligence Reciprocal Exchange (FIRE) program, can also aid the fight against fraud. Thanks to direct intelligence sharing between financial institutions and the world’s biggest social media platform, FIRE enhances the detection and removal of fraudulent accounts across platforms such as Facebook and Instagram, not only disrupting scam operations, but also fostering a safer digital environment for users. The early stages of the pilot have led to action against thousands of scammer-operated accounts, with approximately 20,000 accounts removed based on shared data.

Additionally, education and awareness are crucial measures to protect consumers against APP fraud. Several high street banks have upgraded their banking channels to share timely content about the signs of potential scams, with increased public awareness helping consumers identify and avoid fraudulent schemes.

Improvements in policing strategies are also significantly contributing to the mitigation of APP fraud. Specialised fraud units within police forces have enhanced the precision and efficiency of investigations. The City of London Police and the National Fraud Intelligence Bureau are upgrading the technology for Action Fraud, providing victims with a more accessible and customer-friendly service. Collaborative efforts among police, banks, and telecommunications firms, exemplified by the work of the Dedicated Card and Payment Crime Unit (DCPCU), have enabled the swift exchange of information, facilitating the prompt apprehension of scammers.

How AI is expected to change the landscape

The coming months will be critical in assessing these changes, as institutions, businesses and the UK government work together to shape security against fraud in the ever-changing world of finance.

While fraud is a terrifyingly big business, it’s only likely to increase with the evolution of AI, making it even more critical that such changes are effective. According to PwC, “There is a real risk that hard-fought improvements in fraud defences could be undone if the right measures are not put in place to defend against fraud in an AI-enabled world.”

Chatbots can be used as part of phishing scams, for example, and AI systems can already read text and reproduce sampled voices, making it possible to send messages from “relatives” whose voices have been spoofed in a similar manner to deepfakes.

Along with other innovations, tools and collaborations, however, the APP fraud mandate, UK legislation and FIRE can all contribute towards countering such technological threats. Together, these measures can give financial institutions a much-needed boost in the fight against fraud, providing a more secure future for customers.
