Harnessing the sweet spot for AI and risk management

Craig Adams, Managing Director, EMEA at Protecht, examines why it is important to weigh the AI opportunity against its challenges when it comes to risk management

AI is here to stay and set to revolutionise the way many organisations manage their key business functions forever – risk and compliance included.

When ChatGPT first launched to the public in late 2022, it reached one million users in five days and surpassed 100 million monthly users in just two months. It didn’t take long before major organisations like BT were announcing plans to replace thousands of workers with artificial intelligence (AI), while newspapers were quickly filled with stories about AI already doing a better job than trained humans at various business tasks and applications.

Without a doubt, AI presents a huge opportunity across risk and compliance functions, particularly in automating everyday mundane tasks, offering rapid assessments, and improving the understanding and management of the risks organisations face. Whether it’s identifying gaps in policies and control frameworks or analysing thousands of pages of regulations across multiple jurisdictions in a matter of seconds, the potential is truly enormous.

However, that’s not to say it doesn’t create a few risks of its own as well. First and foremost, risk management and compliance functions are in the very earliest stages of AI integration, which means the current lack of understanding will almost certainly lead to teething problems and mistakes. Indeed, in many organisations, risk professionals find themselves working round the clock to understand how best to retrospectively integrate AI into long-established, well-run programmes and processes.

Furthermore, AI is far from flawless in its current iteration. For all the positive headlines generated, ChatGPT has also garnered numerous negative ones, particularly relating to high-profile gaffes, biased content, and limited knowledge of the world beyond 2021 (at least for now).

Therefore, in order to make the most of AI’s vast potential without falling foul of its current limitations, industry professionals need to look very closely at both the opportunities and challenges it presents, before finding the right path forwards to successful implementation. In fact, understanding the technology, its application, and the risks it poses, should all be considered fundamental requirements for risk managers before partial or full-scale deployment is even considered.

Realising AI’s power and potential

Just like many other industries, one of the biggest opportunities that AI presents to risk and compliance professionals is its ability to automate time-consuming and repetitive tasks that humans often struggle with because of their mundane nature. For example, AI-driven customer service solutions have been shown not only to reduce operational costs, but also to improve the quality of service.

Behind the customer service function, however, AI has the potential to provide invaluable insights into an organisation’s risk profile by analysing vast amounts of data at a pace incomparable to human capabilities. For instance, AI can be used to assess thousands of pages of complex global regulations before making accurate recommendations on exactly where specific regulations apply. This kind of capability can significantly reduce the workloads of risk and compliance professionals, enabling them to spend much more of their time on strategically important activities, while also improving overall business security.

However, it’s important to note that AI-powered systems are only ever as good as the data they have to work from. If an AI system relies on flawed data, it may fail to identify critical risks or to comply with relevant regulations, and that flawed data can come to distort the system’s own reasoning over time.

Organisations must therefore ensure that the data feeding into their AI systems is accurate and unbiased at all times, which isn’t easy. Failure to do so raises the risk not only of serious errors but also of huge reputational damage, both to the organisations involved and to the application of AI across the profession.

Another crucial concern is the potential replacement of human workers and the impact on the wider employment market. While it’s clear that AI will increasingly be used to automate a range of functions currently carried out by human members of staff, replacing people entirely isn’t without its drawbacks. Most obviously, there is an inherent and irreplaceable value in human insight, judgement, and decision-making, especially in areas as critical as risk management, where experience plays a massive role across the board.

Harnessing the sweet spot for risk management

So, with all these considerations in mind, how can organisations find the sweet spot that allows them to enjoy the benefits of AI while guarding themselves against the inherent risks?

Here is a best-practice checklist that will ensure a structured approach to AI deployment, with full transparency and visibility across the risk management function:

  • Start by assessing AI’s impact on the organisation’s overall risk profile and identify any compliance challenges created as a result.
  • Develop organisational controls, such as an AI policy that defines acceptable use of AI by employees, and technology controls that limit access to and monitor use of AI services over the web in line with your policy.
  • Raise awareness through employee communication and training on what they can and can’t do with AI, and outline the risks that knowledge gaps or even information fabrication from this sort of technology can bring.
  • Define your risk appetite around AI, so you can agree how hungry or averse you are as an organisation when it comes to embracing both the opportunities and the downside risk it represents when it goes wrong, and develop metrics to measure this.
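To make the “technology controls” item in the checklist above concrete, the sketch below shows one simple way such a control might be expressed in code: checking a request to a web-based AI service against an approved-services list and a data-classification rule. This is a minimal illustration only; the service names, policy fields, and rules are all hypothetical, and a real deployment would typically enforce this at a proxy or gateway rather than in application code.

```python
# Hypothetical sketch of a technology control from the checklist above:
# requests to AI services are checked against an acceptable-use policy.
# All domains and policy rules here are invented examples.

from urllib.parse import urlparse

# Assumed policy: which AI services are approved, and what data may go to them.
AI_POLICY = {
    "approved-ai.example": {"approved": True, "allowed_data": "public"},
    "unvetted-ai.example": {"approved": False, "allowed_data": None},
}

def check_ai_request(url: str, data_classification: str) -> tuple[bool, str]:
    """Return (allowed, reason) for an outbound request to an AI service."""
    host = urlparse(url).hostname or ""
    rule = AI_POLICY.get(host)
    if rule is None:
        return False, f"{host} is not on the approved AI services list"
    if not rule["approved"]:
        return False, f"{host} is explicitly blocked by policy"
    if data_classification != rule["allowed_data"]:
        return False, f"{data_classification} data may not be sent to {host}"
    return True, "request permitted; usage logged for monitoring"

allowed, reason = check_ai_request("https://approved-ai.example/api", "confidential")
print(allowed, reason)
# prints: False confidential data may not be sent to approved-ai.example
```

Even a basic control like this gives the risk function the two things the checklist asks for: a hard boundary on what employees can send where, and a log of AI usage to monitor against policy.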

In the longer term, it is vital to establish effective controls over the remit given to AI and its performance levels. These should include a commitment to manual oversight, ongoing ad-hoc testing, and the implementation of any other relevant mechanisms to ensure AI operates within the organisation’s risk appetite and compliance framework. In this context, a hybrid approach, where AI and humans work in tandem, is most likely to provide the best results.
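The hybrid approach described above can be sketched as a simple human-in-the-loop routing rule: AI-generated assessments stand on their own only when the model reports high confidence, and everything else is queued for manual oversight. The threshold, field names, and routing labels below are assumptions for illustration, not a prescribed design.

```python
# Hypothetical sketch of the hybrid AI-plus-human approach described above:
# AI output is auto-accepted only above a confidence threshold; otherwise
# it is routed to a human reviewer. Threshold and labels are assumptions.

from dataclasses import dataclass

@dataclass
class Assessment:
    item: str
    risk_rating: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

# Assumed appetite setting - each organisation would tune this for itself.
CONFIDENCE_THRESHOLD = 0.9

def route(assessment: Assessment) -> str:
    """Decide whether an AI assessment can stand or needs manual oversight."""
    if assessment.confidence >= CONFIDENCE_THRESHOLD:
        return "auto-accept (still sampled in ongoing ad-hoc testing)"
    return "queue for human review"

print(route(Assessment("vendor onboarding", "medium", 0.95)))
# prints: auto-accept (still sampled in ongoing ad-hoc testing)
print(route(Assessment("new jurisdiction rules", "high", 0.60)))
# prints: queue for human review
```

The point of the sampling note in the auto-accept branch is that even high-confidence output remains subject to the ad-hoc testing and manual oversight commitments mentioned above, keeping AI within the organisation’s risk appetite.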

While many organisations will find the prospect of implementing an AI-powered risk management programme daunting, there’s never been a better time to start exploring it.
