
Clearing the Air: The Importance of Explainable AI in Capital Markets

Source: Finance Derivative

By its very nature, the investment management industry relies heavily on analysis to make good decisions. However, investment managers, like most people, also use their past experiences as mental shortcuts to simplify complex decisions – and this can lead to cognitive biases. That is why “perfect rationality” will always be just out of reach: by our very nature, humans cannot be truly objective at all times.

As such, AI and machine learning (AI-ML) can be valuable tools for enhancing human decision-making, using predictive analytics to counteract those biases. Furthermore, because these technologies operate at speed, decisions can be made faster, generating cost and time savings. In fact, according to Accenture research, AI could add more than $1 trillion in value to the financial services industry by 2035.

Unfortunately, as of 2023, many investment management firms still hesitate to implement anything more than the simplest AI tools. A driving reason is the lack of “explainability” behind AI models and their output: people find it hard to buy into decisions and predictions made by AI when they do not understand the inputs and the framework that produced them. If businesses cannot trust AI, they cannot reap its rewards: improved efficiency, greater accuracy and reduced workloads. It’s time to turn that around.

The reins holding us back

Many firms have long used statistical models to support decision-making. With the exponential growth in data, combined with cheap storage and low-cost computing, AI-based techniques have become a popular way to deliver actionable results for businesses. Despite reservations about the technology’s potential, the consensus is that AI for capital markets can improve how firms operate. Indeed, according to IDC, banking as a whole will be one of the two industries spending the most on AI solutions by 2025.

The problem is that organisations often invest in AI programs without achieving buy-in from internal stakeholders, which reduces their potential benefits. In short, people don’t trust what they don’t understand.

We can see this trend in the fact that simpler applications of AI, such as basic regression models, have gained traction in the industry in recent years because they are comparatively easy to understand. Other models, such as deep neural networks, that use unstructured data to formulate their decisions – for example, on whether a transaction is fraudulent – are more complex and harder to interpret. As such, these tools are used far less frequently.
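To make that contrast concrete, consider a minimal sketch in Python. Everything in it – the fraud features, the synthetic data, the scikit-learn model choice – is an illustrative assumption, not something from the article. The point is that a regression model’s learned coefficients can be read directly as the direction and strength of each input’s influence, which is a large part of why such models win trust more easily than deep networks.

    # Hypothetical sketch: a fraud-flag model whose logic can be read
    # straight from its coefficients. Features and data are invented.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(42)
    X = rng.normal(size=(5000, 3))  # columns: txn_amount, txn_hour, account_age
    y = (1.2 * X[:, 0] - 0.8 * X[:, 2] + rng.normal(scale=0.7, size=5000) > 1.5).astype(int)

    model = LogisticRegression().fit(X, y)

    # Each coefficient states how a one-unit change in that feature shifts
    # the log-odds of a fraud flag -- the model's reasoning is visible in a
    # way a deep neural network's is not.
    for name, coef in zip(["txn_amount", "txn_hour", "account_age"], model.coef_[0]):
        print(f"{name}: {coef:+.2f} log-odds per unit")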

This uneven uptake of AI tools is exacerbated by the lack of transparency around the features that influence the decisions an AI algorithm makes. To put it another way, it may not always be clear how we get from Point A to Point B – often called the black box effect. This breeds distrust in the tool, and if there’s no trust, there’s no buy-in. Crucially, this aversion to advanced technologies can lead to sub-optimal decision-making.

Building trust to get teams on board

Explainability builds trust in AI tools by showing transparently how an AI or ML model derives its outputs from particular inputs. When models are explainable, users and their stakeholders understand them, trust them, and are more likely to use them.
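The article does not name a specific technique, but one simple, model-agnostic way to show which inputs drive a model’s output is permutation importance: shuffle one feature at a time and measure how much the model’s score degrades. The sketch below assumes scikit-learn and an invented fraud dataset with hypothetical feature names.

    # Hypothetical sketch: explaining a model's behaviour with permutation
    # importance. Model choice, features and data are illustrative only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    X = rng.normal(size=(3000, 3))  # e.g. txn_amount, merchant_risk, txn_hour
    y = (X[:, 0] + 0.6 * X[:, 1] + rng.normal(scale=0.6, size=3000) > 1).astype(int)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

    model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)

    # Shuffling an influential feature hurts the model's score; the size of
    # the drop is a transparent measure of that feature's importance.
    result = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
    for name, drop in zip(["txn_amount", "merchant_risk", "txn_hour"], result.importances_mean):
        print(f"{name}: mean score drop {drop:.3f} when shuffled")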

One crucial way to do this is to involve end users in AI model design and to establish mechanisms for fine-tuning models. The teams that use AI should work with the teams that build it.

Another solution is to put measures in place that test the effectiveness of the outputs – for example, continuously retraining the model with additional data – so that confidence in the tool grows over time. When internal decision-makers fully buy into using AI-ML, they can enhance human intelligence with technology to help overcome the cognitive biases that hamper human decision-making.
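The article leaves the retraining mechanism open. One minimal pattern – sketched below with synthetic data, arbitrary batch sizes and an assumed model, none of which come from the source – is to refit on a growing window of data each period and log a score on a fixed hold-out set, so confidence in the tool is backed by a visible track record rather than assertion.

    # Hypothetical sketch: retrain on accumulated data each period and log
    # hold-out performance. Data, batch sizes and model are invented.
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(7)

    def new_batch(n=1000):
        # Stand-in for each period's freshly labelled transactions
        X = rng.normal(size=(n, 4))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.8, size=n) > 0).astype(int)
        return X, y

    X_hist, y_hist = new_batch()
    X_val, y_val = new_batch(2000)  # fixed hold-out set keeps scores comparable

    for period in range(1, 4):
        X_new, y_new = new_batch()
        X_hist = np.vstack([X_hist, X_new])        # grow the training window
        y_hist = np.concatenate([y_hist, y_new])
        model = GradientBoostingClassifier().fit(X_hist, y_hist)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"period {period}: {len(y_hist)} training rows, hold-out AUC {auc:.3f}")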

Reaping the rewards of hybrid intelligence

Making the most of AI-ML models is about enhancing human decisions, not replacing them. The benefits of hybrid intelligence are only unlocked when AI models are embraced side by side with human decision-making in operations, rather than relying too heavily on either one.

By combining computer-processed data with human insight, firms can make better investment decisions, enhance risk management and adhere more closely to regulations without being bogged down by human error or bias. According to a recent Deloitte study, financial institutions that use AI in the investment process grow assets under management (AuM) by 8% and raise productivity by 14%. Today’s investment firms cannot afford to miss out on such gains.

The capital markets industry currently faces low margins, high pressure and increasing competition. Enhancing human decision-making capabilities with advanced AI will be vital for firms to stay competitive.

This is not a landscape where humans will be replaced by machines; rather, human intelligence must be augmented with machine intelligence to lower operational risk and uncover emerging opportunities. The business benefits of embracing hybrid intelligence will only materialise when there is trust in, and buy-in for, the tools of the trade.
