Conversational AI: What is it and how can it drive growth in the financial services sector?

Source: Finance Derivative

Over the last seven years, there has been a significant shift towards digital engagement. Its growth shows no signs of slowing, with consumers becoming more accustomed to using digital channels for all aspects of life. They’re using mobile banking apps, receiving real-time security alerts from artificial intelligence (AI)-infused virtual assistants, and even moving money between accounts using just their voice.

And in many cases, consumers are interacting with AI without even realizing it. From waking a home voice assistant with a simple “Hey” to using speech-to-text for hands-free typing, these everyday features are built on Conversational AI.

What is Conversational AI?

Conversational AI is the application of machine learning to allow humans to interact naturally with devices, machines, and computers by simply speaking to them. As a person speaks, the device works to understand and find the best answer, providing a response with its own natural-sounding speech.

It may sound simple, but the technology behind conversational AI is complex. It involves a multi-step process that requires a massive amount of computing power. Delivering a seamless user experience requires several complex models that need to run in less than 300 milliseconds.

Conversational AI is primarily built on three key processes, illustrated in the sketch that follows the list:

  • Automatic Speech Recognition (ASR), which takes spoken words and converts them into readable text.
  • Natural Language Processing (NLP), which reads written text, understands the context and intent and then generates an intelligent text response.
  • Text-to-Speech (TTS), which converts the NLP text response to natural-sounding speech, with human-like intonation and clearly articulated words.
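
To make that flow concrete, here is a minimal sketch of how the three stages chain together. The function names and canned replies are placeholders invented for this illustration; a real system would call trained ASR, NLP and TTS models, but the overall shape of the pipeline, and the roughly 300-millisecond budget mentioned above, is the same.

```python
import time

# Placeholder stages: in a real deployment each would call a trained model
# (an ASR network, an NLP/dialogue model, and a TTS vocoder respectively).
def asr_transcribe(audio_frame: bytes) -> str:
    """Automatic Speech Recognition: spoken audio in, readable text out."""
    return "what is my current account balance"

def nlp_respond(text: str) -> str:
    """NLP: read the text, infer context and intent, generate a text reply."""
    if "balance" in text:
        return "Your current account balance is 1,250 pounds."
    return "Sorry, could you say that again?"

def tts_synthesize(text: str) -> bytes:
    """Text-to-Speech: turn the reply into natural-sounding audio."""
    return text.encode("utf-8")  # stand-in for synthesized speech samples

def handle_utterance(audio_frame: bytes) -> bytes:
    start = time.perf_counter()
    text = asr_transcribe(audio_frame)    # step 1: ASR
    reply = nlp_respond(text)             # step 2: NLP
    audio_out = tts_synthesize(reply)     # step 3: TTS
    elapsed_ms = (time.perf_counter() - start) * 1000
    if elapsed_ms > 300:                  # the ~300 ms budget noted above
        print(f"warning: pipeline took {elapsed_ms:.0f} ms")
    return audio_out

if __name__ == "__main__":
    handle_utterance(b"\x00" * 16000)     # dummy audio buffer
```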

Transforming the Customer Experience with Conversational AI

The financial services industry is under pressure, with rising levels of risk, higher volumes of customer service enquiries, and the need to develop digital channels to offset branch closures, especially in a post-COVID-19 environment. A decline of just one point in a bank’s customer experience score can equal $124 million in lost revenue for multi-channel banks.

Conversational AI can significantly improve the customer service experience throughout the customer journey. AI can enable customer service agents to deliver an improved customer experience, providing them with real-time insights to reduce their workload and deliver a speedier interaction for customers. It can generate personalized, recommended offers and next-best actions for each customer based on their individual data. It can even transcribe calls and take notes for the agent, reducing their post-call reporting time and allowing the agent to quickly and accurately support more customers.

With growing volumes of customer calls, a virtual AI assistant can be available day and night to assist with simple enquiries such as account-related questions or product applications. Customers can have conversational, human-like dialogue with intelligent, instantaneous responses, customized for the user it’s talking to. AI virtual assistants can also support customers with disabilities who might not be able to interact with the keyboard or screen.

UK-based NatWest’s digital assistant, Cora, is handling 58% more inquiries year on year, completing 40% of those interactions without human intervention. According to Juniper Research, 90% of customer interactions will be automated by 2022, saving banks $7 billion by 2023.

Agents are focused on delivering the best customer experience rather than policing every call, which means fraud can go undetected at the call center. In fact, an estimated 80% of fraud goes undetected today. As a call takes place, conversational AI can spot fraudulent activity such as identity theft by using sentiment and confidence analysis, pattern recognition and voice-based identity authorization.
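
As a rough illustration of the idea, the toy sketch below combines a couple of simple signals from a call transcript, a failed voice-identity check and a handful of suspicious phrases, into a single risk score. The phrases, weights and threshold are invented for the example; production systems rely on trained sentiment, pattern-recognition and voice-biometric models rather than keyword lists.

```python
# Toy illustration: combine simple signals from a call transcript into a
# fraud-risk score. The phrases, weights and threshold are invented for
# the example; real systems use trained models and voice biometrics.
SUSPICIOUS_PHRASES = {
    "read me the full card number": 0.4,
    "urgent transfer": 0.3,
    "change the registered address": 0.2,
}

def fraud_risk(transcript: str, caller_verified: bool) -> float:
    transcript = transcript.lower()
    score = 0.0 if caller_verified else 0.3   # failed voice ID raises risk
    for phrase, weight in SUSPICIOUS_PHRASES.items():
        if phrase in transcript:
            score += weight
    return min(score, 1.0)

call = "Hi, urgent transfer needed today, and change the registered address too."
risk = fraud_risk(call, caller_verified=False)
print(f"risk score: {risk:.2f}", "-> flag for review" if risk > 0.5 else "")
```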

Conversational AI for Document Extraction and Risk Monitoring

Financial applications and market monitoring pull in unstructured data from many sources, such as news, customer applications, events, documents, proprietary data, market moves and filings. To collate such a large volume of varied data, businesses can use NLP to extract information from documents, regardless of language or layout. It can perform text analytics, entity and event extraction, and relevance and sentiment analysis to surface the most important information for decision making.

This type of AI document analysis can detect early warning signs of risk, like defaults, bankruptcies, lawsuits, or fraud. It can also improve lending decisions, be used for investment risk management and accelerate due diligence for Anti-Money Laundering (AML) and Know Your Customer (KYC) compliance.
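
The sketch below is a deliberately simplified stand-in for that kind of extraction: it pulls monetary amounts and risk-related event words out of a snippet of unstructured text using regular expressions. The regex, event list and sample filing are invented for the example; a real pipeline would use trained entity- and event-extraction models, but the output, structured risk signals for decision making, is the same in spirit.

```python
import re

# Minimal stand-in for NLP entity and event extraction: pull monetary
# amounts and risk-related event words out of unstructured text.
EVENT_WORDS = {"default", "bankruptcy", "lawsuit", "fraud"}

def extract_signals(text: str) -> dict:
    amounts = re.findall(r"[$€£]\s?\d[\d,.]*\s?(?:million|billion)?", text)
    events = sorted({w for w in EVENT_WORDS if w in text.lower()})
    return {"amounts": amounts, "events": events}

filing = ("Acme Holdings disclosed a $120 million lawsuit and warned that a "
          "default on its 2025 notes is possible.")
print(extract_signals(filing))
# {'amounts': ['$120 million'], 'events': ['default', 'lawsuit']}
```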

By automating this monitoring, businesses can minimize the cost of risk mitigation, target investment opportunities with alpha returns, and gain operational efficiencies by customizing NLP for specific use cases. Banks and insurers can also use document processing to handle all types of applications across unstructured document types, speeding up turnaround times, reducing error rates and significantly lowering document processing costs.

Accelerating Business Performance with Conversational AI

As AI continues to become more mainstream, there is a growing shift towards e-commerce and a digital-first customer experience, with people using AI in their day-to-day activities; in fact, 46 percent of people use it every single day.

Throughout the customer journey, conversational AI can deliver a smoother, faster experience, be on hand to help all day, every day, enable agents to do their best work and reduce fraud, all at the same time.


The generative AI revolution is here – but is your cloud network ready to embrace it?

Paul Gampe, Chief Technology Officer, Console Connect

Generative Artificial Intelligence is inserting itself into nearly every sector of the global economy as well as many aspects of our lives. People are already using this groundbreaking technology to query their bank bills, request medical prescriptions, and even write poems and university essays.

In the process, generative AI has the potential to unlock trillions of dollars in value for businesses and radically transform the way we work. In fact, current predictions suggest generative AI could automate up to 70 percent of employees’ time today.

But regardless of the application or industry, the impact of generative AI can be most keenly felt in the cloud computing ecosystem.

As companies rush to leverage this technology in their cloud operations, it is essential to first understand the network connectivity requirements – and the risks – before deploying generative AI models safely, securely, and responsibly.

Data processing

One of the primary connectivity requirements for training generative AI models in public cloud environments is affordable access to datasets at scale. By their very definition, large language models (LLMs) are extremely large. Training them requires vast amounts of data and hyper-fast compute, and the larger the dataset, the greater the demand for computing power.

The enormous processing power required to train these LLMs is only one part of the jigsaw. You also need to manage the sovereignty, security, and privacy requirements of the data transiting in your public cloud. Given that 39 percent of businesses experienced a data breach in their cloud environment in 2022, it makes sense to explore the private connectivity products on the market which have been designed specifically for high performance and AI workloads.

Regulatory trends

Companies should pay close attention to the key public policies and regulatory trends that are rapidly emerging around the AI landscape. Think of a large multinational bank in New York with 50 mainframes on its premises holding its primary computing capacity: it wants to run AI analysis on that data, but it cannot use the public internet to connect to cloud environments because many of its workloads carry regulatory constraints. Instead, private connectivity gives it a way to reach where the generative AI capability exists while staying within the regulatory frameworks of the financial industry.

Even so, the maze of regulatory frameworks globally is very complex and subject to change. The developing mandates of the General Data Protection Regulation (GDPR) in Europe, as well as new GDPR-inspired data privacy laws in the United States, have taken a privacy-by-design approach whereby companies must implement techniques such as data mapping and data loss prevention to make sure they know where all personal data is at all times and protect it accordingly.
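
As a purely illustrative sketch of the data-mapping idea, the snippet below scans a handful of records for values that look like personal data (an email address, a UK-style phone number) so that their locations are known. The patterns, record names and sample text are invented; real data-mapping and data-loss-prevention tooling covers far more identifier types and data stores.

```python
import re

# Toy data-mapping pass: scan records for values that look like personal
# data so you know where it lives. Patterns here are illustrative only.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "uk_phone": re.compile(r"\+44\s?\d{4}\s?\d{6}"),
}

def map_personal_data(records: dict) -> dict:
    found = {}
    for location, text in records.items():
        hits = [name for name, pat in PII_PATTERNS.items() if pat.search(text)]
        if hits:
            found[location] = hits
    return found

records = {
    "crm/notes/123": "Customer asked to be contacted at jane.doe@example.com",
    "support/ticket/9": "Callback number +44 7700 900123",
    "wiki/page/42": "Quarterly roadmap, no customer data here",
}
print(map_personal_data(records))
# {'crm/notes/123': ['email'], 'support/ticket/9': ['uk_phone']}
```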

Sovereign borders

As the world becomes more digitally interconnected, the widespread adoption of generative AI technology will likely create long-lasting challenges around data sovereignty. This has already prompted nations to define and enforce their own legislation regarding where data can be stored, and where the LLMs processing that data can be housed.

Some national laws require certain data to remain within the country’s borders, but this does not necessarily make it more secure. For instance, if your company uses the public internet to transfer customer data to and from London on a public cloud service, even though it may be travelling within London, somebody can still intercept that data and route it elsewhere around the world.

As AI legislation continues to expand, the only way your company may be able to guarantee it stays within its sovereign border is to use a form of private connectivity while the data is in transit. The same applies to training AI models on the public cloud; companies will need some form of private connectivity from their private cloud to the public cloud where they train their models, and then use that same connectivity to bring the inference models back.

Latency and network congestion

Latency is a critical factor in interactions with people. We have all become latency sensitive, especially with the volume of voice and video calls we experience daily, but the massive datasets used for training AI models can lead to serious latency issues on the public cloud.

For instance, if you’re chatting with an AI bot that’s providing you customer service and latency begins to exceed 10 seconds, the dropout rate accelerates. Therefore, using the public internet to connect your customer-facing infrastructure with your inference models is potentially hazardous for a seamless online experience, and a change in response time could impact your ability to provide meaningful results.

Network congestion, meanwhile, could impact your ability to build models on time. If you have significant congestion in getting your fresh data into your LLMs it’s going to start to backlog, and you won’t be able to achieve the learning outcomes that you’re hoping for. The way to overcome this is by having large pipes to ensure that you don’t encounter congestion in moving your primary data sets into where you’re training your language model.
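
A quick back-of-envelope sketch shows why pipe size matters: the time to move a training dataset scales with its size divided by the usable bandwidth of the link. The dataset size, link speeds and efficiency factor below are illustrative assumptions, not benchmarks.

```python
# Back-of-envelope check on "pipe size": how long does it take to move a
# training dataset into the environment where the LLM is trained?
# All figures below are illustrative, not benchmarks.
def transfer_hours(dataset_tb: float, link_gbps: float, efficiency: float = 0.8) -> float:
    bits = dataset_tb * 8e12                    # terabytes -> bits
    usable_bps = link_gbps * 1e9 * efficiency   # protocol overhead, congestion
    return bits / usable_bps / 3600

for link in (1, 10, 100):  # Gbps
    print(f"{link:>3} Gbps link: {transfer_hours(50, link):6.1f} h for a 50 TB dataset")
# roughly 139 h at 1 Gbps, 14 h at 10 Gbps, 1.4 h at 100 Gbps
```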

Responsible governance

One thing everybody is talking about right now is governance. In other words: who gets access to the data, and where is the trail of approvals for that data recorded?

Without proper AI governance, companies could face serious consequences, resulting in commercial and reputational damage. A lack of supervision when implementing generative AI models on the cloud could easily lead to errors and violations, not to mention the potential exposure of customer data and other proprietary information. Simply put, the trustworthiness of generative AI all depends on how companies use it.

Examine your cloud architecture

Generative AI is a transformative field with untold opportunities for countless businesses, but IT leaders cannot afford to get their network connectivity wrong before deploying its applications.

Remember, data accessibility is everything when it comes to generative AI, so it is essential to define your business needs in relation to your existing cloud architecture. Rather than navigating the risks of the public cloud, the high-performance flexibility of a Network-as-a-Service (NaaS) platform can provide forward-thinking companies with a first-mover advantage.

The agility of NaaS connectivity makes it simpler and safer to adopt AI systems by interconnecting your clouds with a global network infrastructure that delivers fully automated switching and routing on demand. What’s more, a NaaS solution also incorporates the emerging network technology that supports the governance requirements of generative AI for both your broader business and the safeguarding of your customers.

How tech can tackle the manufacturing skills shortage

By Mikko Urho, CEO, Visual Components

In modern times, manufacturers are unable to call upon a constant supply of readily available workers. In fact, the skills shortfall is at its most severe level since 1989 in the UK. A perfect storm of factors such as the cost of living crisis, Brexit, the pandemic, continued economic instability and shifting age demographics have exacerbated the issue.

Now, over three-quarters (77%) of employers are struggling to fill available roles. But without these skills, firms will be severely hindered in their ability to commission, design and optimise their production systems, including any robotic technology they bring in. What actions must organisations take now to prevent their manufacturing lines from being disrupted, or even worse, witnessing a full shutdown?

Helping under-fire teams

As talent pipelines diminish, manufacturers must explore other ways of addressing the growing skills gap. Technology holds promise. Robots can undertake a range of functions that previously fell to staff, and unlike humans, robots don’t tire throughout the day, so the risk of mistakes is much lower. It is also hard for humans to replicate exactly the same level of accuracy when completing a manual task many times over. Modern-day robot deployments can complete welding, cutting, painting and other processes with ease.

However, for robots to fully handle these tasks for humans, they have to be manually programmed. In a survey of manufacturing decision-makers in the UK undertaken by Visual Components, over half (55%) state that manual programming is a necessity to complete welding, cutting, painting and other tasks. This requires a specific human skill set and demands considerable time from the people involved.

Over a third (35%) of manufacturers say that the manual process takes between a week and a month, leaving robots completely idle before they can provide value. It might be even longer if it needs to be replicated across a number of robots from different providers. How can manufacturers set their robots to task straight away?

Building new skills

Robot offline programming (OLP) brings the robot and its work cell into the digital environment. In an intuitive simulated interface, movements and workflows are accurately replicated. Full testing can take place in a sandbox environment before anything is deployed in the real world. Common programming issues around collision avoidance and joint-limit violations can be fully avoided.
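
As a minimal, illustrative example of the kind of check OLP automates, the sketch below validates a programmed trajectory against each joint's limits before anything reaches the real robot. The joint limits, program and six-axis arm are invented for the example; a real OLP package performs such checks alongside full collision detection inside the simulated work cell.

```python
# Minimal illustration of one check that offline programming automates:
# validating a programmed trajectory against each joint's limits before
# anything is deployed on the real robot. Limits and angles are made up.
JOINT_LIMITS_DEG = {  # (min, max) per joint for a hypothetical 6-axis arm
    "j1": (-170, 170), "j2": (-120, 120), "j3": (-170, 170),
    "j4": (-190, 190), "j5": (-120, 120), "j6": (-360, 360),
}

def violations(trajectory: list[dict]) -> list[str]:
    problems = []
    for step, pose in enumerate(trajectory):
        for joint, angle in pose.items():
            lo, hi = JOINT_LIMITS_DEG[joint]
            if not lo <= angle <= hi:
                problems.append(f"step {step}: {joint}={angle} outside [{lo}, {hi}]")
    return problems

program = [
    {"j1": 10, "j2": 45, "j3": -30, "j4": 0, "j5": 90, "j6": 180},
    {"j1": 15, "j2": 130, "j3": -30, "j4": 0, "j5": 90, "j6": 180},  # j2 too far
]
print(violations(program) or "trajectory within joint limits")
```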

OLP provides a number of advantages to manufacturers. Instead of a slower sequential process of programming followed by deployment, concurrent planning allows the two to happen at the same time. The software can identify different features in a workpiece or specific component, including pockets and holes, and incorporate them into a programming procedure. Even more crucially, its straightforward interface means that employees can easily upskill in the programming of robots, effectively plugging the skills gap.

It’s a logical and intuitive solution that can encourage novice users or new recruits to get up to speed. There’s even an opportunity for them to learn how to deploy different robot brands, with functionality across all the major providers. This further broadens the knowledge of staff and prepares them for future integrations.

Many businesses are also adopting remote working practices, and OLP can be incorporated to suit this strategy. Staff can access the system from anywhere, removing the need to be on-site. Not only does this help manufacturers tackle staff shortages, it also encourages greener practices through dispersed workforces. Lastly, the technology futureproofs the business against employee departures: with all knowledge stored safely within the software, organisations protect themselves from the risk of skilled staff leaving or retiring and taking their expertise with them.

Grasping the opportunities

The skills crisis is a significant challenge for UK manufacturers, but it also opens doors for innovation. As various socioeconomic factors intensify worker shortages, manufacturers need to adopt proactive measures to sustain productivity and competitiveness. Leveraging technology, especially through the implementation of robotics and OLP, offers a practical solution to address the skills gap.

OLP improves the efficiency and precision of robotic tasks and provides valuable upskilling opportunities for the workforce. With user-friendly software, even those new to the field can develop their skills and integrate robots into the production line, avoiding the costs and time associated with traditional methods.

While manufacturers may have limited control over the supply of highly skilled workers, they can certainly harness technology to empower their existing employees and drive transformation from within. Embracing these technological advancements mitigates the impact of the skills shortage and crucially positions manufacturers for future growth and innovation.

Harnessing AI to Navigate Regulatory Complexity in Banking and Finance

Source: Finance Derivative

By Harry Borovick, General Counsel, Luminance

The global banking and finance sector is navigating an increasingly complex regulatory landscape, compounded by uncertain macroeconomic conditions, marketplace competition, and heightened customer expectations. These pressures have increased the volume and difficulty of compliance requirements and raised the risk of substantial fines for businesses operating in this sector. Amidst these challenges, AI can offer practical solutions to ensure compliance and mitigate risks.

The Challenge

Whether it’s successfully navigating the transition away from the London Interbank Offered Rate (LIBOR) or remaining compliant with newly implemented regulation like the Digital Operational Resilience Act (DORA), financial institutions are no strangers to new regulations. From antitrust and competition laws to sustainability-focused regulations like the Sustainable Finance Disclosure Regulation (2019/2088), growing regulatory complexity presents significant hurdles for legal departments within financial institutions. Additionally, the sheer volume and fragmented nature of the data at hand adds significant friction to legal workflows.

Legal teams in financial institutions are mandated to stay aware of incoming changes and must be equipped to handle them. After all, non-compliance carries severe economic, operational, and reputational consequences. In 2021, the UK’s Financial Conduct Authority (FCA) issued over £500 million in fines for non-compliance. The stakes are higher than ever, and the repercussions of failing to meet regulatory standards can be catastrophic. For instance, a prominent financial institution faced massive fines for failing to comply with anti-money laundering regulations, even being subjected to the first ever criminal charge issued by the FCA. This event highlights the significant financial and reputational risks involved when institutions fail to adhere to regulatory measures.

However, the issue extends beyond fines and potential financial loss. The stress exerted on industry professionals tasked with ensuring compliance is leading to increased mental health issues and high turnover rates. Reportedly, 60% of compliance staff feel burned out by the responsibilities they face. The pressure to maintain compliance amidst an ever-evolving regulatory environment should not be overlooked, as it may lead to a talent drain within the sector.

The Solution

AI provides a tangible solution to the compliance challenges faced by financial institutions. But what does that look like in practice?

  1. Effective Third-Party Risk Management: Financial institutions must maintain effective third-party risk management to identify and reduce risk across their service providers. This is often a manual, labour-intensive task, but it remains deeply important to compliance. Financial institutions can conduct thorough due diligence by centralising service provider contracts to ensure comprehensive oversight and risk management. AI provides a far more comprehensive ability to search through these documents, automatically surfacing key figures and grouping documents that are conceptually similar.
  2. Accelerated Compliance Processes: AI can automate document routing across the team, ensuring an effective review process. It can automatically flag renewal dates in contracts, reducing the time spent searching for these vital data points (see the sketch after this list).
  3. Empowering Non-Legal Teams: Non-legal departments can use AI to generate standard agreements based on compliant, gold-standard language through self-service contract generation tools, streamlining approvals and reducing delays.
  4. Navigating Global Complexity: Global companies often juggle multiple regulatory regimes, making compliance an even more complex, labour-intensive task. AI tools can quickly and comprehensively analyse data sets in multiple languages, removing barriers in global operations and expediting the document review process.
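
As a simplified illustration of the renewal-date flagging mentioned in item 2, the sketch below scans contract text for dates that appear near renewal or expiry language and surfaces any falling within a 90-day review window. The regular expression, file names and contract wording are invented for the example; commercial tools handle dates in any format and language.

```python
import re
from datetime import date, timedelta

# Simplified sketch of flagging contract renewal dates (see item 2 above):
# find ISO dates near "renew"/"expire" wording and surface any that fall
# within the next 90 days. The pattern and contracts are illustrative.
DATE_RE = re.compile(r"(?:renew\w*|expir\w*)[^.]*?(\d{4}-\d{2}-\d{2})", re.IGNORECASE)

def upcoming_renewals(contracts: dict, window_days: int = 90) -> list[tuple]:
    today = date.today()
    flagged = []
    for name, text in contracts.items():
        for found in DATE_RE.findall(text):
            when = date.fromisoformat(found)
            if today <= when <= today + timedelta(days=window_days):
                flagged.append((name, when))
    return flagged

contracts = {
    "vendor_nda.txt": f"This agreement renews automatically on {date.today() + timedelta(days=30)} unless terminated.",
    "cloud_msa.txt": f"Service term expires on {date.today() + timedelta(days=400)}.",
}
print(upcoming_renewals(contracts))  # flags vendor_nda.txt only
```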

In practice, a leading US-headquartered private equity firm used Luminance to review nearly 1,000 documents, including NDAs, credit agreements, and fund documents. A project estimated to take two weeks manually was completed significantly faster, with over 350 LIBOR definition clauses identified upon upload. This kind of saving is instrumental to company success, particularly in such a competitive environment.

In an era where regulatory requirements are becoming more stringent and the consequences of non-compliance more severe, financial institutions must leverage AI to navigate the evolving compliance landscape and maintain a competitive edge in a challenging sector. The 60% of compliance staff who report burnout tell us a great deal about the landscape right now, and there is no reason to believe the challenge will ebb. Within a trend towards both financial transparency and environmental intervention that will only keep growing, taking steps now will be key to business continuity tomorrow. Adopting AI-driven solutions enables compliance teams to keep pace with regulation, even as it rapidly changes and evolves.


