The generative AI revolution is here – but is your cloud network ready to embrace it?
Paul Gampe, Chief Technology Officer, Console Connect
Generative Artificial Intelligence is inserting itself into nearly every sector of the global economy as well as many aspects of our lives. People are already using this groundbreaking technology to query their bank bills, request medical prescriptions, and even write poems and university essays.
In the process, generative AI has the potential to unlock trillions of dollars in value for businesses and radically transform the way we work. In fact, current predictions suggest generative AI could automate activities that account for up to 70 percent of employees’ time today.
But regardless of the application or industry, the impact of generative AI can be most keenly felt in the cloud computing ecosystem.
As companies rush to leverage this technology in their cloud operations, it is essential to first understand the network connectivity requirements – and the risks – before deploying generative AI models safely, securely, and responsibly.
Data processing
One of the primary connectivity requirements for training generative AI models in public cloud environments is affordable access to datasets at scale. By their very definition, large language models (LLMs) are extremely large. Training these LLMs requires vast amounts of data and hyper-fast compute, and the larger the dataset, the greater the demand for computing power.
The enormous processing power required to train these LLMs is only one part of the jigsaw. You also need to manage the sovereignty, security, and privacy requirements of the data transiting your public cloud. Given that 39 percent of businesses experienced a data breach in their cloud environment in 2022, it makes sense to explore the private connectivity products on the market that have been designed specifically for high-performance and AI workloads.
Regulatory trends
Companies should pay close attention to the key public policies and regulatory trends that are rapidly emerging around the AI landscape. Think of a large multinational bank in New York that keeps its primary computing capacity on 50 mainframes on its premises: it wants to run AI analysis on that data, but it cannot use the public internet to connect to public cloud environments because many of its workloads carry regulatory constraints. Instead, private connectivity gives it a route to where the generative AI capability exists while staying within the regulatory frameworks of the financial industry.
Even so, the maze of regulatory frameworks globally is very complex and subject to change. The developing mandates of the General Data Protection Regulation (GDPR) in Europe, as well as new GDPR-inspired data privacy laws in the United States, have taken a privacy-by-design approach whereby companies must implement techniques such as data mapping and data loss prevention to make sure they know where all personal data is at all times and protect it accordingly.
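To make "data mapping" and "data loss prevention" a little more concrete, here is a minimal sketch of the kind of scan such tooling performs. The data stores, field names, and regex patterns are illustrative assumptions, not a reference implementation.

```python
import re

# Illustrative PII detectors only; real DLP tooling uses far richer classifiers.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_record(record: dict) -> set:
    """Return the set of PII types detected anywhere in a record."""
    found = set()
    for value in record.values():
        if isinstance(value, str):
            found.update(name for name, pat in PII_PATTERNS.items() if pat.search(value))
    return found

# Hypothetical data stores; the goal is a simple "data map" of where personal data lives.
stores = {
    "crm_eu_west": [{"contact": "jane@example.com", "note": "renewal due"}],
    "payments_us_east": [{"memo": "card 4111 1111 1111 1111 on file"}],
}

data_map = {
    name: sorted(set().union(*(scan_record(r) for r in records)))
    for name, records in stores.items()
}
print(data_map)  # {'crm_eu_west': ['email'], 'payments_us_east': ['card_number']}
```

A data map like this is the starting point for knowing where personal data sits and, therefore, what must stay behind private connectivity or within a given jurisdiction.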
Sovereign borders
As the world becomes more digitally interconnected, the widespread adoption of generative AI technology will likely create long-lasting challenges around data sovereignty. This has already prompted nations to enact their own legislation regulating where data can be stored and where the LLMs processing that data can be housed.
Some national laws require certain data to remain within the country’s borders, but this does not necessarily make it more secure. For instance, if your company uses the public internet to transfer customer data to and from a public cloud service in London, that data can still be intercepted and routed elsewhere around the world, even though it may appear to be travelling only within London.
As AI legislation continues to expand, the only way your company can be sure of staying within its sovereign border may be to use a form of private connectivity while the data is in transit. The same applies to training AI models on the public cloud: companies will need some form of private connectivity from their private cloud to the public cloud where they train their models, and can then use that same connectivity to bring their inference models back.
Latency and network congestion
Latency is a critical factor in any interaction with people. We have all become latency-sensitive, especially given the volume of voice and video calls we experience daily, but the massive datasets used for training AI models can lead to serious latency issues on the public cloud.
For instance, if you’re chatting with an AI bot that’s providing customer service and latency begins to exceed 10 seconds, the dropout rate accelerates. Using the public internet to connect your customer-facing infrastructure with your inference models is therefore potentially hazardous for a seamless online experience, and a change in response time could impact your ability to provide meaningful results.
Network congestion, meanwhile, could impact your ability to build models on time. If you have significant congestion in getting fresh data into your LLMs, it will start to backlog, and you won’t achieve the learning outcomes you’re hoping for. The way to overcome this is to have large enough pipes to ensure you don’t encounter congestion when moving your primary datasets to where you’re training your language model.
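To put the "large pipes" point in rough numbers, the back-of-envelope sketch below estimates how long it takes to move a training dataset over different links. The dataset size, link speeds, and utilisation figure are illustrative assumptions, not measurements.

```python
def transfer_time_hours(dataset_tb: float, link_gbps: float, utilisation: float = 0.7) -> float:
    """Rough time to move a dataset over a network link.

    dataset_tb   -- dataset size in terabytes
    link_gbps    -- nominal link speed in gigabits per second
    utilisation  -- fraction of the link you realistically get (congestion, protocol overhead)
    """
    bits = dataset_tb * 8e12                       # TB -> bits (decimal units)
    seconds = bits / (link_gbps * 1e9 * utilisation)
    return seconds / 3600

# Illustrative comparison: a congested shared path vs a dedicated private circuit.
for label, gbps in [("1 Gbps shared", 1), ("10 Gbps dedicated", 10)]:
    print(f"{label}: ~{transfer_time_hours(100, gbps):.0f} hours for a 100 TB dataset")
```

Even with generous assumptions, the difference between days and weeks of transfer time is decided by the size of the pipe, which is why congestion directly affects how quickly fresh data reaches the training environment.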
Responsible governance
One thing everybody is talking about right now is governance. In other words, who gets access to the data, and where can the approval of that access be traced?
Without proper AI governance, companies could face serious consequences, including commercial and reputational damage. A lack of supervision when implementing generative AI models on the cloud could easily lead to errors and violations, not to mention the potential exposure of customer data and other proprietary information. Simply put, the trustworthiness of generative AI depends entirely on how companies use it.
Examine your cloud architecture
Generative AI is a transformative field with untold opportunities for countless businesses, but IT leaders cannot afford to get their network connectivity wrong when deploying its applications.
Remember, data accessibility is everything when it comes to generative AI, so it is essential to define your business needs in relation to your existing cloud architecture. Rather than navigating the risks of the public cloud, forward-thinking companies can gain a first-mover advantage from the high-performance flexibility of a Network-as-a-Service (NaaS) platform.
The agility of NaaS connectivity makes it simpler and safer to adopt AI systems by interconnecting your clouds with a global network infrastructure that delivers fully automated switching and routing on demand. What’s more, a NaaS solution also incorporates the emerging network technology that supports the governance requirements of generative AI for both your broader business and the safeguarding of your customers.
How the BPO sector is tackling the surge in fraud across US banking
Source: Finance Derivative
Hans Zachar, Group Chief Information Officer at Nutun
Fraud in the U.S. banking industry is on the rise, driven by the rapid shift towards digital banking by traditional banks coupled with the emergence of neobanks. This trend is not only increasing costs, but also eroding consumer trust and negatively impacting customer experience (CX). According to the latest annual LexisNexis® True Cost of Fraud™ Study: Financial Services and Lending Report — U.S. and Canada Edition, 63% of financial firms reported a fraud increase of at least 6% over the past year, with digital channels contributing to half of all fraud losses.
The study also highlighted the steep financial toll, revealing that for every dollar lost to fraud, North American financial institutions incur $4.41 in total costs. U.S. investment firms and credit lenders have seen the financial impact of fraud rise by 9% year-over-year. Alarmingly, 79% of respondents noted that fraud has also made it harder to earn consumer trust.
The fraudster’s playbook
With the wealth of personal customer data out there, fraudsters are becoming more adept at breaching security verification checks. For example, with customer data showing up in multiple breaches, fraudsters can collate data across sources to build a more complete picture of a person, putting them in a position to answer knowledge-based authentication questions, often better than the individual themselves.
Despite the increased awareness, there has been a recent shift in modus operandi whereby criminals impersonate the fraud department of a customer’s bank and ask them to share their one-time PIN (OTP). They know your name, address, and credit card digits, and trigger an SMS from the bank to capture the OTP. With this information, they can access a customer’s account and engage in account origination and transactional fraud.
The situation is worse than ever, with the TransUnion State of Omnichannel Fraud Report for H2 2024 indicating that the sector experienced $3.2 billion in lender exposure to suspected synthetic identities for U.S. auto loans, credit cards, retail credit cards and personal loans at the end of June 2024, which was the highest level ever recorded.
How technology is reshaping fraud landscapes
Technology is aiding and abetting criminals, with artificial intelligence (AI) increasingly used to circumvent multi-factor authentication (MFA). For instance, fraudsters now create deepfakes across voice and video channels to pass biometric authentication. The 2023 Sumsub Identity Fraud Report revealed a 10-fold increase in the number of deepfakes detected globally across all industries from 2022 to 2023, with a staggering 1,740% deepfake surge in North America. The report identified AI-powered fraud, money-muling networks, fake IDs, account takeovers and forced verification as the top risks.
In this regard, Deloitte’s Center for Financial Services predicts that GenAI could enable fraud losses to reach $40 billion in the United States by 2027, up from $12.3 billion in 2023, representing a compound annual growth rate of 32%.
In response, banking institutions are combining a risk-based and data-driven approach to fraud management, leveraging the capabilities of cutting-edge technologies such as AI, machine learning (ML), and biometric and behavior-based authentication methods. However, banks need to balance the cost of implementing more effective and stringent fraud risk mitigation against the need to avoid compromising customer service and CX. To this end, many banks are investing in advanced technologies to monitor transactions in real time, using more sophisticated processes to assess risk at the level of an individual transaction on an account, for example by analyzing transaction flow and originating IP addresses.
With these insights, the bank can decide what to do with a transaction, either validating it, sending an automated SMS to confirm the action, or diverting the transaction to a customer call or contact center for authentication.
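As a rough sketch of this kind of risk-based routing, the example below scores a transaction on a few of the signals mentioned above and picks one of the three responses. The weights, thresholds, and signals are illustrative assumptions, not any bank's actual rules.

```python
from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float
    originating_ip_country: str
    account_home_country: str
    device_seen_before: bool

def risk_score(tx: Transaction) -> float:
    """Toy risk score built from simple flow and originating-IP style signals."""
    score = 0.0
    if tx.originating_ip_country != tx.account_home_country:
        score += 0.4                      # traffic originating from an unexpected country
    if not tx.device_seen_before:
        score += 0.3                      # unfamiliar device
    if tx.amount > 5_000:
        score += 0.3                      # unusually large transfer
    return score

def route(tx: Transaction) -> str:
    """Decide what to do with the transaction: validate, confirm via SMS, or escalate."""
    score = risk_score(tx)
    if score < 0.3:
        return "validate"                 # low risk: let it through
    if score < 0.7:
        return "send_confirmation_sms"    # medium risk: automated SMS confirmation
    return "divert_to_contact_center"     # high risk: human authentication

print(route(Transaction(8_000, "RO", "US", device_seen_before=False)))  # divert_to_contact_center
```

The interesting operational question is what happens in that last branch: every escalation lands in a contact center queue, which is where the volume pressures described next come from.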
However, despite the technology that banks have in place, the volumes are causing backlogs in contact centers, which is affecting CX and creating friction in the customer journey. Banks need the capabilities to interact with customers in more efficient and cost-effective ways to tackle the full volume of potentially fraudulent transactions. For these reasons, many banks and lenders are turning to the global Business Process Outsourcing (BPO) sector to tap into readily available CX and security skills, expertise and technological capabilities.
The importance of BPO banking for financial institutions in the digital era
Banks need a BPO provider that not only has a comprehensive understanding of the financial sector, but also effectively manages costs by utilising the most efficient and budget-friendly methods to engage with customers, focusing on text and voice interactions. After a fraudulent transaction has occurred, banks require a robust system for managing disputes and supporting backend investigations. Banks must track transactions across different regions and time zones since there is no interbank switch available for fraud detection, often relying on human resources to compile transaction details and provide feedback to distressed customers.
To provide compassionate and empathetic support after a fraud case, it is essential to have well-trained agents equipped with real-time information who can guide affected customers through the entire process. A poor experience or a lack of care can significantly impact customer retention rates. However, establishing these capabilities and developing agent expertise within in-house contact centers can be expensive, especially as fraud incidents continue to rise.
Banks that find a global BPO provider with a powerful combination of fraud detection technology, omnichannel engagement features, trained and experienced agents, and fraud investigators will gain significant advantages, such as continuous monitoring and industry-leading issue resolution. This approach strikes a balance between cost-effective, efficient fraud mitigation and high-quality customer service, while adhering to stringent data privacy and regulatory standards.
Using technology to safeguard against fraud this holiday season
Source: Finance Derivative
Tristan Prince, Product Director, Fraud & Financial Crime, Experian
The holiday season brings with it a surge in consumer spending, with UK shoppers expected to part with an impressive £28 billion this year. Unfortunately, this increased activity also draws the attention of cybercriminals looking to exploit vulnerabilities in security systems and personal data.
For financial institutions, the stakes have never been higher. With identity fraud on the rise and new regulations from the Payment Systems Regulator, there is a pressing need to ramp up fraud prevention measures. This season, businesses must leverage innovative technologies to protect their customers and ensure a safe shopping experience.
Fraud is on the rise
In recent years, the prevalence of fraud has reached new levels. Identity fraud alone has seen a 21% increase during the holiday season since 2021, with last year’s figures showing that 83% of all fraud cases were identity-related.
This alarming trend continues in 2024, with a 12.5% increase in identity fraud cases recorded in just the first half of the year. These statistics highlight a troubling reality: fraud is evolving, becoming more sophisticated and harder to detect.
Technology: the key to fighting fraud
Despite these challenges, financial institutions are not powerless. Advanced technology is playing a pivotal role in strengthening defences against fraud. From artificial intelligence (AI) to collaborative data networks, companies now have powerful tools at their disposal to outwit even the most determined criminals.
Artificial intelligence: a game-changer
AI has emerged as a cornerstone in modern fraud prevention strategies. By analyzing massive datasets in real time, AI can quickly identify unusual activity and potential fraud.
Here’s how AI is reshaping fraud detection (a brief code sketch of the real-time monitoring idea follows this list):
- Real-time monitoring: AI systems continuously monitor transactions, instantly identifying irregular patterns that could indicate fraud. This allows institutions to intervene before any damage is done.
- Behavioural insights: By examining customer behaviour, AI can detect deviations from typical spending habits, such as unexpected purchases or login attempts from unusual locations. These insights not only help prevent fraud but also improve the experience for legitimate customers by reducing unnecessary disruptions.
- Strengthened identity checks: AI-powered tools verify customer identities by cross-referencing data from various sources, ensuring transactions are carried out by the right individuals while minimising delays.
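Below is a minimal sketch of that real-time monitoring idea: flag a transaction when it deviates sharply from a customer's recent spending pattern. The window size, threshold, and statistics are illustrative assumptions, not a description of any vendor's production models.

```python
from collections import deque
from statistics import mean, stdev

class SpendingMonitor:
    """Flag transactions that deviate sharply from a customer's recent spending."""

    def __init__(self, window: int = 50, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)   # recent transaction amounts
        self.z_threshold = z_threshold

    def check(self, amount: float) -> bool:
        """Return True if the transaction looks anomalous for this customer."""
        flagged = False
        if len(self.history) >= 10:           # need some history before judging
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and (amount - mu) / sigma > self.z_threshold:
                flagged = True
        self.history.append(amount)
        return flagged

monitor = SpendingMonitor()
for amount in [25, 30, 18, 42, 27, 33, 22, 29, 31, 26, 24, 950]:
    if monitor.check(amount):
        print(f"Review transaction of £{amount}: outside typical spending pattern")
```

Production systems layer many such signals (device, location, merchant, velocity) and learn the thresholds from data, but the principle is the same: act on the deviation before the payment completes.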
Data sharing: strength in unity
In addition to AI, collaborative data sharing between financial institutions is proving to be a powerful weapon against fraud. By pooling insights on fraudulent activities and suspicious trends, companies can create a unified front to tackle threats more effectively.
The benefits of data collaboration:
- Broader visibility: Sharing information helps institutions detect fraud patterns that might otherwise go unnoticed within their own systems.
- Faster action: Real-time data exchange ensures that when one company flags a suspicious transaction, others can respond immediately, preventing further attacks.
Holiday security: a shared responsibility
The fight against fraud is a continuous battle. Although technology has made significant inroads in preventing financial crime, fraudsters are constantly refining their methods. This requires financial institutions to remain agile and invest in the latest innovations.
Encouragingly, advancements in fraud prevention are already yielding results. For example, the financial services sector successfully blocked £710 million worth of unauthorized fraud in the first half of 2024, thanks to cutting-edge solutions like AI and data-sharing networks.
Making the holidays safe for everyone
As the festive season gets underway, businesses must prioritize the safety of their customers. Through strategic use of technology, financial institutions can outpace fraudsters and protect consumers during one of the busiest shopping periods of the year.
By embracing innovation, fostering collaboration, and maintaining vigilance, companies can ensure that shoppers feel secure, and the spirit of the season remains intact. Together, we can make this festive season safer for everyone.
The Evolution of AI in Trading: Building Smarter Partnerships Between Humans and Machines
In these uncertain times of increasing and, perhaps most importantly, unprecedented volatility in the financial markets, it is no surprise that the integration of AI in trading has become a focal point of industry discussion. Today, we’re witnessing a fundamental shift in how traders approach markets against the backdrop of exponential growth in data complexity.
You get a sense that it’s the same story on trading desks worldwide. One cannot deny that the sheer volume and velocity of market-moving information has now surpassed human cognitive capacity. All of this means we’re at a critical inflection point.
If you look back, it’s clear that ever since the first algorithmic trading systems took root, we’ve been moving toward this moment. But as with most things in financial technology, the reality is somewhat more nuanced.
The Reality of Real-Time Analysis
Initially, many believed AI would simply replace human traders. Yet perhaps what we need here is some perspective. It is my view that we can expect AI to augment rather than replace human decision-making in trading. Think of it like this: machines will handle the heavy lifting of data processing and analysis while traders focus on final strategy.
Now, there’s a reason why leading trading houses are investing heavily in AI capabilities, and it is simply this: successful trading will increasingly rely on human-AI partnerships. At least that’s what our experience with the major trading institutions we work with indicates.
Risk Management in the AI Era
Let’s briefly look at risk management, where AI’s capacity for processing vast amounts of market data is nothing short of remarkable. What we’ve found using our own systems in-house is that risk management becomes more proactive when powered by AI. Again and again, we have seen how machine learning models can identify potential risks before they materialise, helping traders make better trading decisions and spot new opportunities that might otherwise not have surfaced.
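As a purely generic illustration of spotting risk before it fully materialises (this is not a description of Permutable AI's models; the window sizes, threshold, and price series are assumptions), a simple early-warning check might compare recent realised volatility with the preceding baseline:

```python
from statistics import pstdev

def returns(prices):
    """Simple percentage returns from a price series."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]

def volatility_alert(prices, short_window=5, long_window=20, ratio_threshold=2.0):
    """Flag when recent realised volatility jumps well above the preceding baseline."""
    r = returns(prices)
    if len(r) < long_window + short_window:
        return False                                        # not enough history to judge
    recent = pstdev(r[-short_window:])
    baseline = pstdev(r[-(long_window + short_window):-short_window])
    return baseline > 0 and recent / baseline > ratio_threshold

# Illustrative series: a quiet drift followed by sudden swings.
quiet = [100 + 0.1 * i for i in range(30)]
swings = [103, 97, 104, 96, 105]
print(volatility_alert(quiet + swings))   # True: recent swings dwarf the earlier baseline
```

Real systems blend many such signals with news and positioning data, but the point stands: the machine surfaces the anomaly early, and the trader decides what it means.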
So there it is. The keys to effective risk management lie in combining AI’s processing power with human judgment. And the good news is that, despite these technological advancements, it cannot be overstated just how important human experience remains.
The Evolution of The Human-AI Partnership
In this light, as long as we rely on markets driven by human behaviour, we’ll need human insight. And so, defining what is classed as effective AI integration is becoming vital, as is helping traders to understand both AI’s capabilities and limitations.
From our point of view it has been fascinating to witness the different reactions to embedding AI capabilities in trading – from keen early adopters willing to take a chance on something new all the way down to dinosaurs who prefer to rely on traditional methods and will inevitably be left behind as the race for AI supremacy intensifies.
Increasingly, we’re seeing successful traders embrace AI as a partner rather than a replacement. At the end of the day, markets are complex adaptive systems and those who will win will be those who use AI to enhance human decision-making.
As for the future, one cannot argue against the fact that AI will play an increasingly important role in trading. Even that feels like an understatement. Everywhere you look, trading firms are investing in AI capabilities – some far more quickly and deeply than others – and it’s without a doubt that this trend will continue exponentially.
Author Bio
Wilson Chan is the Founder of Permutable AI, a London-based fintech pioneering AI solutions for financial markets. With roots at Merrill Lynch and Bank of America, he bridges institutional trading expertise with cutting-edge technology. The company’s latest innovation, the Trading Co-Pilot, delivers real-time event-driven insights for traders, combining geopolitical, macroeconomic, and supply-side data.