
Less than a year until EMIR Refit: how can firms prepare? 

Source: Finance Derivative

Leo Labeis, CEO at REGnosys, discusses everything that financial institutions need to know about EMIR Refit and how they can prepare with Digital Regulatory Reporting (DRR).

  There is now less than a year until the implementation date for the much-anticipated changes to the European Markets Infrastructure Regulation (EMIR). The amendments, which are set to go live on 29 April 2024, represent an important landmark in establishing a more globally harmonised approach to trade reporting.   

Despite the fast-approaching deadline, concerns are growing around the industry’s preparedness: a recent survey from Novatus Advisory, for instance, found that 40% of UK firms have no plans in place for the changes.

  Much of the focus in 2022 was on implementation efforts for the rewrite of the Commodity Futures Trading Commission’s swaps reporting requirements (CFTC Rewrite), which went live on 5 December. Both the CFTC Rewrite and EMIR Refit are part of the same drive to standardise trade reporting globally. While EMIR Refit was originally anticipated to roll out first, implementation suffered from repeated delays to its technical specifications, in particular the new ISO 20022 format. The ISO 20022 mandate was eventually excluded from the first phase of the CFTC Rewrite, hence the earlier go-live date. 


  In parallel, the Digital Regulatory Reporting (DRR) programme has emerged as a key driving force in helping firms adapt to continually evolving reporting requirements. Having participated in the DRR build-up for their CFTC Rewrite preparations, how can firms leverage these efforts to comply with EMIR Refit in 2024?  

  

The drive to standardise post-trade 

  To understand the new EMIR requirements, it is important to first look at the two main pillars in the global push to greater reporting harmonisation.  

The first is the Committee on Payments and Market Infrastructures and the International Organization of Securities Commissions’ (CPMI-IOSCO) Critical Data Elements (CDE), first published in 2018 to work alongside other common standards including the Unique Product Identifier (UPI) and Unique Transaction Identifier (UTI). These provide harmonised definitions of data elements for authorities to use when monitoring over-the-counter (OTC) derivative transactions, allowing for improved transparency on the contents of a transaction and greater scope for the interchange of data across jurisdictions.

  The second is the mandating of ISO 20022 as the internationally recognised format for reporting transaction data. Historically, trade repositories required firms to submit data in a specific format that they determined, before applying their own data transformation for consumption by the regulators. The adoption of ISO 20022 under the new EMIR requirements changes that process by shifting the responsibility from trade repositories to the reporting firm, with the aim of enhancing data quality and consistency by reducing the need for data processing.  

  

Preparing for the new requirements with DRR 

  DRR is an industry-wide initiative to enable firms to interpret and implement reporting rules consistently and cost-effectively. Under the current process, reporting firms create their own reporting solution, inevitably resulting in inconsistencies and duplication of costs. DRR changes this by allowing market participants to work together to develop a standardised interpretation of the regulation and store it in a digital, openly accessible format.  

Importantly, firms that have already implemented the rewritten CFTC rules encoded in DRR will not have to build EMIR Refit from scratch. ISDA estimates that 70% of the requirements are identical across both regulations, meaning firms can leverage their work in each area and adopt a truly global strategy. DRR has already developed a library of CDE rules for the CFTC Rewrite, which can be directly re-applied to EMIR Refit. Even when those rules are applied differently between regimes, the jurisdiction-specific requirements can be encoded as variations on top of the existing CDE rule rather than in a silo.
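The layering pattern described above can be illustrated with a deliberately simplified sketch. The names and fields below are hypothetical, for illustration only – actual DRR rules are expressed in a dedicated domain-specific language on top of the industry Common Domain Model, not in Python:

```python
from dataclasses import dataclass

@dataclass
class Trade:
    notional: float
    currency: str
    effective_date: str  # ISO 8601, e.g. "2024-04-29"

# A common CDE rule, encoded once and shared across regimes.
def cde_notional_amount(trade: Trade) -> dict:
    """Harmonised CDE 'Notional amount' data element (illustrative)."""
    return {"notionalAmount": trade.notional,
            "notionalCurrency": trade.currency}

# Jurisdiction-specific reports are variations layered on the common
# rule, not parallel re-implementations of the whole requirement.
def cftc_report(trade: Trade) -> dict:
    report = cde_notional_amount(trade)
    report["reportingRegime"] = "CFTC"  # hypothetical extra field
    return report

def emir_report(trade: Trade) -> dict:
    report = cde_notional_amount(trade)
    report["reportingRegime"] = "EMIR"  # hypothetical extra field
    return report
```

The point of the structure is that a change to the shared CDE rule propagates to both regimes automatically, while regime-specific deltas stay small and isolated.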

Notably, the UPI, having been excluded from the first phase of the CFTC Rewrite roll-out, is mandated for the second phase due in January 2024. DRR will integrate this requirement, as well as others such as ISO 20022, and develop a common solution that can be applied across the CFTC Rewrite and EMIR Refit.

  As firms begin their own build, the industry should work together in reviewing, testing and implementing the DRR model. Maintaining the commitment of all DRR participants will strengthen the community-driven approach to building this reporting ‘best practice’ and serve as a template for future collaborative efforts.  

  

Planning for the long-term  

Although the recent CFTC Rewrite and next year’s EMIR Refit are the centre of focus for many firms, several more G20 regulatory reporting reforms are expected over the next few years. These include rewrites to the derivatives reporting regimes of the Australian Securities and Investments Commission (ASIC), the Monetary Authority of Singapore (MAS) and the Hong Kong Monetary Authority (HKMA), amongst others.

  Firms should therefore plan for the entire global regulatory reform agenda rather than prepare for each reform separately. Every dollar invested in reporting and data management will go further precisely because it is going to be spread across jurisdictions, easing budget constraints.  

Looking ahead, a key step in establishing a broad, long-term plan is for financial institutions to learn from their CFTC Rewrite preparation and consider how DRR can be positioned in their implementation. For example, firms should ask themselves which approach to testing and implementing DRR works best: via their own internal systems or through a third party? Firms should review what worked well in their CFTC Rewrite implementation and apply the successful methods to EMIR Refit. Doing so will give them a strong foundation for future updates in the years to come.



The generative AI revolution is here – but is your cloud network ready to embrace it?

Paul Gampe, Chief Technology Officer, Console Connect

Generative Artificial Intelligence is inserting itself into nearly every sector of the global economy as well as many aspects of our lives. People are already using this groundbreaking technology to query their bank bills, request medical prescriptions, and even write poems and university essays.

In the process, generative AI has the potential to unlock trillions of dollars in value for businesses and radically transform the way we work. In fact, current predictions suggest generative AI could automate up to 70 percent of employees’ time today.


But regardless of the application or industry, the impact of generative AI can be most keenly felt in the cloud computing ecosystem.

As companies rush to leverage this technology in their cloud operations, it is essential to first understand the network connectivity requirements – and the risks – before deploying generative AI models safely, securely, and responsibly.

Data processing

One of the primary connectivity requirements for training generative AI models in public cloud environments is affordable access to datasets at scale. By their very definition, large language models (LLMs) are extremely large. Training them requires vast amounts of data and hyper-fast compute, and the larger the dataset, the greater the demand for computing power.

The enormous processing power required to train these LLMs is only one part of the jigsaw. You also need to manage the sovereignty, security, and privacy requirements of the data transiting in your public cloud. Given that 39 percent of businesses experienced a data breach in their cloud environment in 2022, it makes sense to explore the private connectivity products on the market which have been designed specifically for high performance and AI workloads.

Regulatory trends

Companies should pay close attention to the key public policies and regulatory trends rapidly emerging around the AI landscape. Think of a large multinational bank in New York with 50 mainframes on its premises holding its primary computing capacity: it wants to run AI analysis on that data, but it cannot use the public internet to connect to these cloud environments because many of its workloads have regulatory constraints. Instead, private connectivity affords it the ability to reach where the generative AI capability exists while staying within the regulatory frameworks of the financial industry.

Even so, the maze of regulatory frameworks globally is very complex and subject to change. The developing mandates of the General Data Protection Regulation (GDPR) in Europe, as well as new GDPR-inspired data privacy laws in the United States, have taken a privacy-by-design approach whereby companies must implement techniques such as data mapping and data loss prevention to make sure they know where all personal data is at all times and protect it accordingly.

Sovereign borders

As the world becomes more digitally interconnected, the widespread adoption of generative AI technology will likely create long-lasting challenges around data sovereignty. This has already prompted nations to enact legislation regulating where data can be stored, and where the LLMs processing that data can be housed.

Some national laws require certain data to remain within the country’s borders, but this does not necessarily make it more secure. For instance, if your company uses the public internet to transfer customer data to and from London on a public cloud service, even though it may be travelling within London, somebody can still intercept that data and route it elsewhere around the world.

As AI legislation continues to expand, the only way your company will have assurance of maintaining your sovereign border may be to use a form of private connectivity while the data is in transit. The same applies to AI training models on the public cloud; companies will need some type of connectivity from their private cloud to their public cloud where they do their AI training models, and then use that private connectivity to bring their inference models back.

Latency and network congestion

Latency is a critical factor in interactions with people. We have all become latency-sensitive, especially with the volume of voice and video calls we experience daily, but the massive datasets used for training AI models can lead to serious latency issues on the public cloud.

For instance, if you’re chatting with an AI bot that’s providing you with customer service and latency begins to exceed 10 seconds, the dropout rate accelerates. Therefore, using the public internet to connect your customer-facing infrastructure with your inference models potentially jeopardises a seamless online experience, and a change in response time could impact your ability to provide meaningful results.

Network congestion, meanwhile, could impact your ability to build models on time. If you have significant congestion in getting your fresh data into your LLMs, it’s going to start to backlog, and you won’t be able to achieve the learning outcomes you’re hoping for. The way to overcome this is to have large pipes, ensuring you don’t encounter congestion when moving your primary datasets into where you’re training your language model.

Responsible governance

One thing everybody is talking about right now is governance. In other words, who gets access to the data, and where can the approval of that access be traced?

Without proper AI governance, there could be high consequences for companies that may result in commercial and reputational damage. A lack of supervision when implementing generative AI models on the cloud could easily lead to errors and violations, not to mention the potential exposure of customer data and other proprietary information. Simply put, the trustworthiness of generative AI all depends on how companies use it.

Examine your cloud architecture

Generative AI is a transformative field with untold opportunities for countless businesses, but IT leaders cannot afford to get their network connectivity wrong before deploying its applications.

Remember, data accessibility is everything when it comes to generative AI, so it is essential to define your business needs in relation to your existing cloud architecture. Rather than navigating the risks of the public cloud, the high-performance flexibility of a Network-as-a-Service (NaaS) platform can provide forward-thinking companies with a first-mover advantage.

The agility of NaaS connectivity makes it simpler and safer to adopt AI systems by interconnecting your clouds with a global network infrastructure that delivers fully automated switching and routing on demand. What’s more, a NaaS solution also incorporates the emerging network technology that supports the governance requirements of generative AI for both your broader business and the safeguarding of your customers.


How tech can tackle the manufacturing skills shortage

By Mikko Urho, CEO, Visual Components

Manufacturers can no longer call upon a constant supply of readily available workers. In fact, the UK skills shortfall is at its most severe level since 1989. A perfect storm of factors such as the cost-of-living crisis, Brexit, the pandemic, continued economic instability and shifting age demographics has exacerbated the issue.

Now, over three-quarters (77%) of employers are struggling to fill available roles. Without these skills, firms will be severely hindered in their ability to commission, design and optimise their production systems, including any robotic technology they bring in. What actions must organisations take now to prevent their manufacturing lines from being disrupted, or worse, suffering a full shutdown?

Helping under-fire teams

As talent pipelines diminish, manufacturers must explore other ways of addressing the growing skills gap. Technology holds promise. Robots can undertake a range of functions that previously fell under the responsibility of staff. Unlike humans, robots don’t tire throughout the day, so the risk of mistakes is much lower. It’s also harder for humans to replicate exactly the same level of accuracy when completing a manual task many times. Modern-day robot deployments can complete welding, cutting, painting and other processes with ease.

However, for robots to handle these tasks fully on behalf of humans, they have to be manually programmed. In a survey of manufacturing decision-makers in the UK undertaken by Visual Components, over half (55%) state that manual programming is a necessity to complete welding, cutting, painting and other tasks. This requires a specific human skill set and demands considerable time from the people involved.

Over a third (35%) of manufacturers say that the manual process takes between a week and a month, leaving robots completely idle before they can provide value. It might be even longer if it needs to be replicated across a number of robots from different providers. How can manufacturers set their robots to task straight away?

Building new skills

Robot offline programming (OLP) brings the robot and its work cell into the digital environment. In an intuitive simulated interface, movements and workflows are accurately replicated. Full testing can take place in a sandbox environment before anything is deployed in the real world. Common programming issues around collision avoidance and joint-limit violations can be fully avoided.

OLP provides a number of advantages to manufacturers. Instead of a slower sequential process of programming followed by deployment, concurrent planning allows the two to take place at the same time. The software can identify different features in a workpiece or specific component, including pockets and holes, and incorporate them into a programming procedure. Even more crucially, its straightforward interface means that employees can easily upskill in robot programming, effectively plugging the skills gap.

It’s a logical and intuitive solution that can encourage novice users or new recruits to get up to speed. There’s even an opportunity for them to learn how to deploy different robot brands, with functionality across all the major providers. This further broadens the knowledge of staff and prepares them for future integrations.

Many businesses are also adopting remote working practices, and OLP can be incorporated to suit this strategy. Staff can access the system from anywhere, preventing them from needing to be on-site. Not only do manufacturers tackle staff shortages, but can encourage greener practices with dispersed workforces. And lastly, the technology futureproofs the business against employee departures. With all knowledge stored safely within the software, organisations also protect themselves from the risk of skilled staff leaving or retiring, where they would otherwise take their expertise with them. 

Grasping the opportunities

The skills crisis is a significant challenge for UK manufacturers, but it also opens doors for innovation. As various socioeconomic factors intensify worker shortages, manufacturers need to adopt proactive measures to sustain productivity and competitiveness. Leveraging technology, especially through the implementation of robotics and OLP, offers a practical solution to address the skills gap.

OLP improves the efficiency and precision of robotic tasks and provides valuable upskilling opportunities for the workforce. With user-friendly software, even those new to the field can develop their skills and integrate robots into the production line, avoiding the costs and time associated with traditional methods.

While manufacturers may have limited control over the supply of highly skilled workers, they can certainly harness technology to empower their existing employees and drive transformation from within. Embracing these technological advancements mitigates the impact of the skills shortage and crucially positions manufacturers for future growth and innovation.


How businesses stand to lose more than they save with radical cost cutting

Source: Finance Derivative

Spokesperson: Benjamin Swails, Northern Europe General Manager

For years, my career was focussed on the next big conference, the customer meeting that required a flight and hotel stay, or the big customer dinner where the right bottle of wine really mattered. Since becoming the General Manager of Pleo’s Northern European business, my remit has expanded to understanding how much money we have coming in versus going out. Today, I’m asking whether my teams travel for the sake of travelling, or because it’s necessary. What are we spending on the tools and applications required to do the job, and what is the ROI? How many coffees is my team expensing every day? To some this might seem like overkill, but these details matter to me in 2024. And they should matter to you too.

That’s because, ahead of what’s expected to be a challenging year for UK business, a quarter of small and medium-sized enterprises (SMEs) are looking to reduce business spending in 2024. This is according to Pleo’s CFO Playbook for 2024, which polled over 500 UK financial decision makers. When it comes to where these spending cuts will manifest most strongly, 1 in 5 UK businesses are exploring reducing pay for remote workers – a decision that has the potential to impact 16% of full-time British workers. With two in five (41%) businesses asking their teams to come into the office more, it’s obvious that business leaders are keen to bring back in-person collaboration and make the most of costly office rents. But is reducing pay for remote workers really the answer?

Before they sign off on spending decisions that can have potentially damaging ramifications for employee morale, businesses must first bring some clarity to their spending oversight and find the balance between a leaner business and one that still operates a flexible culture. This means having a tighter rein on spending – including deeper insights and fewer spending blind spots – to reduce the need for radical cost-cutting strategies. Because in 2024, details matter.

Why there is a need to reduce spending

The past few years have undoubtedly been a challenge for UK SMEs. In late 2023, for the first time in over a decade, more businesses were closing down than starting up. Fast forward and 2024 has kicked off with similar uncertainty. Encouraging EY forecasts expect the UK economy to grow 0.9% this year, up from the 0.7% growth projected in October’s Autumn Forecast – while GDP growth expectations for 2025 have been upgraded from 1.7% to 1.8%. But, less than a month on, the UK finds itself in a recession.

This has increased the pressure on organisations to reduce spending for the year ahead. However, only a third (34%) of UK businesses feel they’ve got an excellent grip on managing their spending, and just 28% feel they have strong visibility of their financial health and performance. Yet, curiously, almost 50% of UK businesses believe 2024 will be “easier” than 2023 – something that, in light of the challenges businesses face and the lack of significant investment in spending visibility and performance, is hard not to interpret as wishful thinking. Without comprehensive spending oversight to guide them, businesses risk flying blind in their quest to cut costs.

Cost cutting shouldn’t be a Hail Mary

Let’s use the notion of reduced pay for remote workers as a case study in making spending decisions without spending oversight. Renewed calls for workers to return to the office are one thing, but this feels more like a financial misfire that declares the contribution of remote workers less valuable. Pleo is currently thinking about the role of its own office space. But, crucially, we don’t plan on putting financial pressure on those who prefer to work from home. Instead, we’re thinking bigger and evaluating our office needs for all London-based staff. This ensures we can save money on rent, not people, before investing it into amenities our team wants.

Many of our employees are still working remotely and while, in a perfect world, I would love to see 80% of our team come into the office to help contribute to the culture that makes Pleo so special, we need to strike a balance of office requirement and productivity preferences, and keep our culture intact as we do so. Ultimately all of our employees need to feel valued.

As businesses strive to streamline their spending, the decisions made at the collective level are likely to impact individuals most – from work models and colleagues to pay and progression. And so before making such drastic spending cuts, businesses need to ask themselves how they can manage spending better. Not with broad strokes, but by looking at the detail. And this starts with more comprehensive spending oversight across multiple departments and activities.

Where to start with cost consolidation

Though streamlining costs might present some businesses with a significant shift, it is worth the effort. Better spend management offers an opportunity to truly unlock enhanced efficiency and resilience.

One area of opportunity set to become more important in 2024 is addressing technology investments and tool consolidation. We know that digital transformation is well underway for many businesses, yet consolidating platforms and software is languishing towards the bottom of the priority list. Only 16% in the UK see it as a big ambition for 2024 – something they might want to revisit, given that the average worker juggles 9 tools every day. Such ‘digital overwhelm’ is a concern not only for the workforce and productivity, but for budgets too.

Another opportunity for consolidation isn’t necessarily about cost, but mindset. Too often, businesses conceive of spend and expenses as two separate things: the former more likely to be high-value items such as office rent, ad spend and international business travel; the latter more likely to be smaller cost items like coffees, office supplies and local travel costs. In fact, while only 19% of businesses think of expenses and spend as the same thing, just 27% of organisations have clear guidelines on what separates them – potentially opening up a black hole of unaccounted outgoings.

At the end of the day, businesses just want to know how much they have coming in vs going out. Whether it’s an expense or spend, it’s all outgoing. And when 25% of decision makers say they use different platforms, this fractured view of company outgoings is allowing a lot to slip through the cracks.

The priority of pocket repair

There is no doubt that UK businesses face a challenging 12 months ahead. In order to focus on revenue growth and filling their pockets in the coming months, business leaders first need to check there aren’t any holes in them. This means ensuring their spending oversight is exhaustive and leaves no stone unturned – and no finance strategy half-baked.

This is how businesses can reduce business spending and, crucially, avoid doing so as part of a trade-off with working culture and productivity. Because without financial oversight and strategy, ill-conceived cost cutting will remain a bigger risk and could potentially end up costing business leaders in more ways than one.


Copyright © 2021 Futures Parity.