

How data virtualisation can bring digital transformation to banking

Source: Finance Derivative

Charles Southwood, Regional VP – Northern Europe and MEA at Denodo

The financial industry is no stranger to disruption and change, but there’s no doubt that right now, it is more disrupted than ever. Indeed, the Capgemini Research Institute’s recent World Retail Banking Report sums up the current banking landscape as volatile, uncertain, complex and ambiguous. Underlying this disruption is the desire and need to digitally transform.

In recent years we’ve seen a boom in the sheer amount of data and personal information that financial services (FS) firms have been collecting and accessing. The data insights available have also become more complete. For example, with COVID-19 driving the market to cashless payments for almost everything, the data on our payment patterns is now far more comprehensive. This data has the potential to unlock a raft of new growth opportunities, from increased revenue to enhanced customer services. But it also presents some challenges, which need to be addressed before financial companies can reap the rewards.

Overcoming data hurdles with data virtualisation

The era of open banking is well underway, making digital capabilities the foundation of success and elevating the importance of quality data management architecture. With FinTech start-ups and challengers muscling into the market with digitally native services and disrupting incumbent institutions in the process, there is now a pressing need for financial firms to revolutionise the delivery, integration and utilisation of their own data, harnessing it to drive better business outcomes and customer experiences. It has never been more important for FS firms to capitalise on their data, but the disparate nature of that data, and its large, fast-changing volumes, make this ever more difficult to achieve. That’s where data virtualisation comes in.

Many of the challenges associated with digital transformation in the FS sector come down to establishing or improving the ability to manage data effectively: allowing agents to both access and understand data in order to stay competitive, whilst still protecting customers from data privacy breaches and complying with shifting industry regulations.

Data virtualisation is a modern approach to data integration. It provides a single, logical view of all data no matter where it originates or resides, which reduces the need to replicate data and gives financial institutions earlier, broader and more accurate visibility of their data, helping to bring digital transformation to fruition. Unlike traditional extract, transform and load (ETL) solutions, data virtualisation does not move and copy the data; it leaves the data in the source systems. This brings major advantages in agility and timeliness. Rather than replicating the data, it simply exposes an integrated view of it to data consumers. As business users access and navigate reports, data virtualisation fetches the data in real time from the underlying source systems – delivering speed, agility and accuracy through the seamless connection of data.
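
To make the distinction concrete, here is a deliberately simplified Python sketch of the two approaches. It is an illustration only, not Denodo’s product or API: the “source systems” are stand-in dictionaries and all names are hypothetical.

```python
# Minimal illustrative sketch of data virtualisation versus ETL.
# The "source systems" below are stand-ins for real databases/APIs;
# names and structures are hypothetical, not any vendor's product API.

# Two independent source systems, left where they are.
core_banking = {  # account_id -> balance
    "A-100": 2_500.00,
    "A-200": 740.50,
}
crm = {  # account_id -> customer name
    "A-100": "Asha Patel",
    "A-200": "Jon Meyer",
}

def etl_copy() -> dict:
    """Traditional ETL: extract, transform and load a *copy* into a warehouse.
    The copy starts going stale the moment it is written."""
    return {acc: {"customer": crm[acc], "balance": bal}
            for acc, bal in core_banking.items()}

def virtual_view(account_id: str) -> dict:
    """Data virtualisation: no copy is made; the integrated view is
    assembled from the live sources at the moment it is queried."""
    return {
        "customer": crm[account_id],          # fetched from the CRM in real time
        "balance": core_banking[account_id],  # fetched from core banking in real time
    }

if __name__ == "__main__":
    warehouse = etl_copy()
    core_banking["A-100"] = 1_900.00          # a new transaction lands in the source
    print(warehouse["A-100"]["balance"])      # 2500.0 -> the ETL copy is already stale
    print(virtual_view("A-100")["balance"])   # 1900.0 -> the virtual view is current
```

The point of the sketch is timing: the ETL copy is out of date as soon as the source changes, whereas the virtual view reflects the source systems at the moment it is queried.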

The future of finance is data-driven

The amount of data generated from every transaction and interaction is now simply staggering, and that is not set to change any time soon. In fact, IDC predicts that global data levels will reach 175 zettabytes by 2025, growing at a compound annual rate of 61%. As such, effective data management holds the key for FS firms looking to unlock new, agile ways of working and, ultimately, to achieve ongoing success.

Against this backdrop, data virtualisation is empowering FS firms to improve the overall performance and efficiency of their operations in a strategic manner: reducing costs, shrinking the cycle time for new projects, and continually enhancing business decisions through granular, real-time insights.

Many organisations around the world are already realising these benefits. For example, the Johannesburg Stock Exchange (JSE) has a data landscape made up of over 180 disparate data sources and requires the harmonious operation of over 120 different applications to function successfully and ensure accurate settlement of transactions. Many of these are small in data volumes but of high complexity, making data integration a top priority for the Exchange.

Previously, this data had to be sourced from a diverse array of data systems before being integrated to serve the needs of various business functions. The JSE used traditional batch-oriented ETL processes, but with the amount of data flowing through its systems, this was becoming a cumbersome and inefficient way to manage it. In high-speed trading environments, as we know, anything less than 100% accuracy, 100% of the time, is not acceptable.

To rectify this, the JSE implemented the Denodo platform, a data integration and management solution built on the principles of data virtualisation, with the aim of consolidating its intricate data landscape and aggregating data in real time to build a logical data layer. Using the platform, the JSE compiles all relevant information from its various systems into base views. The Exchange has built over 1,700 base views on top of these data sources, which are processed by transformation rules to produce derived and interface views. Now, the JSE’s data integration layer processes about 2 billion rows a month.
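
The layering described here can be pictured with a short, purely illustrative Python sketch: thin base views wrap individual source systems, and transformation rules on top of them produce the derived and interface views that reports and settlement checks consume. The view names, data and rules below are hypothetical, not the JSE’s actual implementation.

```python
# Hypothetical sketch of the view layering described above: base views wrap
# individual source systems, and transformation rules combine them into
# derived/interface views. Names, data and rules are illustrative only.

# Base views: one thin, untransformed view per source system.
def base_view_trades():
    return [
        {"trade_id": 1, "instrument": "ZAR-BOND", "quantity": 100, "price": 98.2},
        {"trade_id": 2, "instrument": "EQ-ABC", "quantity": 50, "price": 310.0},
    ]

def base_view_instruments():
    return {"ZAR-BOND": "Fixed Income", "EQ-ABC": "Equities"}

# Derived view: a transformation rule applied on top of the base views.
def derived_view_trade_values():
    instruments = base_view_instruments()
    return [
        {
            "trade_id": t["trade_id"],
            "asset_class": instruments[t["instrument"]],
            "value": t["quantity"] * t["price"],
        }
        for t in base_view_trades()
    ]

# Interface view: the shape that consumers (reports, settlement checks) actually see.
def interface_view_settlement_report():
    return sorted(derived_view_trade_values(), key=lambda row: row["trade_id"])

if __name__ == "__main__":
    for row in interface_view_settlement_report():
        print(row)
```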

As the digital transformation of the financial industry continues to gather speed, banks and other financial institutions must harness data management architectures such as data virtualisation to generate the insights that bring pace and agility to their operations and keep them ahead of fierce competition. The ability of data virtualisation to deliver significant ROI is demonstrable, and it should not be ignored in an industry where the margins between success and failure have never been thinner. The message for financial leaders is clear: data virtualisation is key to fulfilling digital transformation in banking.



Driving business success in today’s data-driven world through data governance

Source: Finance Derivative

Andrew Abraham, Global Managing Director, Data Quality, Experian

It’s a well-known fact that we are living through a period of digital transformation, where new technology is revolutionising how we live, learn, and work. However, it has also led to a significant increase in data. This data holds immense value, yet many businesses across all sectors struggle to manage it effectively. They often face challenges such as fragmented data silos, or lack the expertise and resources to leverage their datasets to the fullest.

As a result, data governance has become an essential topic for executives and industry leaders. In a data-driven world, its importance cannot be overstated. Governments and regulatory bodies are also rightly stepping up oversight of the digital world to protect citizens’ private and personal data, which means businesses must comply with several statutes more accurately and more frequently.

We recently conducted some research to gauge businesses’ attitudes toward data governance in today’s economy. The findings are not surprising: 83% of those surveyed acknowledged that data governance should no longer be an afterthought and could give them a strategic advantage. This is especially true for gaining a competitive edge, improving service delivery, and ensuring robust compliance and security measures.

However, the research also showed that businesses face inherent obstacles when it comes to managing data effectively and responsibly throughout its lifecycle, including difficulties with integration and scalability, and poor data quality.

So, what are the three fundamental steps to ensure effective data governance?

Regularly reviewing data governance approaches and policies

Understanding your whole data estate, having clarity about who owns the data, and implementing rules to govern its use means you can assess whether you are operating efficiently and identify where to drive operational improvements. To do that effectively, you need the right data governance framework. Implementing a robust data governance framework allows businesses to ensure their data is fit for purpose, improve its accuracy, and mitigate the detrimental impact of data silos.
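
As a rough illustration of what such a framework captures in practice, the Python sketch below models a dataset with a named owner and a couple of automatically checkable fit-for-purpose rules. It is a minimal, hypothetical example, not Experian’s tooling or any particular product.

```python
# Minimal, hypothetical sketch of a data governance registry: each dataset
# has a named owner and simple fit-for-purpose rules that can be checked
# automatically. Illustrative only, not any vendor's tooling.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class GovernedDataset:
    name: str
    owner: str                                       # who is accountable for this data
    rules: List[Callable[[dict], bool]] = field(default_factory=list)

    def check(self, record: dict) -> list:
        """Return the names of any rules the record fails."""
        return [rule.__name__ for rule in self.rules if not rule(record)]

# Example rules: completeness and basic validity checks.
def has_email(record: dict) -> bool:
    return bool(record.get("email"))

def consent_recorded(record: dict) -> bool:
    return record.get("consent") in {"given", "withdrawn"}

customers = GovernedDataset(
    name="customers",
    owner="crm-team@example.com",   # hypothetical owner
    rules=[has_email, consent_recorded],
)

if __name__ == "__main__":
    print(customers.check({"email": "a@b.com", "consent": "given"}))  # [] -> fit for purpose
    print(customers.check({"email": "", "consent": None}))            # both rules fail
```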

The research also found that data governance approaches are typically reviewed annually (46%), with another 47% of businesses reviewing them more frequently. Whilst the right timeframe differs for each business, policies should be reviewed more often than once a year. Interestingly, 6% of the companies surveyed in our research keep theirs under continual review.

Assembling the right team

A strong team is crucial for effective cross-departmental data governance.  

The research identified that almost three-quarters of organisations, particularly in the healthcare industry, are managing data governance in-house. Nearly half of the businesses surveyed had already established dedicated data governance teams to oversee daily operations and mitigate potential security risks.

This strategic investment reflects a proactive approach to enhancing data practices in order to gain a competitive edge and improve financial performance, and it underlines the pivotal role of dedicated teams in upholding data integrity and compliance standards.

Choosing data governance investments wisely

With AI changing how businesses are run and increasingly seen as a critical differentiator, nearly three-quarters of those we surveyed said data governance is the cornerstone of better AI. Why? Effective data governance is essential for optimising AI capabilities: it improves data quality and strengthens access-control automation, metadata management, data security, and integration.

In addition, almost every business surveyed said it will invest in its data governance approaches in the next two years. This includes investing in high-quality technologies and tools and improving data literacy and skills internally.  

Regarding automation, the research showed that under half of businesses currently use automated tools or technologies for data governance, while 48% are exploring their options and 15% said they have no plans to do so.

This shows a clear appetite for data governance investment, particularly in automated tools and new technologies. These investments also reflect a proactive stance in adapting to technological change and ensuring robust data management practices that support innovation and sustainable growth.

Looking ahead

Ultimately, the research showed that 86% of businesses recognised the growing importance of data governance over the next five years. This indicates that effective data governance will only become more important in navigating digital transformation and regulatory demands.

This means businesses must address challenges like integrating governance into operations, improving data quality, ensuring scalability, and keeping pace with evolving technology to mitigate risks such as compliance failures, security breaches, and data integrity issues.

Embracing automation will also streamline data governance processes, allowing organisations to enhance compliance, strengthen security measures, and boost operational efficiency. By investing strategically in these areas, businesses can gain a competitive advantage, thrive in a data-driven landscape, and effectively manage emerging risks.



Aligning AI expectations with AI reality

By Nishant Kumar Behl, Director of Emerging Technologies at OneAdvanced

AI is transforming the way we work now and will continue to make great strides into the future. In many of its forms, it demonstrates exceptional accuracy and a high rate of correct responses. Some people worry that AI is too powerful, with the potential to cause havoc on our socio-political and economic systems. There is a converse narrative, too, that highlights some of the surprising and often comical mistakes that AI can produce, perhaps with the intention of undermining people’s faith in this emerging technology.

This tendency to scrutinise the occasional AI mishap, despite the technology’s frequent correct responses, overshadows its overall reliability and creates an unfairly high expectation of perfection. Given such a singular focus on failure, it is no surprise that almost 80% of AI projects fail within a year. Considering all of the hype around AI, and particularly GenAI, over the past few years, it is understandable that users feel short-changed when their extravagant expectations are not met.

We shouldn’t forget that a lot of the most useful software we all rely on in our daily working lives contains bugs. They are an inevitable and completely normal byproduct of developing and writing code. Take a look at the internet, awash with comments, forums, and advice pages to help users deal with bugs in commonly used Apple and Microsoft word processing and spreadsheet apps.

If we can accept blips in our workhorse applications, why are we holding AI to such a high standard? Fear plays a part here. Some may fear AI can do our jobs to a much higher standard than we can, sidelining us. No technology is smarter than humans. As technology gets smarter, it pushes humans to become smarter. When we collaborate with AI, the inputs of humans and artificial intelligence work together, and that’s when magic happens.

AI frees up more human time and lets us be creative, focusing on more fulfilling tasks while the technology does the heavy lifting. But AI is built by humans and will continue to need people asking the right questions and making connections based on our unique human sensibility and perception if it is to become more accurate, useful, and better serve our purpose.

The fear of failing to master AI implementation can be overwhelming for organisations, and in some cases people are right to be cautious. There is a tendency now to expect every technology solution to have integrated AI functionality for its own sake, which is misguided. Before deciding on any technology, users must first identify and understand the problem they are trying to solve and establish whether AI is indeed the best solution. Don’t be blinded by science, adopting bells and whistles that aren’t going to deliver the best results.

Uncertainty and doubt will continue to surround AI, but people should be reassured that there are many reliable, ethical technology providers developing safe, responsible, compliant AI-powered products. These organisations recognise their responsibility to develop products that offer long-term value rather than generating temporary buzz. By engaging directly with customers to understand their needs and problems, this customer-focused approach helps to identify whether AI can effectively address the issues at hand before proceeding down the AI route.

In any organisation, the leader’s job is to develop strategy, ask the right questions, provide direction, and often devise action plans. When it comes to AI, we will all need to adopt that leadership mindset in the future, ensuring we are developing the right strategy, asking insightful questions, and devising an effective action plan that enables the engineers to execute appropriate AI solutions for our needs.

Organisations should not be afraid to experiment with AI solutions and tools, remembering that every successful innovation involves some failure and frustration. The light bulb moments rarely happen overnight, and we must all adjust our expectations rather than demanding that AI offer a perfect solution. There will be bugs and problems, but the journey towards improvement will result in long-term and sustainable value from AI, from which everyone can benefit.

====

Nishant Kumar Behl is Director of Emerging Technologies at OneAdvanced, a leading provider of sector-focussed SaaS software, headquartered in the UK.



Machine Learning Interpretability for Enhanced Cyber-Threat Attribution

Source: Finance Derivative

By Dr. Farshad Badie, Dean of the Faculty of Computer Science and Informatics, Berlin School of Business and Innovation

This editorial explores the crucial role of machine learning (ML) in cyber-threat attribution (CTA) and emphasises the importance of interpretable models for effective attribution.

The Challenge of Cyber-Threat Attribution

Identifying the source of cyberattacks is a complex task due to the tactics employed by threat actors, including:

  • Routing attacks through proxies: Attackers hide their identities by using intermediary servers.
  • Planting false flags: Misleading information is used to divert investigators towards the wrong culprit.
  • Adapting tactics: Threat actors constantly modify their methods to evade detection.

These challenges necessitate accurate and actionable attribution for:

  • Enhanced cybersecurity defences: Understanding attacker strategies enables proactive defence mechanisms.
  • Effective incident response: Swift attribution facilitates containment, damage minimisation, and speedy recovery.
  • Establishing accountability: Identifying attackers deters malicious activities and upholds international norms.

Machine Learning to the Rescue

Traditional machine learning models have laid the foundation, but the evolving cyber threat landscape demands more sophisticated approaches. Deep learning and artificial neural networks hold promise for uncovering hidden patterns and anomalies. However, a key consideration is interpretability.

The Power of Interpretability

Effective attribution requires models that not only deliver precise results but also make them understandable to cybersecurity experts. Interpretability ensures:

  • Transparency: Attribution decisions are not shrouded in complexity but are clear and actionable.
  • Actionable intelligence: Experts can not only detect threats but also understand the “why” behind them.
  • Improved defences: Insights gained from interpretable models inform future defence strategies.

Finding the Right Balance

The ideal model balances accuracy and interpretability. A highly accurate but opaque model hinders understanding, while a readily interpretable but less accurate model provides limited value. Selecting the appropriate model depends on the specific needs of each attribution case.

Interpretability Techniques

Several techniques enhance the interpretability of ML models for cyber-threat attribution:

  • Feature Importance Analysis: Identifies the input data aspects most influential in the model’s decisions, allowing experts to prioritise investigations (see the sketch after this list).
  • Local Interpretability: Explains the model’s predictions for individual instances, revealing why a specific attribution was made.
  • Rule-based Models: Provide clear guidelines for determining the source of cyber threats, promoting transparency and easy understanding.
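
As a concrete illustration of the first of these techniques, the short Python sketch below trains a random forest on synthetic “incident” data and ranks the input signals by their learned importance. The feature names and data are invented for illustration; a real attribution pipeline would use curated threat-intelligence features.

```python
# Hypothetical sketch of feature importance analysis for attribution:
# a random forest is trained on synthetic "incident" features, and its
# feature importances show which signals most influenced its decisions.
# Feature names and data are invented for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
feature_names = ["proxy_hops", "working_hours_utc", "malware_family_overlap",
                 "infrastructure_reuse", "false_flag_strings"]

# Synthetic incidents: 300 samples, 5 features, labelled with 3 threat groups.
X = rng.normal(size=(300, len(feature_names)))
y = rng.integers(0, 3, size=300)
# Make two features genuinely informative so the importances are meaningful.
X[:, 2] += y            # malware_family_overlap correlates with the group
X[:, 3] += 0.5 * y      # infrastructure_reuse correlates more weakly

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Rank the input signals by how much they influenced the model's decisions.
for name, importance in sorted(zip(feature_names, model.feature_importances_),
                               key=lambda pair: -pair[1]):
    print(f"{name:24s} {importance:.3f}")
```

In practice, an analyst would use such a ranking to decide which signals (for example, infrastructure reuse) merit deeper manual investigation.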

Challenges and the Path Forward

The lack of transparency in complex ML models hinders their practical application. Explainable AI, a field dedicated to making models more transparent, holds the key to fostering trust and collaboration between human analysts and machine learning systems. Researchers are continuously refining interpretability techniques, with the ultimate goal of balancing model power with decision-making transparency.

