
How Turning Your Core Data into a Product Drives Business Impact

By Venki Subramanian, SVP of Product Management at Reltio

Data drives efficiencies, improves customer experience, enables companies to identify and manage risks, and helps everyone from human resources to sales make informed decisions. It is the lifeblood of most organisations today. Sometime during the last few years, however, organisations turned a corner from embracing data to fearing it as the volume spiralled out of control. By 2025, for example, it is estimated that the world will produce 463 exabytes of data daily compared to 3 exabytes a decade ago.

Too much enterprise data is locked up, inaccessible, and tucked away inside monolithic, centralised data lakes, lake houses, and warehouses. Since almost every aspect of a business relies on data to make decisions, accessing high-quality data promptly and consistently is crucial for success. But finding it and putting it to use is often easier said than done.

That’s why many organisations are turning to “distributed data” and creating “data products” to solve these challenges, especially for core data, which is any business’s most valuable data asset. Core data, or master data, refers to the foundational datasets that are used by most business processes and fall into four major categories: organisations, people (individuals), locations, and products. A data product is a reusable dataset used by analysts or business users for specific needs. Most organisations are undergoing massive digital and cloud transformations, and putting high-quality core data at the centre of these transformations, treating it as a product, can yield a significant return on investment.

The Inefficiency of Monolithic Data Architectures

Customer data is one example of core or master data that firms rely on to generate outstanding customer experiences and accelerate growth by providing better products and services to consumers. However, leveraging core customer data becomes extremely challenging without timely, efficient access. The data is often trapped inside monolithic, centralised data storage systems, which can result in incomplete, inaccurate, or duplicative information. Once hailed as the saviour of the data storage and management challenge, monolithic systems instead escalate these problems as the volume of data expands and the need for data-driven decisions becomes more urgent.

The traditional approach to these challenges is to extract the data from the systems of record and move it to different data platforms, such as operational data stores, data lakes, or data warehouses, before generating use case-specific views or data sets. The process becomes even less efficient because each use case ends up with its own data set, consumed by its own use case-specific technologies.

One inefficiency arises from the sheer complexity of such a landscape: data moves from many sources to various data platforms, use case-specific data sets are created, and multiple technologies are used for consumption. Core data for each domain, such as customer, is duplicated and reworked or repackaged for almost every use case instead of being represented consistently across use cases and consumption models (analytical, operational, and real-time).

There’s also a disconnect between the people who own the data and the subject matter experts who need it for decision-making. Data stewards and data scientists understand how to access data, move it around, and create models, but they are often unfamiliar with the specific use cases in the business. In other words, they are experts in data modelling, not in finance, human resources, sales, product management, or marketing. Because they are not domain experts, they may not understand the information needed for specific use cases, leading to frustration and data going unused. It is estimated, for example, that 20% or fewer of the models created by data scientists are ever deployed.

Distributed Data Architecture – An Elegant Solution to a Messy Problem

The broken promises of monolithic, centralised data storage have led to the emergence of a new approach: “distributed” data architectures, such as data fabric and data mesh. A data mesh can create a pipeline of domain-specific data sets, including core data, and deliver them promptly from their sources to consuming systems, subject matter experts, and end users.

These data architectures have arisen as a viable solution to the issues created by inaccessible data locked away in siloed systems or the rigid monolithic data architectures of the past. A data mesh decentralises the management and governance of data sets. It follows four core principles: domain ownership of data, treating data as a product and applying product principles to it, enabling a self-serve data infrastructure, and ensuring federated governance. These principles help data product owners create data products based on the needs of various data consumers, and help those consumers learn what data products are available and how to access and use them. Data quality, observability, and self-service capabilities for discovering data and metadata are built into these data products.

The rise of data products is helpful both for analytics and artificial intelligence and for general business uses. In either case the concept is the same: the dataset can be reused without a major investment of time or resources, dramatically reducing the time spent finding and fixing data. Data products can also be updated regularly, keeping them fresh and relevant. Some legacy companies have reported increased revenues or cost savings of over $100 million.

Trusted, Mastered Data as a Product

Data product owners need to create data products for core data so that it can be activated for key initiatives and consumed through various models in a self-serve manner. The typical pattern these data pipelines follow can be summarised in three stages: collect, unify, and activate.

The process starts with identifying the core data sets (data domains like customer or product) and defining a unified data model for them. Data product owners then identify the first-party data sources and the critical third-party data sets used to enrich the data. This data is assembled, unified, enriched, and provided to consumers via APIs so that it can be activated for different initiatives. Product principles, such as the ability to consume these data products in a self-service manner, customise the base product for different usage scenarios, and deliver regular enhancements to the data, are built into such data products.
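To make the collect-unify-activate pattern concrete, here is a minimal, illustrative Python sketch for a hypothetical customer domain. The source names, the use of email as a match key, and the merge rule are assumptions for illustration, not a description of any particular MDM product.

```python
from dataclasses import dataclass

@dataclass
class CustomerRecord:
    email: str   # assumed match key for this illustration
    name: str
    source: str  # provenance: which system contributed the record

def collect(sources: dict[str, list[dict]]) -> list[CustomerRecord]:
    """Gather raw records from first- and third-party sources."""
    return [CustomerRecord(r["email"].lower(), r["name"], src)
            for src, records in sources.items() for r in records]

def unify(records: list[CustomerRecord]) -> dict[str, dict]:
    """Merge duplicates into one golden record per match key, keeping provenance."""
    golden: dict[str, dict] = {}
    for rec in records:
        entry = golden.setdefault(rec.email, {"name": rec.name, "sources": []})
        entry["sources"].append(rec.source)
    return golden

def activate(golden: dict[str, dict]) -> list[dict]:
    """Expose the unified data product, e.g. behind an API, for consumers."""
    return [{"email": key, **value} for key, value in golden.items()]

# A CRM system and a third-party enrichment feed describe the same customer.
sources = {
    "crm":        [{"email": "Ada@example.com", "name": "Ada Lovelace"}],
    "enrichment": [{"email": "ada@example.com", "name": "A. Lovelace"}],
}
print(activate(unify(collect(sources))))
```

In a real pipeline each stage would handle many domains, richer matching rules, and continuous updates; the point of the sketch is simply that the same unified output can serve analytical, operational, and real-time consumers.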

Data product owners can use this framework to map out key company initiatives, identify the most critical data domains, and pin down the features (data attributes, relationships, and so on) and the first- and third-party data sources that need to be assembled. The result is a roadmap of data products aligned to business impact and the value delivered.

With data coming from potentially hundreds of applications and the requirements of data consumers constantly evolving, poor-quality data and a slow, rigid architecture can cost companies in many ways, from lost business opportunities to regulatory fines to the reputational risk of poor customer experience. That’s why organisations of all sizes and types need a modern, cloud-based master data management (MDM) approach that enables the creation of core data as products. A cloud-based MDM can reconcile data from hundreds of first- and third-party sources and create a single trusted source of truth for an entire organisation. Treating core data as a product turns it into a strategic asset and unlocks its immense potential to drive business impact.


What can the West learn from the Arabian Gulf’s payments revolution?

Hassan Zebdeh, Financial Crime Advisor at Eastnets

A decade ago, paying for coffee at a small café in Riyadh meant fumbling with cash – or, at best, handing over a plastic card. Today, locals casually wave smartphones over terminals, instantly settling the bill, splitting it among friends, and even transferring money abroad before their drink cools.

This seemingly trivial scene illustrates a profound truth: while the West debates incremental upgrades to ageing payment systems, the Arabian Gulf has leapfrogged straight into the future. As of late 2024, Saudi Arabia achieved a remarkable 98% adoption rate for contactless payments in face-to-face transactions, a significant leap from just 4% in 2017.

Align financial transformation with a bold national vision

One milestone that exemplifies the Gulf’s approach is Saudi Arabia’s launch of its first Swift Service Bureau. While not the first SSB worldwide, its presence in the Kingdom underscores a broader theme: rather than rely on piecemeal upgrades to older infrastructure, Saudi Arabia chose a proven yet modern route, aligned to Vision 2030, to unify international payment standards, enhance security, and reduce operational overhead.

And it matters: in a region heavily reliant on expatriate workers, whose steady stream of remittances powers whole economies, the stakes for frictionless cross-border transactions are unusually high. Rather than tinkering around the edges of an ageing system, Saudi Arabia opted for a bold and coherent solution, deliberately aligning national pride and purpose with practical financial innovation. It’s a reminder that infrastructure, at its best, doesn’t merely enable transactions; it reshapes how people imagine the future.

Make regulation a launchpad, not a bottleneck

Regulation often carries the reputation of an overprotective parent – necessary, perhaps, but tiresome, cautious to a fault, and prone to slowing progress rather than enabling it. It’s the bureaucratic equivalent of wrapping every new idea in bubble wrap and paperwork. Yet Bahrain has managed something rare: flipping the narrative entirely. Instead of acting solely as gatekeepers, Bahraini regulators decided to become collaborators. Their fintech sandbox isn’t merely a regulatory innovation; it’s psychological brilliance, transforming a potentially adversarial relationship into a partnership.

Within this curated environment, fintech firms have launched practical experiments with striking results. Take Tarabut Gateway, which pioneered open banking APIs, reshaping how banks and customers interact. Rain, a cryptocurrency exchange, tested compliance frameworks safely, quickly becoming one of the Gulf’s trusted crypto players. Elsewhere, startups trialled AI-driven identity verification and seamless cross-border payments, all under the watchful yet adaptive guidance of Bahraini regulators. Successes were rapidly scaled; failures offered immediate lessons, free from damaging legal fallout. Bahrain proves regulation, thoughtfully applied, can genuinely empower innovation rather than restrict it.

Prioritise cross-border interoperability and unified standards

Cross-border payments have long been a maddening puzzle – expensive, sluggish, and unpredictably complicated. Most Western banks seem resigned to this reality, treating the spaghetti-like mess of correspondent banking relationships as a necessary evil. Yet Gulf states looked at this same complexity and saw not just inconvenience, but opportunity. Instead of battling against the tide, they cleverly redirected it, embracing standards like ISO 20022, which neatly streamline data exchange and slash friction from global transactions.
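For a sense of what that streamlining means in practice, the Python sketch below builds a heavily simplified, ISO 20022-style credit transfer fragment. The element names follow the pacs.008 pattern, but the snippet is illustrative only: it is not schema-valid and omits most of the mandatory fields a real message carries.

```python
import xml.etree.ElementTree as ET

# Illustrative only: a pared-down ISO 20022-style credit transfer fragment.
# Real pacs.008 messages carry many more mandatory elements and are
# validated against the official XSD schemas.
doc = ET.Element("Document")
tx = ET.SubElement(ET.SubElement(doc, "FIToFICstmrCdtTrf"), "CdtTrfTxInf")
ET.SubElement(ET.SubElement(tx, "PmtId"), "EndToEndId").text = "INV-2024-001"
amount = ET.SubElement(tx, "IntrBkSttlmAmt", Ccy="SAR")
amount.text = "1500.00"
ET.SubElement(ET.SubElement(tx, "Dbtr"), "Nm").text = "Example Trading LLC"
ET.SubElement(ET.SubElement(tx, "Cdtr"), "Nm").text = "Example Supplier Co"
ET.SubElement(ET.SubElement(tx, "RmtInf"), "Ustrd").text = "Invoice 2024-001"

print(ET.tostring(doc, encoding="unicode"))
```

Because every party, amount, and remittance detail has a well-defined, machine-readable place, intermediaries along the chain no longer need to re-key or guess at free-text fields, which is where much of the cost and delay in correspondent banking comes from.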

Examples abound: Saudi Arabia’s adoption of ISO 20022 through its Swift Service Bureau is expected to notably accelerate cross-border transactions and improve transparency. The UAE and Saudi Arabia also jointly piloted Project Aber, a digital currency initiative that significantly reduced settlement times for interbank payments. Similarly, Bahrain’s collaboration with fintechs has simplified previously burdensome remittance processes, reducing both cost and complexity.

Target digital ecosystems for financial inclusion

One of the most intriguing elements of the Gulf’s payments transformation is the speed and enthusiasm with which consumers embraced new technologies. In Bahrain, mobile wallet payments surged by 196% in 2021, contributing to a nearly 50% year-over-year increase in digital payment volumes. Similarly, Saudi Arabia experienced a near tripling of mobile payment volumes in the same year, with mobile transactions accounting for 35% of all payments. 

The West, by contrast, still struggles with financial inclusion. In the U.S., millions remain unbanked or underbanked, held back by distrust, geographic isolation, and high fees. Digital solutions exist, but widespread adoption has lagged, partly because major institutions view inclusion as a long-term aspiration rather than an immediate priority. The Gulf shows that when digital tools are made integral to daily life, rather than optional extras, the barriers to financial inclusion quickly dissolve.

The road ahead

As the Gulf region continues to refine its payment systems, experimenting with digital currencies, advanced data protection laws, and AI-driven compliance, the ripple effects will be felt far beyond the GCC. Western players can treat these developments as an external threat or as a chance to rejuvenate their own approaches.

Ultimately, if you want a glimpse of where financial services may be headed (towards integrated platforms, real-time international transactions, and widespread digital inclusion), the Gulf experience is a prime example of what’s possible. The question is whether other markets will step up, follow suit, and even surpass these achievements. With global financial landscapes evolving at record speed, hesitation carries its own risks. The Arabian Gulf has shown that bold bets can pay off; perhaps that’s the most enduring lesson for the West.


Unlocking business growth with efficient finance operations

Rob Israch, President at Tipalti

The UK economy has faced a turbulent couple of years, meaning that, now more than ever, businesses need to stay agile. With Reeves’s national insurance hikes now fully in play and global trade tensions casting a shadow over the landscape, the coming months will present a crucial opportunity for businesses to decide how best to move forward.

That said, it’s not all doom and gloom. The latest official figures show that the UK’s economy unexpectedly grew at a rate of 0.5% in February, a welcome sign of resilience. But turning this momentum into sustainable growth will hinge on effective financial management, which is essential for long-term success.

Although many are currently prioritising stability, sustainable growth is still within reach with the right approach. By making use of data and insights from the finance team, companies can pinpoint efficient paths to expansion. However, this relies on having real-time information at their fingertips to support agile, well-timed decisions.

While growth may be tough to come by this year, businesses can stay on track by adopting a few essential strategies.

Improving efficiency by eliminating finance bottlenecks

Growth is the ultimate goal for any business, but it must be managed carefully to ensure long-term sustainability. Uncertain times present an opportunity to eliminate inefficiencies and build a strong foundation for future success.

A significant bottleneck for many businesses is the finance function’s reliance on manual processes for invoice processing, reporting and reconciliation. These tasks are not only time-consuming but also introduce errors, delays and inefficiencies. As a result, finance teams become stretched thin. Our recent survey found that, on average, over half (51%) of accounts payable time is spent on manual tasks – severely limiting finance leaders’ ability to drive strategic growth.

Repetitive tasks such as data entry, reconciliation, and approvals require considerable time and effort, slowing down decision-making and increasing the risk of inaccuracies. Given the critical role that finance plays in guiding business strategy, these inefficiencies and errors create significant roadblocks to growth.  

The pressure on finance leaders is therefore immense. While 71% of UK business leaders believe CFOs should take a central role in corporate growth initiatives, many CFOs are simply lost in a sea of manual processes and number crunching. In fact, 82% of finance leaders admit that excessive manual finance processes are hindering their organisation’s growth plans for the year ahead. To remedy this, businesses must embrace automation.

Achieving sustainable growth with automation

By replacing manual spreadsheets with automated solutions, finance teams can eliminate administrative burdens and focus on strategic initiatives. Automation simplifies critical finance tasks like bank feeds, coding bookkeeping transactions and invoice matching. Beyond this, it can also help alleviate the strain of more complex and time-intensive responsibilities, including tax filings, invoicing and payroll.
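As a simple illustration of the kind of rule an automated accounts payable tool applies, the Python sketch below matches invoices to purchase orders by PO number and flags amount mismatches for review. The field names and the 1% tolerance are assumptions for illustration, not a description of any specific product.

```python
# Illustrative two-way matching: pair each invoice with its purchase order
# by PO number, flagging mismatches beyond a tolerance for manual review.
TOLERANCE = 0.01  # assumed 1% amount tolerance; real policies vary

purchase_orders = {"PO-1001": 2500.00, "PO-1002": 480.00}
invoices = [
    {"id": "INV-77", "po": "PO-1001", "amount": 2500.00},
    {"id": "INV-78", "po": "PO-1002", "amount": 530.00},   # over by more than 1%
    {"id": "INV-79", "po": "PO-9999", "amount": 120.00},   # no such PO
]

for inv in invoices:
    expected = purchase_orders.get(inv["po"])
    if expected is None:
        status = "no matching PO - route to manual review"
    elif abs(inv["amount"] - expected) <= expected * TOLERANCE:
        status = "matched - approve for payment"
    else:
        status = "amount mismatch - route to manual review"
    print(f'{inv["id"]}: {status}')
```

The value is not in the rule itself, which is trivial, but in running it automatically across thousands of documents so that people only touch the exceptions.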

The benefits of automation extend far beyond time savings, to greater accuracy, improved business visibility and real-time financial insights. With fewer errors and faster data processing, finance leaders can shift their focus to high-value tasks like driving strategy, identifying risks and opportunities and determining the optimal timing for growth investments.

Attracting investors with operational efficiency 

Once businesses have minimised time spent on administrative tasks, they can focus on the bigger picture: growth and securing investment. With access to cheap capital becoming increasingly difficult, businesses must position themselves wisely to attract funding.  

Investors favour lean, efficient companies, so demonstrating that a business can achieve more with fewer resources signals a commitment to financial prudence and sustainability. By embracing automation, companies can showcase their ability to manage operations efficiently, instilling confidence that any new investment will be spent and used wisely.

Economic uncertainty provides an opportunity to reassess business foundations and create more agile operations. Refining workflows and eliminating bottlenecks not only improves performance but also strengthens investor confidence by demonstrating a long-term commitment to financial health.

Additionally, strong financial reporting and effective cash flow management are crucial to standing out to investors. Clear, real-time insights into financial health highlight a business’s resilience and readiness for growth.

The growth journey ahead

Though the landscape remains tough for UK businesses, sustainable growth is still achievable with a clear and focused strategy. By empowering finance leaders to step into more strategic, high-level decision-making roles, organisations can stay resilient and agile amid ongoing economic headwinds.

UK businesses have fought to stay afloat, so now is the time to rebuild strength. By embracing more strategic financial management to build resilience, they can set the stage for long-term, sustainable growth, whatever the economic climate brings.


The Consortium Conundrum: Debunking Modern Fraud Prevention Myths

By Husnain Bajwa, SVP of Product, Risk Solutions, SEON


As digital threats escalate, businesses are desperately seeking comprehensive solutions to counteract the growing complexity and sophistication of evolving fraud vectors. The latest industry trend – consortium data sharing – promises a revolutionary approach to fraud prevention, where organisations combine their data to strengthen fraud defences.

It’s understandable how the consortium data model presents an appealing narrative of collective intelligence: by pooling fraud insights across multiple organisations, businesses hope to create an omniscient network capable of instantaneously detecting and preventing fraudulent activities.

And this approach seems intuitive – more data should translate to better protection. However, the reality of data sharing is far more complex and fundamentally flawed. Overlooked hurdles reveal significant structural limitations that undermine the effectiveness of consortium strategies and prevent the approach from fulfilling its potential to safeguard against fraud. Here are several key misconceptions that explain why consortium approaches fail to deliver their promised benefits.


Fallacy of Scale Without Quality


One of the most persistent myths in fraud prevention mirrors the familiar trope of “enhancing” a low-resolution image to reveal detail that was never captured. There’s a pervasive belief that massive volumes of consortium data can reveal insights not present in any of the original signals. However, this represents a fundamental misunderstanding of information theory and data analysis.

To protect participant privacy, consortium approaches strip away information elements that are critical to fraud detection, including precise identifiers, nuanced temporal sequences and essential contextual metadata. The anonymisation required to make data sharing viable sacrifices granular signal fidelity, skewing the data and eroding its quality and reliability. The result is a sanitised dataset that bears little resemblance to the rich, complex information needed for effective fraud prevention. Reporting biases embedded by the different contributing entities exacerbate these quality issues further. Knowing where data comes from is imperative, and consortium data frequently lacks both freshness and provenance.

Competitive Distortion Is a Problem


Competitive dynamics can impact the efficacy of shared data strategies. Businesses today operate in competitive environments marked by inherent conflicts, where companies have strategic reasons to restrict their information sharing. The selective reporting of fraud cases, intentional delays in sharing emerging fraud patterns and strategic obfuscation of crucial insights can lead to a “tragedy of the commons” situation, where individual organisational interests systematically degrade the potential of consortium information sharing for the collective benefit.

Moreover, when direct competitors share data, organisations often limit their contributions to non-sensitive fraud cases or withhold high-value signals, which reduces the effectiveness of the consortium.

Anonymisation’s Hidden Costs


Consortiums are compelled to aggressively anonymise data to sidestep the legal and ethical concerns of operating akin to de facto credit reporting agencies. This anonymisation process encompasses removing precise identifiers, truncating temporal sequences, coarsening behavioural patterns, eliminating cross-entity relationships and reducing contextual signals. Such extensive modifications limit the data’s utility for fraud detection by obscuring the details necessary for identifying and analysing nuanced fraudulent activities.
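As a hedged illustration of what that looks like in code, the Python function below coarsens a hypothetical transaction event before sharing; the field names, hashing choice and bucketing are assumptions for illustration, not any consortium's actual scheme.

```python
import hashlib

def anonymise(event: dict) -> dict:
    """Illustrative consortium-style anonymisation of one transaction event."""
    return {
        # precise identifier replaced by an irreversible hash
        "account": hashlib.sha256(event["account"].encode()).hexdigest()[:12],
        # exact timestamp truncated to the day, destroying the temporal sequence
        "day": event["timestamp"][:10],
        # exact amount coarsened into a broad band
        "amount_band": "high" if event["amount"] > 1000 else "low",
        # device, merchant and counterparty context dropped entirely
    }

event = {
    "account": "ACME-4417",
    "timestamp": "2024-06-03T14:22:09Z",
    "amount": 1349.50,
    "device_id": "d-9f2",
    "merchant": "example-electronics",
}
print(anonymise(event))  # what remains is far less useful for detection
```

Each individual step is defensible on privacy grounds; it is their combined effect that leaves investigators with bands and buckets where they once had behaviour.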

These anonymisation efforts, needed to preserve privacy, also mean that vital contextual information is lost, significantly hampering the ability to detect fraud trends over time and diluting the effectiveness of such data. This overall reduction in data utility illustrates the profound trade-offs required to balance privacy concerns with effective fraud detection.

The Problem of Lost Provenance


In the critical frameworks of DIKA (Data, Information, Knowledge, Action) and OODA (Observe, Orient, Decide, Act), data provenance is essential for validating information quality, understanding contextual relevance, assessing temporal applicability, determining confidence levels and guiding action selection. However, once data provenance is lost through consortium sharing, it is irrecoverable, leading to a permanent degradation in decision quality.

This loss of provenance becomes even more critical at the moment of decision-making. Without the ability to verify the freshness of data, assess the reliability of its sources or understand the context in which it was collected, decision-makers are left with limited visibility into preprocessing steps and a reduced confidence in their signal interpretation. These constraints hinder the effectiveness of fraud detection efforts, as the underlying data lacks the necessary clarity for precise and timely decision-making.

The Realities of Fraud Detection Techniques


Modern fraud prevention hinges on well-established analytical techniques such as rule-based pattern matching, supervised classification, anomaly detection, network analysis and temporal sequence modelling. These methods underscore a critical principle in fraud detection: the signal quality far outweighs the data volume. High-quality, context-rich data enhances the effectiveness of these techniques, enabling more accurate and dynamic responses to potential fraud.
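To make the "quality over volume" point concrete, here is a minimal, illustrative sketch of two of the techniques named above: a rule-based check and a simple anomaly score computed against a customer's own history. The thresholds and field names are assumptions, and production systems are far more elaborate.

```python
from statistics import mean, pstdev

def rule_flags(txn: dict) -> list[str]:
    """Rule-based pattern matching on a single transaction."""
    flags = []
    if txn["amount"] > 5000:
        flags.append("large amount")
    if txn["country"] != txn["home_country"]:
        flags.append("out-of-country")
    return flags

def anomaly_score(amount: float, history: list[float]) -> float:
    """Z-score of a new amount against the customer's own history:
    a context-rich signal that a pooled, anonymised dataset cannot provide."""
    mu, sigma = mean(history), pstdev(history) or 1.0
    return abs(amount - mu) / sigma

txn = {"amount": 6200.0, "country": "FR", "home_country": "GB"}
print(rule_flags(txn))  # ['large amount', 'out-of-country']
print(round(anomaly_score(txn["amount"], [120.0, 80.0, 150.0, 95.0]), 1))
```

Both checks depend on precise amounts, identities and timelines; feed them the banded, truncated records described in the previous section and the signal largely disappears.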

Despite the rapid advancements in machine learning (ML) and data science, the fundamental constraints of fraud detection remain unchanged. The effectiveness of advanced ML models is still heavily dependent on the quality of data, the intricacy of feature engineering, the interpretability of models and adherence to regulatory compliance and operational constraints. No degree of algorithmic sophistication can compensate for fundamental data limitations.

As a result, the core of effective fraud detection continues to rely more on the precision and context of data rather than sheer quantity. This reality shapes the strategic focus of fraud prevention efforts, prioritising data integrity and actionable insights over expansive but less actionable data sets.

Evolving Into Trust & Safety: The Imperative for High-Quality Data


As the scope of fraud prevention broadens into the more encompassing field of trust and safety, the requirements for effective management become more complex. New demands, such as end-to-end activity tracking, cross-domain risk assessment, behavioural pattern analysis, intent determination and impact evaluation, all rely heavily on the quality and provenance of data.

In trust and safety operations, maintaining clear audit trails, ensuring source verification, preserving data context, assessing actions’ impact, and justifying decisions become paramount.

However, consortium data, anonymised and decontextualised to protect privacy and meet regulatory standards, fundamentally cannot support clear audit trails, source verification, preserved data context, or a ready assessment of the impact of actions to justify decisions. These limitations underscore the critical need for organisations to develop their own rich, contextually detailed datasets that retain provenance and can be applied directly to operational needs, ensuring that trust and safety measures are comprehensive, effectively targeted, and relevant.

Rethinking Data Strategies


While consortium data sharing offers a compelling vision, its execution is fraught with challenges that diminish its practical utility. Fundamental limitations such as data quality concerns, competitive dynamics, privacy requirements and the critical need for provenance preservation undermine the effectiveness of such collaborative efforts. Instead of relying on massive, shared datasets of uncertain quality, organisations should pivot toward cultivating their own high-quality internal datasets.

The future of effective fraud prevention lies not in the quantity of shared data but in the quality of proprietary, context-rich data with clear provenance and direct operational relevance. By building and maintaining high-quality datasets, organisations can create a more resilient and effective fraud prevention framework tailored to their specific operational needs and challenges.
