By Venki Subramanian, SVP of Product Management at Reltio
Data drives efficiencies, improves customer experience, enables companies to identify and manage risks, and helps everyone from human resources to sales make informed decisions. It is the lifeblood of most organisations today. Sometime during the last few years, however, organisations turned a corner from embracing data to fearing it as the volume spiralled out of control. By 2025, for example, it is estimated that the world will produce 463 exabytes of data daily compared to 3 exabytes a decade ago.
Too much enterprise data is locked up, inaccessible, and tucked away inside monolithic, centralised data lakes, lake houses, and warehouses. Since almost every aspect of a business relies on data to make decisions, accessing high-quality data promptly and consistently is crucial for success. But finding it and putting it to use is often easier said than done.
That’s why many organisations are turning to “distributed data” and creating “data products” to solve these challenges, especially for core data, which is any business’s most valuable data asset. Core data, or master data, refers to the foundational datasets that are used by most business processes and fall into four major categories – organisations, people (individuals), locations, and products. A data product is a reusable dataset used by analysts or business users for specific needs. Most organisations are undergoing massive digital and cloud transformations. Putting high-quality core data at the centre of these transformations, and treating it as a product, can yield a significant return on investment.
Customer data is one example of core or master data that firms rely on to generate outstanding customer experiences and accelerate growth by providing better products and services to consumers. However, leveraging core customer data becomes extremely challenging without timely, efficient access. The data is often trapped inside monolithic, centralised data storage systems. This can result in incomplete, inaccurate, or duplicative information. Once hailed as the saviour of the data storage and management challenge, monolithic systems escalate these problems as the volume of data expands and the urgent need to make data-driven decisions rises.
The traditional approaches for addressing data challenges entail extracting the data from systems of record and moving it to different data platforms, such as operational data stores, data lakes, or data warehouses, before generating use case-specific views or data sets. Creating use case-specific data sets that are then consumed by use case-specific technologies only compounds the inefficiency of the process.
One inefficiency arises from the complexity of such a landscape, which involves the movement of data from many sources to various data platforms, the creation of use case-specific data sets, and the use of multiple technologies for consumption. Core data for each domain, such as customer, is duplicated and reworked or repackaged for almost every use case instead of producing a consistent representation of the data used across various use cases and consumption models – analytical, operational, and real-time.
There’s also a disconnect between data ownership and the subject matter experts who need it for decision-making. Data stewards and scientists understand how to access data, move it around, and create models. But they’re often unfamiliar with the specific use cases in the business. In other words, they’re experts in data modelling, not finance, human resources, sales, product management, or marketing. They’re not domain experts and may not understand the information needed for specific use cases, leading to frustration and data going unused. It’s estimated, for example, that fewer than 20% of data models created by data scientists are ever deployed.
Distributed Data Architecture – An Elegant Solution to a Messy Problem
The broken promises of monolithic, centralised data storage have led to the emergence of a new approach called “distributed” data architectures, such as data fabric and data mesh. A data mesh can create a pipeline of domain-specific data sets, including core data, and deliver it promptly from its source to consuming systems, subject matter experts, and end users.
These data architectures have arisen as a viable solution to the issues created by inaccessible data locked away in siloed systems or the rigid monolithic data architectures of the past. A data mesh decentralises the management and governance of data sets. It follows four core principles – domain ownership of data, treating data as a product and applying product principles to it, enabling a self-serve data infrastructure, and ensuring federated governance. These principles help data product owners create data products based on the needs of various data consumers, and help consumers learn what data products are available and how to access and use them. Data quality, observability, and self-service capabilities for discovering data and metadata are built into these data products.
The rise of data products is helpful for analytics and artificial intelligence as well as general business uses. The concept is the same in either case – the dataset can be reused without a major investment in time or resources. It can dramatically reduce the time spent finding and fixing data. Data products can also be updated regularly, keeping them fresh and relevant. Some legacy companies have reported increased revenues or cost savings of over $100 million.
Data product owners have to create data products for core data to enable its activation for key initiatives and support various consumption models in a self-serve manner. The typical pattern that all these data pipelines enable can be summarised into the following three stages – collect, unify, and activate.
The process starts with identifying the core data sets – data domains like customer or product – and defining a unified data model for these. Then, data product owners need to identify the first-party data sources and the critical third-party data sets used to enrich the data. This data is assembled, unified, enriched, and provided to various consumers via APIs so that the data can be activated for various initiatives. Product principles such as the ability to consume these data products in a self-service manner, customise the base product for various usage scenarios, and deliver regular enhancements to the data are built into such data products.
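The collect, unify, activate pattern above can be sketched in a few lines. This is an illustrative sketch only: the field names, the exact-email match rule, and the "first non-empty value wins" survivorship policy are assumptions for the example, not any particular MDM product's behaviour.

```python
def collect(*sources):
    """Collect: pull raw records from first- and third-party sources."""
    for source in sources:
        yield from source

def unify(records, key="email"):
    """Unify: merge records that share a match key (here, a normalised
    email), with later sources enriching the golden record rather than
    overwriting fields that already have a value."""
    golden = {}
    for rec in records:
        k = rec.get(key, "").strip().lower()
        if not k:
            continue  # no match key -> cannot be unified in this sketch
        merged = golden.setdefault(k, {})
        for field, value in rec.items():
            if value and field not in merged:
                merged[field] = value  # first non-empty value wins
    return golden

def activate(golden, fields):
    """Activate: expose a use case-specific view of the golden records,
    e.g. for a marketing campaign or an analytics job."""
    return [{f: rec.get(f) for f in fields} for rec in golden.values()]

# Toy first-party (CRM) and third-party (enrichment) sources.
crm = [{"email": "Ana@example.com", "name": "Ana", "segment": "SMB"}]
enrichment = [{"email": "ana@example.com", "industry": "Retail"}]

golden = unify(collect(crm, enrichment))
view = activate(golden, ["email", "name", "industry"])
```

In a real pipeline the unify step would use probabilistic matching and richer survivorship rules, and activation would happen over APIs rather than in-process lists, but the three-stage shape is the same.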
Data product owners can use this framework to map out key company initiatives, identify the most critical data domains, and pin down the features (data attributes, relationships, etc.) and the first- and third-party data sources that need to be assembled. From there, they can create a roadmap of data products aligned to business impact and the value delivered.
With data coming from potentially hundreds of applications and the constantly evolving requirements of data consumers, poor-quality data and slow, rigid architecture can cost companies in many ways, from lost business opportunities to regulatory fines to the reputational risk of a poor customer experience. That’s why organisations of all sizes and types need a modern, cloud-based master data management (MDM) approach that enables the creation of core data as products. A cloud-based MDM can reconcile data from hundreds of first- and third-party sources and create a single trusted source of truth for an entire organisation. Treating core data as a product elevates it to a strategic asset and unlocks its immense potential to drive business impact.
‘Tis the Season to be Wary: How to Protect Your Business from Holiday Season Hacking
The holiday season will soon be in full swing, but cybercriminals aren’t known for their holiday spirit. While consumers have traditionally been the prime targets for cybercriminals during the holiday season – lost in a frenzy of last-minute online shopping and unrelenting ads – companies are increasingly falling victim to calculated cyber attacks.
Against this backdrop of relaxed vigilance and festive distractions, cybercriminals are set to deploy everything from ransomware to phishing scams, all designed to capitalise on the holiday haze. Businesses that fail to prioritise their cybersecurity could end up embracing not so much “tidings of comfort and joy” as unwanted data breaches and service outages well into 2024.
With the usual winter disruptions about to kick into overdrive, opportunistic hackers are aiming to exploit organisational turmoil this holiday season. Industry research consistently indicates a substantial spike in cyber attacks targeting businesses during holidays, particularly when coupled with the following factors:
- Employee Burnout: Employee burnout is rife around the holidays. Trying to complete major projects or hit targets before the end of the year can require long hours and intense workweeks. Overwrought schedules combined with the seasonal stressors of Christmas shopping, family politics, travel expenses, hosting duties etc., can lead to a less effective and exhausted workforce.
- Vacation Days: The holiday season is a popular time for employees to use up their vacation days and paid time off. This means offices are often emptier than usual during late December and early January. With fewer people working on-site, critical security tasks can be neglected and gaps in defences widen.
- Network Strain: The holidays also mark a period of network strain due to increased traffic and network requests. Staff shortages also reduce organisational response capacity if systems are compromised. The result is company networks that are understaffed and overwhelmed.
Seasonal Cyber Attacks
There are many ways bad actors look to exploit system vulnerabilities and human errors to breach defences this time of year. But rather than relying solely on sophisticated hacking techniques, most holiday-fueled cyber attacks succeed through tried and true threat vectors:
- Holiday-Themed Phishing and Smishing Campaigns: Emails and texts impersonating parcel carriers with tracking notifications contain fraudulent links, deploying malware or capturing account credentials once clicked by unwitting recipients trying to track deliveries. A momentary slip-up is all it takes to unleash malware payloads granting complete network access.
- Fake Charity Schemes: Malicious links masquerading as holiday philanthropy efforts compromise business accounts when employees donate through them.
- Remote Access Exploits: External connectivity to internal networks comes with the territory of the season. However, poorly configured cloud apps and public Wi-Fi access points create openings for criminals to intercept company data from inadequately protected employee devices off-site.
- Ransomware Presents: Empty offices combined with delayed threat detection give innovative extortion malware time to wrap itself around entire company systems and customer data before unveiling a not-so-jolly ransom note on Christmas morning.
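As a concrete illustration of the parcel-tracking lure, a naive link check compares a URL's host against an allow-list of genuine carrier domains. The domain list and function below are purely illustrative assumptions; real protection belongs in a mail security gateway and user training, not an ad-hoc script.

```python
from urllib.parse import urlparse

# Illustrative allow-list only; a real deployment would use a
# maintained feed of trusted sender domains.
TRUSTED_CARRIERS = {"dhl.com", "ups.com", "fedex.com", "royalmail.com"}

def looks_suspicious(url: str) -> bool:
    """Flag any tracking link whose host is not a trusted carrier
    domain or a subdomain of one. Matching on the domain suffix means
    look-alikes such as dhl.com.evil.net are still flagged."""
    host = (urlparse(url).hostname or "").lower()
    return not any(host == d or host.endswith("." + d)
                   for d in TRUSTED_CARRIERS)
```

For example, `https://www.dhl.com/track` passes, while `http://dhl.com.evil.net/track` and look-alike hosts like `dhl-tracking.example.com` are flagged, which is exactly the class of link holiday phishing campaigns rely on.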
Without proper precautions, the impact from misdirected clicks or downloads can quickly spiral across business servers over the holidays, leading to widespread data breaches and stolen customer credentials.
Essential Steps to Safeguard Systems
While eliminating all risks remains unlikely and tight budgets preclude launching entirely new security initiatives this holiday season, businesses can deter threats and address seasonal shortcomings through several key actions:
Prioritise Core Software Updates
Hardening network infrastructure is the first line of defence this holiday season. With many software products reaching end-of-life in December, it is critical to upgrade network architectures and prioritise core software updates to eliminate known vulnerabilities. Segmenting internal networks and proactively patching software can cut off preferred access routes for bad actors, confining potential breaches when hacking attacks surge.
Cultivate a Culture of Cybersecurity Awareness
Cybersecurity awareness training makes employees more resilient to the social engineering campaigns and phishing links that increase during the holidays. Refreshing employees on spotting suspicious emails can thwart emerging hacking techniques. With more distractions and time out of the office this season, vigilance is more important than ever. Train your staff never to click a link directly from an email or text; even if they are expecting a delivery, they should go straight to the known, trusted source.
Manage Remote Access Proactively
Criminals aggressively pursue any vulnerabilities exposed during the holiday period to intercept financial and customer data while defences lie dormant. Therefore, businesses should properly configure cloud apps and remote networks before the holiday season hits. This will minimise pathways for data compromise when employees eventually disconnect devices from company systems over the holidays.
Mandate Multifactor Authentication (MFA)
Most successful attacks stem from compromised user credentials. By universally mandating MFA across all access points this season, businesses add critical layers of identity verification to secure systems. With MFA fatigue setting in over the holidays, have backup verification methods ready to deter credential stuffing.
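For illustration, the familiar six-digit MFA codes follow the open HOTP/TOTP standards (RFC 4226 and RFC 6238), which can be sketched with nothing but the standard library. The drift `window` parameter below is one illustrative way to build in verification slack for out-of-sync clocks; it is a sketch of the standard, not a production authenticator.

```python
import hashlib
import hmac
import struct
import time

def hotp(key: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HMAC-based one-time password."""
    msg = struct.pack(">Q", counter)                      # 8-byte counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                            # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(key: bytes, period: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based OTP: HOTP over the current 30-second window."""
    return hotp(key, int(time.time() // period), digits)

def verify(key: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from the current time step plus/minus `window`
    steps, tolerating small clock drift between device and server."""
    step = int(time.time() // 30)
    return any(hmac.compare_digest(hotp(key, step + w), submitted)
               for w in range(-window, window + 1))
```

With the RFC 4226 test secret `12345678901234567890`, counter 0 yields `755224`, matching the published test vectors, and `verify(key, totp(key))` round-trips for any shared secret.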
Prepare to Respond, Not Just Prevent
Despite precautions, holiday disasters can and do occur. Businesses need response plans for periods of disruption and reduced capacity. Have emergency communications prepared for customers and partners in case an attack disrupts operations. The time to prepare is before vacation schedules complicate incident response. It’s important to know how and when to bring in the right expertise if a crisis emerges.
By following best practices to prevent cybersecurity standards slipping before peak winter months, companies can enjoy the holidays without becoming victims of calculated cyber attacks. With swift and decisive action there is still time for businesses to prepare defences against holiday season hacks.
Transforming unified comms to future-proof your business
By Jonathan Wright, Director of Products and Operations at GCX
Telephony is not usually the first thing SMBs think about when it comes to their digital transformation. However, push and pull factors are bringing it up the priority list and leading them to rethink their approach.
Indeed, it is just one year until the PSTN (the copper-based telephone network) is switched off by BT Openreach. With a recent survey showing that as many as 88% of UK businesses rely on the PSTN, many organisations are being forced to review their communications ahead of the deadline.
But even if this change is being forced upon some, the benefits of building a more future-proofed unified communications strategy far outweigh the associated challenges. Nearly three-quarters of employees in UK SMEs now work partly or fully remotely, the highest percentage of any G7 country. Voice over Internet Protocol (VoIP) telephone systems are much better suited to distributed workforces, as the phone line is assigned on a per-user basis rather than to a fixed location.
And with more companies now integrating AI capabilities to augment their products and services – like Microsoft Teams Pro which leverages OpenAI for improved transcription, automated notes generation and recommended actions – the productivity-boosting benefits for users are only improving.
Making the right choice
For those companies that are seizing the opportunity to change their unified comms in 2024, what should they consider when making their decision?
- Choose platforms that will boost user adoption – User adoption will make or break the rollout of a new IT project. So due consideration should be given to what products or services will have the path of least resistance with employees. Choosing a service or graphical user interface (GUI) users are already used to, like Zoom or MS Teams, is likely to result in a higher adoption rate than a net new service.
- Embrace innovation with AI capabilities – While some of the services leveraging AI and Large Language Models (LLMs) to enhance their capabilities are more expensive than traditional VoIP, the productivity gains could offer an attractive return on investment for many small businesses. Claiming back the time spent typing up meeting notes, or improving the response time to customer calls with automatically generated actions, will have tangible benefits for the business. That said, companies should consider what level of service makes sense for their business; they may not need the version with all the bells and whistles to make significant efficiency gains.
- Bring multiple services under a single platform – The proliferation of IT tools is becoming an increasing challenge in many businesses; it creates silos that hamper collaboration, leaves employees feeling overwhelmed by the sheer number of communications channels to manage, and leads to mounting costs on the business. Expanding the use of existing platforms, or retiring multiple solutions by bringing their features together in one new platform, benefits the business and user experience alike.
- Automate onboarding to reduce the burden on IT – Any changes to unified comms should aim to benefit all of the different stakeholders – and that includes the IT team tasked with implementing and managing it. Choosing platforms which support automated onboarding and activation, for example, will reduce the burden on IT when provisioning new tenants, as well as with ongoing policy management. What’s more, it reduces the risk of human error when configuring the setup, improving overall security. In the case of Microsoft Teams, it can even negate the need for Microsoft PowerShell.
- Consider where you work – Employees are not only working between home and the office more. Since the pandemic, more people are embracing the digital nomad lifestyle, while others are embracing the opportunity to work more closely with clients on-site or at their offices. This should be considered in unified comms planning as those companies with employees working outside the UK will need to choose a geo-agnostic service.
- Stay secure – Don’t let security and data protection be an afterthought. Opt for platforms leveraging authentication protocols, strong encryption, and security measures to safeguard sensitive information and support compliance.
Making the right switch
As many small businesses start planning changes to their telephony in 2024 as the PSTN switch-off approaches, it is important that they take the time to explore the particular requirements of their organisations and how changes to their communications could better support new working practices and boost productivity.
Will your network let down your AI strategy?
Rob Quickenden, CTO at Cisilion
As companies start to evaluate how they can use AI effectively, there is a clear need to ensure the network is up to the challenges of AI first. AI applications will require your data to be easily accessible, and your network will need to handle the huge compute demands of these new applications. It will also need to be secure at every point of access, from the different applications down to end users’ devices. If your network isn’t reliable, readily available, and secure, your AI strategy is likely to fail.
In Cisco’s 2023 Networking Report, 41% of networking professionals across 2,500 global companies said that providing secure access to applications distributed across multiple cloud platforms is their key challenge, followed by gaining end-to-end visibility into network performance and security (37%).
So, what can you do to make your network AI ready?
First, you need to see AI as part of your digital transformation, then you need to look at where you need it and where you don’t. Jumping on the bandwagon and implementing AI for the sake of it isn’t the way forward. You need to have a clear strategy in place about where and how you are going to use AI. Setting up an AI taskforce to look at all aspects of your AI strategy is a good first step. They need to be able to identify how AI can help transform your business processes and free up time to focus on your core business. At the same time, they need to make sure your infrastructure can handle your AI needs.
Enterprise networks and IT landscapes are growing more intricate every day. The demand for seamless connectivity has skyrocketed as businesses expand their digital footprint and hybrid working continues. The rise of cloud services, the Internet of Things (IoT), and data-intensive applications have placed immense pressure on traditional network infrastructures and AI will only increase this burden. AI requires much higher levels of compute power too. The challenge lies in ensuring consistent performance, security, and reliability across a dispersed network environment.
Use hybrid and multi-cloud to de-silo operations
According to Gartner’s predictions, 51% of IT spending will shift to the cloud by 2025, underscoring the importance of having a robust and adaptable network infrastructure that can seamlessly integrate with cloud services. This is even more important with AI, which needs to access data from different locations and sources across your business to be successful. AI often requires data from different sources to train models and make predictions: a company that wants to develop an AI system to predict customer churn, for example, may need to access data from multiple sources such as customer demographics, purchase history, and social media activity.
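The churn example boils down to joining records from several systems on a shared customer key before any model ever sees them. A minimal sketch follows, with entirely illustrative field names and toy data; in practice each source would sit behind its own cloud platform or on-premise store, which is exactly why de-siloed access matters.

```python
from collections import defaultdict

# Toy stand-ins for three siloed sources, keyed on a shared customer_id.
demographics = {"c1": {"age": 41, "region": "UK"}}
purchases = [("c1", 120.0), ("c1", 80.0), ("c2", 15.0)]   # (customer_id, amount)
social = {"c1": {"mentions_30d": 3}}

def build_features(demographics, purchases, social):
    """Join the three sources into one feature row per known customer,
    ready to feed a churn model."""
    spend = defaultdict(float)
    orders = defaultdict(int)
    for cid, amount in purchases:        # aggregate transactional data
        spend[cid] += amount
        orders[cid] += 1
    rows = {}
    for cid, demo in demographics.items():   # demographics defines the population
        rows[cid] = {
            **demo,
            "total_spend": spend.get(cid, 0.0),
            "order_count": orders.get(cid, 0),
            "mentions_30d": social.get(cid, {}).get("mentions_30d", 0),
        }
    return rows

features = build_features(demographics, purchases, social)
```

Note the silo problem in miniature: customer `c2` has purchases but no demographic record, so it drops out of the training population entirely, which is the kind of gap de-siloing is meant to close.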
IT teams need to make sure they are using hybrid cloud and multi-cloud to de-silo operations, bringing together network and security controls and visibility and allowing easy access to data. Where businesses use multiple cloud providers or keep some data on-premises, they need to review how that data will be used and how it can be accessed across departments.
Install the best security and network monitoring
It’s clear that as we develop AI for good, there is also a darker side: bad actors weaponising AI to create more sophisticated cyber attacks. Businesses need end-to-end visibility into their network performance and security, and the ability to provide secure access to applications distributed across multiple cloud platforms. This means having effective monitoring tools in place and the right layers of security – not only at the end-user level but across the network at all access points.
Being able to review and test the performance of your SaaS-based applications will also be key to the success of your AI solutions. AI requires apps to work harder and faster, so testing their speed, scalability, and stability, and ensuring they can perform well under varying workloads, is important.
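One simple way to test behaviour under varying workloads is to fire requests at increasing concurrency and compare latency percentiles. In this hedged sketch, `call_app` is a local stub standing in for a real SaaS request (for example, an HTTP GET against the application's health endpoint), so the numbers are illustrative only.

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

def call_app():
    """Stub for a real SaaS request; sleeps to simulate network +
    server time and returns an HTTP-style status code."""
    time.sleep(0.01)
    return 200

def run_load(concurrency: int, requests_total: int):
    """Issue `requests_total` calls through a pool of `concurrency`
    workers, recording per-request latency."""
    latencies = []
    def timed(_):
        start = time.perf_counter()
        status = call_app()
        latencies.append(time.perf_counter() - start)
        return status
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        statuses = list(pool.map(timed, range(requests_total)))
    return {
        "ok": all(s == 200 for s in statuses),
        "p50_ms": statistics.median(latencies) * 1000,
        "max_ms": max(latencies) * 1000,
    }

# Compare a quiet baseline against a busier run to spot degradation.
baseline = run_load(concurrency=1, requests_total=10)
stressed = run_load(concurrency=10, requests_total=50)
```

Watching how `p50_ms` and `max_ms` move between the baseline and stressed runs gives a first indication of whether an application will hold up when AI-driven traffic pushes it harder.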
Secure Access Service Edge
The best way to ensure your network security is as good as it can be is to simplify your tools and create consistency by using Secure Access Service Edge (SASE). This is an architecture that delivers converged network and security-as-a-service capabilities, including SD-WAN and cloud-native security functions such as secure web gateways, cloud access security brokers, firewall-as-a-service, and zero-trust network access. SASE delivers wide area network and security controls as a cloud computing service directly at the source of connection rather than at the data centre, protecting your network and users more effectively.
If you haven’t already, extending your SD-WAN connectivity consistently across multiple clouds to automate cloud-agnostic connectivity and optimise the application experience is a must. It will enable your organisation to securely connect users, applications and data across multiple locations while providing improved performance, reliability and scalability. SD-WAN also simplifies the management of WANs by providing centralised control and visibility over the entire network.
As we head towards the new era of AI, cloud is the new data centre, the Internet is the new network, and cloud offerings will dominate applications. By making sure the network is AI-ready, adopting a cloud-centric operating model, and maintaining a view of global Internet health and the performance of top SaaS applications, IT teams will be able to implement their company’s AI strategy successfully.