Why success brings its own challenges in IT: Fixing the real-time problems of real-time event streaming with an event portal

Spokesperson: Tom Fairbairn, Distinguished Engineer, Solace
Amidst the recent hype around generative AI, there is a perhaps quieter technology revolution going on: the increasing adoption of event-driven architecture (EDA), of which enterprise-wide “event streaming” is one example. Regardless of industry (manufacturing, retail or even HR, amongst others), EDA use cases are multiplying across the board as organisations realise its value. From decoupling applications and responding faster to internal and external user requests, to reducing operational costs, EDA ultimately gives enterprises the freedom to become more agile and real-time in their day-to-day activities.
A recent IDC Infobrief found that of those who have deployed EDA across multiple use cases, 93% say it either met or exceeded their expectations. Going even further, 82% of those surveyed plan to apply EDA to a further two or three use cases across their company within the next 24 months.
Increasing adoption and the scramble to shorten time to market, along with bigger data flows and more complex deployments, make it difficult to gain real visibility across a complex and often rapidly scaling ecosystem of events.
As a result, some are struggling to realise the best return on their EDA investment. The issue can compound: an organisation may have reached a stage of maturity across multiple streaming use cases, but be left with siloed brokers, clusters, topics and schemas woven on top of each other. A layer-cake effect builds up in the architecture, making it difficult to see where the flow of information and insight can be optimised.
Growing pains are a natural part of scalability, and EDA is no exception.
Three pain points typically arise:
1-Poor visibility requires a clear and clean portal
One of the unsung heroes of EDA is application decoupling, which aids flexibility and agility. However, it can also be detrimental to visibility, making changes to existing applications difficult and risky. This lack of visibility leads to confused ownership: event producers don’t know who the consumers are, and consumers don’t know who the publishers are. Some event streaming systems, such as Kafka, rely on static or near-static topics, which can be abandoned or duplicated.
This directly affects the end-user experience. Say a person is using the application every day and wants to add an extra attribute to the data – they have no one to contact directly because they can’t see who produced the data.
An event portal provides a single window into the structure of an EDA, acting as a native discovery agent that quickly produces visual aids to help locate consumers and producers. Architects and administrators on the business side can view the relationships between producers and consumers, as well as their event interfaces and KPIs, including their most- and least-used topics.
Another issue that arises when scaling infrastructure and information flows is the inability to fully grasp the downstream impact of changes made to the architecture. Evolving an architecture requires visibility into which microservices are affected by a given event before a new feature or function is even deployed, to ensure it doesn’t bring the system down – and sometimes this must be established in seconds. An event portal can automate the scanning of an internal system, helping visualise a complete view of endpoints and event streams rather than forcing the business to map them between microservices by hand. For those looking to better understand their infrastructure, particularly as they scale up, an event portal helps deploy new services faster.
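To make the idea of downstream impact analysis concrete, here is a minimal sketch (illustrative only, with hypothetical topic and application names, not any vendor's product) of how producer/consumer relationships could be modelled as a graph and walked to find every application affected by a change to one topic:

```python
# Minimal sketch: model producer/consumer relationships as a graph and
# compute the downstream impact of changing one topic.
# All topic and application names are hypothetical.
from collections import deque

# topic -> applications that consume it
consumers_by_topic = {
    "orders.created": ["billing-service", "inventory-service"],
    "invoice.issued": ["notification-service"],
}

# application -> topics it publishes
topics_by_producer = {
    "billing-service": ["invoice.issued"],
    "inventory-service": [],
    "notification-service": [],
}

def downstream_impact(topic: str) -> set[str]:
    """Return every application that directly or indirectly depends on a topic."""
    impacted, queue = set(), deque([topic])
    while queue:
        current = queue.popleft()
        for app in consumers_by_topic.get(current, []):
            if app not in impacted:
                impacted.add(app)
                # follow what this consumer itself publishes
                queue.extend(topics_by_producer.get(app, []))
    return impacted

print(downstream_impact("orders.created"))
# -> billing-service, inventory-service, notification-service
```

In practice an event portal builds and maintains this map automatically by scanning brokers, but the traversal above captures the question architects need answered before deploying a change.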
2-Event Streaming is not a one-and-done – the issue with limited sharing and reuse
Data has been the new oil for quite some time, and much like oil, when you need it, you often need it right there and then. Real-time data is the most valuable data in your organisation, but if it is siloed in one department and business decision-makers don’t know about it, it is not delivering its full potential.
Developers must understand that the value of their hard work won’t be realised until all of that data, and the benefits that come with it, are catalogued. Otherwise, it quickly goes stale and out of date.
An event portal can create a real-time catalogue of event data, listing all topics, event streams and schemas for each application, along with the owner and best point of contact. Not only does this speed up internal development by letting developers share, discover and re-use existing event-streaming assets, it also lets them do so outside the organisation so that customers and partners can benefit too – an often neglected consideration when technologies such as Kafka focus on operational management.
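As a rough illustration of the kind of metadata such a catalogue holds, the sketch below uses hypothetical field names and values (not any particular portal's schema) to show how a developer could look up an existing stream and its owner instead of creating a duplicate:

```python
# Illustrative sketch of an event catalogue entry; names and values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class EventCatalogueEntry:
    topic: str                      # topic or event stream name
    schema_ref: str                 # reference to the payload schema
    owning_team: str                # who owns the stream
    contact: str                    # best point of contact
    consumers: list[str] = field(default_factory=list)

catalogue = [
    EventCatalogueEntry(
        topic="orders.created",
        schema_ref="schemas/orders-created-v2.json",
        owning_team="Order Management",
        contact="orders-team@example.com",
        consumers=["billing-service", "inventory-service"],
    ),
]

# A developer looking to reuse an existing stream searches the catalogue
# rather than standing up a new, duplicate topic.
for entry in (e for e in catalogue if "orders" in e.topic):
    print(entry.topic, "owned by", entry.owning_team, "-", entry.contact)
```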
3-Security and governance in event streams
The fact that event-driven systems are dynamic and decentralised is a major selling point. However, those same qualities bring unique security challenges.
A common trade-off when implementing event streaming is that, when defining access control rules, developers tend to be too permissive in order to preserve agility, which leaves the data poorly catalogued and hard to see. Over time, that risks undermining business data security, governance and compliance, and the issue only worsens as the use of data evolves across applications with each new person added.
With an expanding attack surface and an increase in never-before-seen malware variants, business security must pivot from reactive to proactive.
An event portal helps users organise systems into application domains, create and import payload schema definitions in a variety of formats, and better define event interactions. This improves IT governance by making it easier to control who can access which resources, while giving teams the ability to create and track every version of their EDA objects as they evolve.
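To show what a payload schema definition looks like in practice, here is a minimal sketch of defining a versioned event schema and validating incoming events against it. It assumes the third-party jsonschema package is installed, and the schema and event are hypothetical examples rather than anything from the article:

```python
# Minimal sketch: a versioned payload schema plus validation of an event against it.
# Assumes the third-party 'jsonschema' package is installed; names are hypothetical.
from jsonschema import validate, ValidationError

order_created_v2 = {
    "$id": "orders.created/v2",
    "type": "object",
    "required": ["orderId", "amount", "currency"],
    "properties": {
        "orderId": {"type": "string"},
        "amount": {"type": "number", "minimum": 0},
        "currency": {"type": "string"},
    },
    "additionalProperties": False,  # reject undocumented fields to keep governance tight
}

event = {"orderId": "A-1001", "amount": 42.5, "currency": "GBP"}

try:
    validate(instance=event, schema=order_created_v2)
    print("event conforms to orders.created v2")
except ValidationError as err:
    print("rejected:", err.message)
```

Versioning the schema identifier (v2 here) is what lets a portal track every change to an EDA object as it evolves.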
By putting visibility at the forefront, administrators can spend less time worrying about whether their security is in line with internal and regulatory policies. Beyond real-time visibility, the true value of event-driven architecture lies in the ability to easily spot and re-use all of these data assets – but that can only be done if they are documented, managed and governed properly. An event portal solves this, acting as a discovery tool, a de facto guide, and a means of determining the most effective way to manage event streams.
Real-time event streams and the success of powerful architectures have brought with them the need to address the rapidly growing complexity of event streaming estates. A single event portal spanning multiple brokers is the solution, helping organisations discover, document, govern and manage the lifecycle of their real-time event streams across the enterprise – and it will become increasingly important as more organisations embrace EDA as a foundational platform.
‘Tis the Season to be Wary: How to Protect Your Business from Holiday Season Hacking

The holiday season will soon be in full swing, but cybercriminals aren’t known for their holiday spirit. While consumers have traditionally been the prime targets for cybercriminals during the holiday season – lost in a frenzy of last-minute online shopping and unrelenting ads – companies are increasingly falling victim to calculated cyber attacks.
Against this backdrop of relaxed vigilance and festive distractions, cybercriminals are set to deploy everything from ransomware to phishing scams, all designed to capitalise on the holiday haze. Businesses that fail to prioritise their cybersecurity could end up embracing not so much “tidings of comfort and joy” as unwanted data breaches and service outages well into 2024.
Threat Landscape
With the usual winter disruptions about to kick into overdrive, opportunistic hackers are aiming to exploit organisational turmoil this holiday season. Industry research consistently indicates a substantial spike in cyber attacks targeting businesses during holidays, particularly when coupled with the following factors:
- Employee Burnout: Employee burnout is rife around the holidays. Trying to complete major projects or hit targets before the end of the year can require long hours and intense workweeks. Overloaded schedules, combined with the seasonal stressors of Christmas shopping, family politics, travel expenses, hosting duties and so on, can lead to an exhausted and less effective workforce.
- Vacation Days: The holiday season is a popular time for employees to use up their vacation days and paid time off. This means offices are often emptier than usual during late December and early January. With fewer people working on-site, critical security tasks are neglected and gaps in security widen.
- Network Strain: The holidays also mark a period of network strain due to increased traffic and network requests. Staff shortages also reduce organisational response capacity if systems are compromised. The result is company networks that are understaffed and overwhelmed.
Seasonal Cyber Attacks
There are many ways bad actors look to exploit system vulnerabilities and human errors to breach defences this time of year. But rather than relying solely on sophisticated hacking techniques, most holiday-fueled cyber attacks succeed through tried and true threat vectors:
- Holiday-Themed Phishing and Smishing Campaigns: Emails and texts impersonating parcel carriers with tracking notifications contain fraudulent links, deploying malware or capturing account credentials once clicked by unwitting recipients trying to track deliveries. A momentary slip-up is all it takes to unleash malware payloads granting complete network access.
- Fake Charity Schemes: Malicious links masquerading as holiday philanthropy efforts compromise business accounts when employees donate through them.
- Remote Access Exploits: External connectivity to internal networks comes with the territory of the season. However, poorly configured cloud apps and public Wi-Fi access points create openings for criminals to intercept company data from inadequately protected employee devices off-site.
- Ransomware Presents: Empty offices combined with delayed threat detection give innovative extortion malware time to wrap itself around entire company systems and customer data before unveiling a not-so-jolly ransom note on Christmas morning.
Without proper precautions, the impact from misdirected clicks or downloads can quickly spiral across business servers over the holidays, leading to widespread data breaches and stolen customer credentials.
Essential Steps to Safeguard Systems
While eliminating all risks remains unlikely and tight budgets preclude launching entirely new security initiatives this holiday season, businesses can deter threats and address seasonal shortcomings through several key actions:
Prioritise Core Software Updates
Hardening network infrastructure is the first line of defence this holiday season. With many software products reaching end-of-life in December, it is critical to upgrade network architectures and prioritise core software updates to eliminate known vulnerabilities. Segmenting internal networks and proactively patching software can cut off preferred access routes for bad actors, confining potential breaches when hacking attacks surge.
Cultivate a Culture of Cybersecurity Awareness
Cybersecurity awareness training makes employees more resilient to the social engineering campaigns and phishing links that increase during the holidays. Refreshing employees on how to spot suspicious emails can thwart emerging hacking techniques. With more distractions and time out of the office this season, vigilance is more important than ever. Train your staff never to click a link directly from an email or text: even if they are expecting a delivery, they should go directly to the known, trusted source.
Manage Remote Access Proactively
Criminals aggressively pursue any vulnerabilities exposed during the holiday period to intercept financial and customer data while defences lie dormant. Therefore, businesses should properly configure cloud apps and remote networks before the holiday season hits. This will minimise pathways for data compromise when employees eventually disconnect devices from company systems over the holidays.
Mandate Multifactor Authentication (MFA)
Most successful attacks stem from compromised user credentials. By universally mandating MFA across all access points this season, businesses add critical layers of identity verification to secure systems. With MFA fatigue setting in over the holidays, have backup verification methods ready to deter credential stuffing.
Prepare to Respond, Not Just Prevent
Despite precautions, holiday disasters can and do occur. Businesses need response plans for periods of disruption and reduced capacity. Have emergency communications prepared for customers and partners in case an attack disrupts operations. The time to prepare is before vacation schedules complicate incident response. It’s important to know how and when to bring in the right expertise if a crisis emerges.
By following best practices to prevent cybersecurity standards slipping before peak winter months, companies can enjoy the holidays without becoming victims of calculated cyber attacks. With swift and decisive action there is still time for businesses to prepare defences against holiday season hacks.
Transforming unified comms to future-proof your business

By Jonathan Wright, Director of Products and Operations at GCX
Telephony is not usually the first thing SMBs think about when it comes to their digital transformation. However, push and pull factors are bringing it up the priority list and leading them to rethink their approach.
Indeed, it is just one year until the PSTN (the copper-based telephone network) is switched off by BT Openreach. With a recent survey showing that as many as 88% of UK businesses rely on the PSTN, many organisations are being forced to review their communications ahead of the deadline.
But even if, for some, this change is being forced upon them, the benefits of building a more future-proofed unified communications strategy far outweigh the associated challenges. Nearly three-quarters of employees in UK SMEs now work partly or fully remotely – the highest percentage of any G7 country. Voice over Internet Protocol (VoIP) telephone systems are much better suited to distributed workforces because the phone line is assigned to a user rather than to a fixed location.
And with more companies now integrating AI capabilities to augment their products and services – like Microsoft Teams Pro which leverages OpenAI for improved transcription, automated notes generation and recommended actions – the productivity-boosting benefits for users are only improving.
Making the right choice
For those companies that are seizing the opportunity to change their unified comms in 2024, what should they consider when making their decision?
- Choose platforms that will boost user adoption – User adoption will make or break the rollout of a new IT project. So due consideration should be given to what products or services will have the path of least resistance with employees. Choosing a service or graphical user interface (GUI) users are already used to, like Zoom or MS Teams, is likely to result in a higher adoption rate than a net new service.
- Embrace innovation with AI capabilities – While some of the services leveraging AI and Large Language Models (LLMs) to enhance their capabilities are more expensive than traditional VoIP, the productivity gains could offer an attractive return on investment for many small businesses. Claiming back the time spent typing up meeting notes, or improving the response time to customer calls with automatically generated actions, will both bring tangible benefits to the business. That said, companies should consider what level of service makes sense for their business; they may not need the version with all the bells and whistles to make significant efficiency gains.
- Bring multiple services under a single platform – The proliferation of IT tools is becoming an increasing challenge in many businesses; it creates silos that hamper collaboration, leaves employees feeling overwhelmed by the sheer number of communications channels to manage, and leads to mounting costs on the business. Expanding the use of existing platforms, or retiring multiple solutions by bringing their features together in one new platform, benefits the business and user experience alike.
- Automate onboarding to reduce the burden on IT – Any changes to unified comms should aim to benefit all of the different stakeholders – and that includes the IT team tasked with implementing and managing it. Choosing platforms which support automated onboarding and activation, for example, will reduce the burden on IT when provisioning new tenants, as well as with the ongoing policy management. What’s more, it reduces the risk of human error when configuring the setup to improve the overall security. Or, in the case of Microsoft Teams, even negates the need for Microsoft PowerShell.
- Consider where you work – Employees are not only working between home and the office more. Since the pandemic, more people are embracing the digital nomad lifestyle, while others are embracing the opportunity to work more closely with clients on-site or at their offices. This should be considered in unified comms planning as those companies with employees working outside the UK will need to choose a geo-agnostic service.
- Stay secure – Don’t let security and data protection be an afterthought. Opt for platforms leveraging authentication protocols, strong encryption, and security measures to safeguard sensitive information and support compliance.
Making the right switch
As many small businesses start planning changes to their telephony in 2024 as the PSTN switch-off approaches, it is important that they take the time to explore the particular requirements of their organisations and how changes to their communications could better support new working practices and boost productivity.
Will your network let down your AI strategy?

Rob Quickenden, CTO at Cisilion
As companies start to evaluate how they can use AI effectively, there is a clear need to ensure the network is up to the challenge first. AI applications will require your data to be easily accessible, and your network will need to handle the huge compute demands of these new applications. It will also need to be secure at every point of access, from the applications themselves to end users’ devices. If your network isn’t reliable, readily available and secure, your AI strategy is likely to fail.
In Cisco’s 2023 Networking Report, 41% of networking professionals across 2,500 global companies said that providing secure access to applications distributed across multiple cloud platforms is their key challenge, followed by gaining end-to-end visibility into network performance and security (37%).
So, what can you do to make your network AI ready?
First, you need to see AI as part of your digital transformation, then you need to look at where you need it and where you don’t. Jumping on the bandwagon and implementing AI for the sake of it isn’t the way forward. You need to have a clear strategy in place about where and how you are going to use AI. Setting up an AI taskforce to look at all aspects of your AI strategy is a good first step. They need to be able to identify how AI can help transform your business processes and free up time to focus on your core business. At the same time, they need to make sure your infrastructure can handle your AI needs.
Enterprise networks and IT landscapes are growing more intricate every day. The demand for seamless connectivity has skyrocketed as businesses expand their digital footprint and hybrid working continues. The rise of cloud services, the Internet of Things (IoT), and data-intensive applications have placed immense pressure on traditional network infrastructures and AI will only increase this burden. AI requires much higher levels of compute power too. The challenge lies in ensuring consistent performance, security, and reliability across a dispersed network environment.
Use hybrid and multi-cloud to de-silo operations
According to Gartner’s predictions, 51% of IT spending will shift to the cloud by 2025, underscoring the importance of having a robust and adaptable network infrastructure that can seamlessly integrate with cloud services. This is even more important with AI, which needs to access data from different locations and sources across your business to be successful. For example, AI often requires data from different sources to train models and make predictions: a company that wants to develop an AI system to predict customer churn may need to access data from multiple sources, such as customer demographics, purchase history and social media activity.
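As a rough illustration of the churn example above, the sketch below joins three hypothetical, de-siloed data sources into a single feature table of the kind a model would train on. It assumes pandas is installed; the column names and values are invented for illustration:

```python
# Illustrative only: joining hypothetical data sources ahead of churn modelling.
# Assumes pandas is installed; all data is made up.
import pandas as pd

demographics = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "region": ["UK", "DE", "UK"],
})
purchases = pd.DataFrame({
    "customer_id": [1, 1, 2, 3],
    "amount": [20.0, 35.0, 15.0, 80.0],
})
social = pd.DataFrame({
    "customer_id": [1, 2, 3],
    "mentions_last_30d": [4, 0, 1],
})

# Aggregate purchase history per customer, then merge all three sources
spend = purchases.groupby("customer_id", as_index=False)["amount"].sum()
features = demographics.merge(spend, on="customer_id").merge(social, on="customer_id")
print(features)
```

The point is less the modelling itself than the prerequisite: none of this is possible if those three sources sit in silos the network cannot bridge.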
IT teams need to make sure they are using hybrid cloud and multi-cloud to de-silo operations, bringing network and security controls and visibility together and allowing easy access to data. Where businesses use multiple cloud providers or keep some data on-premises, they need to review how that data will be used and how it can be accessed across departments.
Install the best security and network monitoring
It’s clear that as we develop AI for good, there is also a darker side weaponising AI to create more sophisticated cyber attacks. Businesses need end-to-end visibility into their network performance and security, and the ability to provide secure access to applications distributed across multiple cloud platforms. This means having effective monitoring tools in place and the right layers of security – not only at the end-user level but across your network at all access points.
Being able to review and test the performance of your SaaS-based applications will also be key to the success of your AI solutions. AI requires apps to work harder and faster, so testing their speed, scalability and stability, and ensuring they are up to the job and can perform well under varying workloads, is important.
Secure Access Service Edge
The best way to ensure your network security is as good as it can be is to simplify your tools and create consistency by using Secure Access Service Edge (SASE). This is an architecture that delivers converged network and security-as-a-service capabilities, including SD-WAN and cloud-native security functions such as secure web gateways, cloud access security brokers, firewall-as-a-service and zero-trust network access. SASE delivers wide area network and security controls as a cloud computing service directly at the source of connection rather than at the data centre, which protects your network and users more effectively.
SD-WAN connectivity
If you haven’t already, extending your SD-WAN connectivity consistently across multiple clouds to automate cloud-agnostic connectivity and optimise the application experience is a must. It will enable your organisation to securely connect users, applications and data across multiple locations while providing improved performance, reliability and scalability. SD-WAN also simplifies the management of WANs by providing centralised control and visibility over the entire network.
As we head towards the new era of AI, cloud is the new data centre, the Internet is the new network, and cloud offerings will dominate applications. By making sure your network is AI-ready, adopting a cloud-centric operating model, and maintaining a view of global Internet health and the performance of top SaaS applications, IT teams will be able to implement their company’s AI strategy successfully.
