The future of software development and the unstoppable role of low-code

By Hans de Visser, Chief Product Officer at Mendix, a Siemens business

Gartner has predicted that by 2025, 70 per cent of all new enterprise software will be developed using low-code. The future of software development will therefore be shaped to a large extent by low-code, with a significant impact on how – and by whom – enterprise software is developed going forward.

The prevalent use of low-code is already evident today in large and medium-sized companies as well as start-ups. In times of a worsening shortage of skilled labour, corporate decision-makers should therefore create suitable structures now to safeguard their ability to innovate and remain competitive.

Organisations across industries are under enormous pressure. The reasons for this may not be new, but they are no less pressing: advancing digitalisation, rapid developments in artificial intelligence, the shortage of skilled workers, and climate-related and geopolitical events that have destabilised supply chains or called previous production locations into question. Each organisation is working to address its own sword of Damocles, forged from some combination of these issues. Corporate decision-makers are working not only to prevent the proverbial disasters that sword may represent, but also to drive the goals of their business forward. Software will be at the heart of the strategic management of these challenges and will play an even more decisive role in the future.

Application development as the linchpin of digital transformation

Many decision-makers now realise that application development flows through the veins of an entire organisation and that the solutions that are developed and deployed fuel everything from HR, finance, marketing, and sales to production, warehousing, supply chain and customer experience. A significant part of a company’s success therefore stands or falls with application development.

Take, for example, the aim of increasing efficiency and expanding innovative capacity: this requires applications for process optimisation and automation, which ultimately deliver a significant increase in operational efficiency. When basic, repetitive tasks are automated, the workforce can focus on higher-value tasks and projects. This leaves more time to create innovative products, solutions or services, yielding a clear competitive advantage.

Software also plays a decisive role in the design of customer experiences: by developing user-friendly applications that create added value, companies can increase customer satisfaction, deepen customer loyalty and build long-term customer relationships. Customers who already benefit from digital offerings from a wide range of service providers in their private lives expect B2B applications to offer comparably intuitive functionality and rich, modern interfaces.

To create a customer journey that is as seamless as possible and ensures an optimal customer experience, customer touchpoints and customer behaviour must be evaluated. Software is also used to collect, analyse and ultimately democratise the data and insights already available in the company in a central location. Making this data easy to access and evaluate for all stakeholders allows an organisation to derive the most value from its data.

The backbone of a thriving software development lifecycle is a thoughtful data strategy. Well managed data leads to insights about your customers and end users that can reshape your existing products and services as well as inform and define new ones. The growing use of AI in enterprise development also demands an organized data landscape.

Transformation here and now – digitalise faster with fusion teams and low-code

What do you do when new applications constantly need to be developed for areas such as customer experience and data management, but the IT department cannot be staffed to cope with its growing workload – a situation compounded by the prevailing shortage of skilled workers? The workload becomes unmanageable, the IT backlog grows and increasingly slows companies down in their digitalisation efforts. This is especially true in view of the ongoing shortage of skilled labour, which, according to Bitkom, is expected to leave up to 663,000 IT positions in Germany unfilled by 2040. Company decision-makers must therefore be prepared for the fact that there is no easing in sight on the labour market and that the “war for (IT) talent” will continue.

Companies that hope the solution lies in commercial off-the-shelf (COTS) solutions will quickly realise that this approach is rather counterproductive. Acquiring individual solutions creates further data silos that make scaling more difficult. COTS solutions also almost always require significantly more IT resources to implement than advertised. The allure of “out-of-the-box” COTS functionality is more pipe dream than promise.

The development of enterprise applications requires a holistic approach with goals that are tailored to the company and its specific requirements and needs.

Does this mean that corporate decision-makers are faced with an unsolvable problem? Not at all. Enterprise application development with low-code offers a way out of this dilemma, and for many organisations it is the decisive step in overcoming the personnel and time constraints of classic high-code software development.

Thanks to its visual character and drag-and-drop functionalities, low-code development also enables people without a development background to make a significant contribution to the software development process. Fusion or BizDevOps teams, which are made up of professional developers and technically minded people from other departments, not only allow significantly more software products to be launched, but also increase the quality of the software developed, partly because the future users are involved from the outset.

Fusion teams accelerate every phase of development by breaking down silos and facilitating the exchange of knowledge and feedback. The collaborative approach eliminates bottlenecks and streamlines the development process.

A global survey of 700 Mendix users from 2023 shows that the concept of fusion teams not only looks good on paper, but is a real trend: 47 per cent of developers stated that they already work in cross-departmental teams.

The Lünendonk 2023 “Cloud, Data & Software” study speaks even more clearly: 76% of the companies surveyed stated that they wanted to dissolve their previously separate organisational structures and transfer holistic responsibility to BizDevOps teams.

Democratisation of software development for greater corporate success

It is therefore time to restructure teams – or even entire departments – in order to accelerate the change in software development and ensure, through the use of fusion teams, that the digitalisation of companies keeps pace. Corporate decision-makers must internalise this urgency and quickly begin to identify talent within the company, support them with concrete upskilling offers and spark interest in digital skills, be it with low-code workshops or further training on the potential of AI tools.

Looking at the latest global surveys, the analysts at Gartner also recognise in the “CIO Agenda 2024” that shared responsibilities should be cultivated – not only in the form of fusion teams, but also at C-level. According to Gartner, when CxOs pool their resources, CIOs are twice as likely to achieve or exceed their goals compared to CIOs who leave IT delivery to their department alone.

Establishing fusion teams – and the initial positive experiences with the resulting better distribution and utilisation of available resources, as well as the higher quality of the software developed – demonstrates how worthwhile the concept is. Extending this collaboration not only across departments, but also across the different hierarchical levels of an organisation, is therefore very promising.

The collaborative nature of low-code can provide the impetus for further changes in the organisation. Decision-makers no longer have to be held back by staff shortages and backlogged IT projects; instead they can enable their employees to acquire new skills, think outside the box and work together on new things. Software, as a driver of corporate transformation, is therefore no longer developed by one for all, but rather by all for one – the entire organisation. This supports the change that many companies are still struggling with. It is therefore time to get started.

Why Resilience Is Replacing Prevention as the Defining Cybersecurity Strategy

by Manuel Sanchez, Information Security and Compliance Specialist, iManage

For decades, cybersecurity centered around prevention. Build the right walls around your perimeter, deploy the right tools, train your people not to click the wrong links, and you could keep the bad actors out.

Today, the question driving security strategy is no longer “how do we stop a breach?” but “how do we survive one?” It is a subtle but profound shift in philosophy, and it is reshaping everything from how IT and Security leaders structure their teams to how they select their vendors and deploy AI.

Rehearsing for the worst

The practical expression of this shift is visible in how security teams are being restructured. Organisations are establishing dedicated disaster recovery teams – not to prevent incidents, but to contain and recover from them when they occur. These teams maintain detailed, regularly updated playbooks covering everything from backup restoration to stakeholder communications, with roles pre-assigned and procedures rehearsed well in advance.

In many ways, this mirrors the logic behind disaster drills: fire alarms matter, but knowing the evacuation routes and the post-incident recovery plan determines how well an organisation survives. Critically, responsibility cannot rest with the CISO alone. Business continuity after a cyber incident is a whole-company challenge – which means every core part of the organisation must be involved in sustaining critical business operations.

Governance in the gray areas

Running alongside this shift is a governance crisis that is easy to underestimate until it becomes a serious risk. As organisations adopt more applications across more vendors and hosting services, the shared responsibility model that was supposed to keep cloud accountability clear has become increasingly difficult to enforce.

The sheer volume of cloud applications in use at any given enterprise is too vast for consistent governance under current approaches – and bad actors have become skilled at identifying exactly where vendor responsibility ends and customer accountability begins, then operating precisely in that “gray area”. Being aware of this risk and putting preventative measures in place is important, but recognising the role these cloud applications play – and the impact on key business operations if they were compromised – is critical.

Meanwhile, data volumes continue to grow exponentially, and unstructured data continues to accumulate in the background across many digital systems. Why does this matter? If you don’t know what data you have, where it is stored, who has access to it and, most importantly, how it is protected – whether onsite or in cloud backup – the recovery process becomes a lot harder.

AI agents on the rise – and with them new risks

Although the focus of this article is on resilience, prevention must still remain an essential part of your defences. On that front, the accelerating adoption of autonomous AI in cyber defence tasks is reshaping security operations as visibly as anything else happening in the field right now. The volume, speed, and sophistication of modern threats have simply outpaced what human analysts can manage in real time.

The shift is toward AI that doesn’t just flag anomalies for human review, but actively detects, analyses, and neutralises threats as they emerge, even using predictive models to anticipate attacks before they fully materialise. This frees human experts to focus on strategic decisions and complex defence work rather than spending their days firefighting.

Autonomous AI does, however, introduce risks of its own. When AI agents operate across systems – accessing sensitive repositories, triggering actions, sharing data – they expand the attack surface in ways that aren’t always immediately visible.

Managing the digital identities of AI agents, much like managing employee access credentials, is becoming a critical security discipline. Accordingly, comprehensive traceability frameworks that log every action an agent takes are no longer optional; they are the foundation of responsible AI deployment in any security context.
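
To make that concrete, here is a minimal sketch of what such a traceability layer could look like: an append-only log in which every action an agent takes is recorded against its digital identity, with a timestamp, the target system and enough context for a human reviewer. The AgentAuditLog class, field names and example values are illustrative assumptions, not a reference to any particular product.

```python
import json
import time
import uuid
from pathlib import Path

# Hypothetical, minimal audit trail for AI-agent actions: every call an agent
# makes against a system or repository is appended as one JSON line.
class AgentAuditLog:
    def __init__(self, log_path: str = "agent_audit.jsonl"):
        self.log_path = Path(log_path)

    def record(self, agent_id: str, action: str, target: str, detail: dict) -> str:
        entry = {
            "event_id": str(uuid.uuid4()),   # unique reference for later review
            "timestamp": time.time(),        # when the action happened
            "agent_id": agent_id,            # which agent identity acted
            "action": action,                # e.g. "read", "write", "send"
            "target": target,                # system or repository touched
            "detail": detail,                # free-form context for reviewers
        }
        with self.log_path.open("a", encoding="utf-8") as f:
            f.write(json.dumps(entry) + "\n")
        return entry["event_id"]

# Example: an agent reading from a document repository leaves a trace.
audit = AgentAuditLog()
audit.record(
    agent_id="triage-agent-01",
    action="read",
    target="contracts-repository",
    detail={"query": "expiring NDAs", "records_returned": 12},
)
```

Records like these are what make an agent's behaviour reviewable after the fact, and they can feed the same identity and access reviews applied to human credentials.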

The supply chain wake-up call

The case for moving from a “prevention” mindset to a “resilience” one is further bolstered by recent high-profile breaches via compromised managed service providers, which have forced a fundamental reset in how organisations evaluate their vendors.

The era of cost-first selection is over. Security credentials, demonstrated through continuous and verifiable evidence, are now non-negotiable for any provider hoping to retain enterprise clients – and what organisations are demanding goes well beyond point-in-time audits. They want real-time visibility into every third-party integration, every software update, and every vendor interaction – including the cloud services the vendors themselves use.

“Trust but verify” has become the operational standard, and providers who cannot demonstrate validated controls and live monitoring are finding themselves out of contention. It is a structural shift that will reshape the vendor landscape considerably — and it is already underway.

A new era demands a new approach

In the end, prevention still matters, but resilience – instilled via the key focus areas above – is what turns disruption into survivable events rather than existential crises. The organisations that are honest about the limits of prevention and embrace the shift towards resilience won’t just better withstand the next wave of attacks – they’ll be differentiating themselves from competitors still clinging to yesterday’s playbook.

Adapting compliance in a fragmented regulatory world

Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.

As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.

But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.

Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.

The new regulatory compliance reality

By October of this year, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply in over 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.

Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VoP), which adds a crucial layer of fraud protection for instant payments by checking that the payee name supplied by the payer matches the name held for the account before a transfer is executed. This directly aligns with regulators’ focus on instant payment security and compliance.
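
As a rough illustration of the kind of check VoP implies, the sketch below classifies the similarity between the payee name a payer enters and the name actually held for the account. The check_payee helper and its thresholds are illustrative assumptions; production VoP services follow the official scheme rulebook rather than a simple string-similarity heuristic.

```python
from difflib import SequenceMatcher

# Toy verification-of-payee check: classify how closely the entered payee
# name matches the name registered on the account before releasing payment.
def check_payee(entered_name: str, account_name: str) -> str:
    ratio = SequenceMatcher(
        None, entered_name.casefold().strip(), account_name.casefold().strip()
    ).ratio()
    if ratio >= 0.95:
        return "MATCH"
    if ratio >= 0.80:
        return "CLOSE_MATCH"  # warn the payer before the instant payment is sent
    return "NO_MATCH"

print(check_payee("Acme Trading Ltd", "ACME Trading Limited"))   # likely CLOSE_MATCH
print(check_payee("Acme Trading Ltd", "Smith Consulting GmbH"))  # NO_MATCH
```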

The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision-makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.

But why is it so difficult for teams to adapt?

The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.

At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage leaves compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.

In this changing environment, the question therefore becomes: how can institutions adapt?

Staying compliant in a shifting landscape

The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.

This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.

But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.

To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.

When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
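
A simplified sketch of that triage step, assuming each alert already carries a risk score from an upstream model: anything below a suppression threshold is auto-closed as low value, and the remainder is queued for investigators in descending order of risk. The Alert structure and the 0.2 threshold are illustrative assumptions, not a reference to any specific product.

```python
from dataclasses import dataclass

@dataclass
class Alert:
    alert_id: str
    description: str
    risk_score: float  # 0.0 (benign) .. 1.0 (highly suspicious), from an upstream model

# Suppress low-value alerts and rank the rest for human investigators.
def triage(alerts: list[Alert], suppress_below: float = 0.2):
    suppressed = [a for a in alerts if a.risk_score < suppress_below]
    queued = sorted(
        (a for a in alerts if a.risk_score >= suppress_below),
        key=lambda a: a.risk_score,
        reverse=True,
    )
    return queued, suppressed

alerts = [
    Alert("A-1", "Round-amount transfer to a new beneficiary", 0.83),
    Alert("A-2", "Login from usual device and location", 0.05),
    Alert("A-3", "Structuring pattern across three accounts", 0.67),
]
queue, closed = triage(alerts)
print([a.alert_id for a in queue])   # ['A-1', 'A-3'] – routed to investigators
print([a.alert_id for a in closed])  # ['A-2'] – suppressed as low value
```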

Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.

Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanction screening tools and watchlist screening, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.

Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.

This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.

The future of compliance

As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.

To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.

Why Shorter SSL/TLS Certificate Lifespans Are the Perfect Wake-Up Call for CIOs

By Tim Callan, Chief Compliance Officer at Sectigo and Vice-Chair of the CA/Browser Forum

Let’s be honest: AI has been the headline act this year. It’s the rockstar of boardroom conversations and LinkedIn thought leadership. But while AI commands the spotlight, quantum computing is quietly tuning its instruments backstage. And when it steps forward, it won’t be playing backup. For CIOs, the smart move isn’t just watching the main stage — it’s preparing proactively for the moment quantum takes center stage and rewrites the rules of data protection.


Quantum computing is no longer a distant science project. NIST has already published standards for quantum-resistant algorithms and set a clear deadline: RSA and ECC, the cryptographic algorithms that protect today’s data, must be deprecated by 2030. We’re no longer talking about “forecasts”; we are talking about actual directives from government organizations to implement change. And yet, many organizations are still treating this like a future problem. The reality is that threat actors aren’t waiting. They’re collecting encrypted data now, knowing they’ll be able to decrypt it later. If we wait until quantum machines are commercially viable, we’ll be too late. The time to prepare is before the clock runs out and, unfortunately, that clock is already ticking.

For CIOs, this is an infrastructure and risk management crisis in the making. If your organization’s cryptographic infrastructure isn’t agile enough to adapt, the integrity of your digital operations and the trust they rely on could very soon be compromised.

The Quantum Threat Is Already Here

Quantum computing’s potential to disrupt global systems and the data that runs through it is not hypothetical. Attackers are already engaging in “Harvest Now, Decrypt Later” (HNDL) strategies, intercepting encrypted data today with the intent to decrypt it once quantum capabilities mature.

Recent research found that an alarming 60% of organizations are very or extremely concerned about HNDL attacks, and 59% express similar concern about “Trust Now, Forge Later” threats, where adversaries steal digitally signed documents to forge them in the future.

Despite this awareness, only 14% of organizations have conducted a full assessment of systems vulnerable to quantum attacks. Nearly half (43%) of organizations are still in a “wait and see” mode. For CIOs, this gap highlights the need for leadership: it’s not enough to know the risks exist; you must identify which systems, applications, and data flows will still be sensitive in ten or twenty years and prioritize them for PQC migration.

Crypto Agility Is a Data Leadership Imperative

Crypto agility (the ability to rapidly identify, manage, and replace cryptographic assets) is now a core competency for IT leaders to ensure business continuity, compliance, and trust. The most immediate pressure point is SSL/TLS certificates. These certificates authenticate digital identities and secure communications across data pipelines, APIs, and partner integrations.

The CA/Browser Forum has mandated a phased reduction in certificate lifespans from 398 days today to just 47 days by 2029. The first milestone arrives in March 2026, when certificates must be renewed every six months, shrinking to near-monthly by 2029.
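
To see why lifespans this short make manual tracking untenable, the sketch below checks how many days remain on a host's TLS certificate – the kind of check that, at 47-day lifetimes, has to run continuously across the whole certificate inventory and feed an automated renewal pipeline rather than a spreadsheet. The host name is a placeholder.

```python
import socket
import ssl
import time

# Returns the number of days until a host's TLS certificate expires.
def days_until_expiry(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # ssl.cert_time_to_seconds parses the certificate's 'notAfter' timestamp
    expires_at = ssl.cert_time_to_seconds(cert["notAfter"])
    return (expires_at - time.time()) / 86400

# Placeholder host; in practice this would loop over every certificate in the estate.
print(f"Days remaining: {days_until_expiry('example.com'):.1f}")
```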

For CIOs, it’s not just an operational housekeeping issue. Every expired or mismanaged certificate is a potential data outage, which means application downtime, broken integrations, failed transactions and compliance violations. With less than 1 in 5 organizations prepared for monthly renewals, and only 5% currently fully automating their certificate management processes, most enterprises face serious continuity and trust risks.

The upside? Preparing for shortened certificate lifespans directly supports quantum readiness. Ninety percent of organizations recognize the overlap between certificate agility and post-quantum cryptography preparedness. By investing in automation now, CIOs can ensure uninterrupted operations today while laying a scalable foundation for future-proof cryptographic governance.

The Strategic Imperative of PQC Migration

Migrating to quantum-safe algorithms is not a plug-and-play upgrade. It’s a full-scale transformation. Ninety-eight percent of organizations expect challenges, with top barriers including system complexity, lack of expertise, and cross-team coordination. Legacy systems (many with hardcoded cryptographic functions) make this even harder.

That’s why establishing a Center of Cryptographic Excellence (CryptoCOE) is a critical first step. A CryptoCOE centralizes governance, aligns stakeholders, and drives execution. According to Gartner, by 2028 organizations with a CryptoCOE will save 50% of costs in their PQC transition compared to those without.

For CIOs, this is a natural extension of your role. Cryptography touches every layer of enterprise infrastructure. A CryptoCOE ensures that cryptographic decisions are made with full visibility into system dependencies, risk profiles and regulatory obligations.

By championing crypto agility as an infrastructure priority, CIOs can transform PQC migration from a technical project into a strategic initiative that protects the organization’s most critical assets.

The Road Ahead

The shift to 47-day certificates is a wake-up call. It marks the end of static cryptography and the beginning of a dynamic, agile era. Organizations that embrace this change will not only avoid outages and compliance failures, but will also be prepared for the quantum future.

Crypto agility is both a technical capability and a leadership mandate. For CIOs, the path forward to quantum-resistant infrastructure is clear: invest in automation, build cross-functional alignment, and treat cryptographic governance as a core pillar of enterprise resilience.
