A bank’s ESG record depends on how its technology is built

Source: Finance Derivative
By Tony Coleman, CTO, Temenos
ESG (environmental, social, and corporate governance) has become mission-critical for banks, from meeting regulatory obligations to aligning with customer values to win market share.
Many banks have turned to technology to manage their ESG position. But technology is not a panacea; deployed without care, it carries the risk that banks fall short of their ESG targets.
Technology that greens
Let’s look at the environmental pillar. Run on-premises or in a private datacentre, technology can be a significant source of carbon emissions. But deployed with the right infrastructure partners, it can enable banks to reduce their carbon footprint. Cloud is the best example of this. Banks that outsource their computing infrastructure to the public cloud hyperscalers can benefit from their economies of scale and energy-efficient build principles.
The geographical spread and scale of these datacentres allows for carbon-aware computing, which involves shifting compute to times and places where the carbon intensity of the grid is lower. One study of Microsoft’s cloud infrastructure concluded its datacentres emit 98% less carbon than traditional enterprise IT sites. These hyperscalers have a focused mindset and the deep pockets to match. The new Graviton3 processors that AWS is now installing in its public datacentres, which AWS claims use 60% less energy than the standard x86 models in wide circulation, are an example of the progress that only a hyperscaler can achieve.
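To make the idea concrete, here is a minimal sketch of carbon-aware scheduling, assuming a hypothetical hourly carbon-intensity forecast per region (real systems would pull this from a grid-data provider):

```python
from datetime import datetime, timedelta

# Hypothetical forecast of grid carbon intensity (gCO2e per kWh) by region.
# Real deployments would fetch this from a grid-data API, not hard-code it.
FORECAST = {
    "eu-north": [35, 30, 28, 45],     # hourly values starting now
    "eu-west":  [180, 160, 150, 170],
    "us-east":  [390, 410, 400, 380],
}

def greenest_slot(forecast: dict[str, list[int]]) -> tuple[str, int]:
    """Return the (region, hour-offset) pair with the lowest forecast intensity."""
    best = min(
        ((region, hour, intensity)
         for region, hours in forecast.items()
         for hour, intensity in enumerate(hours)),
        key=lambda t: t[2],
    )
    return best[0], best[1]

region, offset = greenest_slot(FORECAST)
start = datetime.utcnow() + timedelta(hours=offset)
print(f"Schedule deferrable batch work in {region} at {start:%H:00} UTC")
```

Only deferrable workloads (batch reporting, model training, reconciliation runs) are candidates for this kind of shifting; latency-sensitive services stay put.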
The green benefits ‘of the cloud’ are enhanced by software purposefully built to run ‘in the cloud’. Software vendors that are committed to decarbonising their solutions in the build phase pass those wins down the supply chain to banks. For example, the latest version of the Temenos Banking Cloud was built with a 12% improvement in carbon efficiency. How the software operates can have an even more profound benefit for banks. For example, banking software that runs ‘scale-to-zero’ protocols will automatically shut down or scale down availability according to demand for its service. This is one factor that has contributed to a 32% carbon efficiency improvement in the run time of the latest Temenos Banking Cloud release.
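A minimal sketch of the scale-to-zero idea follows; the per-replica capacity and ceiling are illustrative assumptions, not Temenos parameters:

```python
import math

def desired_replicas(requests_per_sec: float,
                     capacity_per_replica: float = 50.0,
                     max_replicas: int = 20) -> int:
    """Scale-to-zero policy: no traffic means no running instances;
    otherwise provision just enough replicas for current demand."""
    if requests_per_sec <= 0:
        return 0  # shut the service down entirely while idle
    return min(max_replicas, math.ceil(requests_per_sec / capacity_per_replica))

for load in (0, 12, 180, 2600):
    print(f"{load:>5} req/s -> {desired_replicas(load)} replicas")
```

The trade-off, discussed later in this piece, is that scaling to zero adds cold-start latency when demand returns.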
Collecting this evidence is not simply an internal tracking exercise. Regulations are reaching a point where publishing data against ESG targets will be legally mandated. In Europe, the ECB and the Bank of England have launched climate risk stress tests to assess how prepared banks are for the shocks of climate risk. Meanwhile, initiatives like the UN-convened Net-Zero Banking Alliance (representing over 40% of global banking assets), the Glasgow Financial Alliance for Net Zero and the Principles for Responsible Banking add to the clamour for banks to evidence their progress. Tracking ‘Scope 3 emissions’, which cover all indirect emissions from sources not owned or controlled by the bank, is the next phase. Recognising this, Temenos has developed a carbon emissions calculator, which gives our customers deeper insight into the carbon emissions data associated with their consumption of Temenos Banking Cloud services.
The same concept can be extended to a bank’s customers, with carbon calculators and automated offsetting schemes that help people build towards their personal environmental goals. Doing so brings a bank’s green credentials into the public sphere, turning environmental initiatives into commercial opportunity.
(Box-out)
Flowe, a cloud-enabled digital bank built on green principles, launched in June 2020. It is the first bank in Italy to be certified as a B-Corp and has been able to maintain its overall carbon footprint close to zero, saving 90.81-96.06% in MTCO2e emissions compared to the on-premises alternative. Within six months of launch, 600,000 mainly young Italians had become customers, at one point onboarding 19 new customers per second. This rapid launch and growth was only possible with the agility and scalability of the cloud.
Technology that reaches
Cloud also enables financial inclusion, a key tenet of ESG ambitions. Today, anyone with a mobile phone and internet connection can access banking services. With elastic scalability and software automation, banks have an almost limitless capacity to serve more customers. And they might not be where you think; 4.5% of US households (approximately 5.9 million) were “unbanked” in 2021. In the past, banks would have seen them as unprofitable targets. But as cloud and the associated automations cut go-to-market and operational costs, the commercial case for inclusion becomes stronger.
Embedded finance gives banks another avenue of reach. Via simple APIs, banks can provide their solutions to non-financial businesses. This ready-made audience might otherwise take years to reach through a bank’s own marketing and sales channels. The embedded finance market is set to be worth $183 billion globally in 2027. That can be read as a proxy for greater financial inclusion.
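As an illustration of how lightweight that integration can be, here is a sketch of a marketplace opening an account for a seller through a hypothetical bank API; the endpoint, field names and product code are invented for the example:

```python
import requests

# Hypothetical embedded-finance endpoint; not any specific bank's API.
BANK_API = "https://api.examplebank.com/v1/accounts"

def open_account_for_seller(seller_id: str, api_key: str) -> dict:
    """A non-financial platform opens a bank account for one of its sellers."""
    response = requests.post(
        BANK_API,
        json={"external_ref": seller_id, "product": "business-current-account"},
        headers={"Authorization": f"Bearer {api_key}"},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()  # e.g. account id and status for the platform to store
```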
AI offers another opportunity to improve financial inclusion. Armed with AI, banks can deliver highly personalised products and experiences for customers. People can be directed to the most appropriate investments, including funds that promote sustainability, and offered loans made with a better understanding of the applicant’s ability to repay. Zest AI (previously ZestFinance), a leading provider of AI-powered credit underwriting, claims that banks using its software see a 20-30% increase in credit approval rates and a 30-40% reduction in defaults.
But mismanaged, AI can have a dark side. If the data used to train AI systems contains bias, those systems will perpetuate the discrimination. This can lead to unequal access to financial services and unjust or irresponsible credit decisions. In a study conducted by UC Berkeley, Latinx and African-American borrowers were found to pay 7.9 and 3.6 basis points more in interest for home-purchase and refinance mortgages respectively, representing $765 million in extra interest per year. What’s more, AI algorithms are often complex and difficult to understand, so it is hard for customers to challenge decisions and for regulators to enforce compliance.
ESG by design
So how do banks reconcile the ESG benefits of technology with the risks? The answer lies in how the technology is built, or more specifically, in the principle of ESG by design.
ESG by design is the concept of incorporating environmental, social, and governance factors into new technology and software features from the outset. The desired outcome is that the solution’s architecture, functions and UX enable ESG optimisation. It is underpinned by a commitment that every decision taken through the design and build phases is judged through the lens of ESG criteria and targets.
At Temenos, ESG by design is a core principle of how we build technology. Let’s unpick what that means in practice, with some examples.
- Shift-left is how we systematically embed ESG into our banking software services. It means estimating the potential carbon footprint of a new project from the start, and then working back to mitigate it at every stage. The same goes for usability, compliance, and other factors that impact ESG. Detecting and addressing issues earlier in the development process is more effective than taking remedial actions after the event, which risks both compromising the efficacy of the solution and increasing the cost and time of the development lifecycle.
- If there’s a choice to be made, banks should make it. Though ESG goals align with most banks’ commercial aspirations (i.e. less carbon equals less cost; more choice and better experiences equal more customers), the trade-offs are not binary. Banks will have varying appetites for commitment to ESG. Take scale-to-zero, which I referred to earlier. Limiting service availability and adding latency impact the customer experience and regulatory SLAs, such as payment processing speeds.
The optimum balance is not a call for us, as the technology vendor, to make. Instead we give banks the parameters and configuration options to make the choice themselves. This higher degree of control encourages banks to (a) use carbon-aware computing solutions, and (b) engage with the technology with more purpose.
- Use technology to improve technology. Humans are fallible. AI is only as good as the people that program it. Their biases become the system’s biases. But the black-box nature of many AI systems means that these biases go unnoticed. At Temenos we embed an explainable component in our AI tools (XAI). It allows us and our banking clients to understand how AI decisions have been made, and in doing so surfaces flaws that can be fixed. We extend this capability to a bank’s customers, allowing them to interrogate and challenge decisions (a simplified sketch of the idea follows this list).
- The complex supply chains in technology make ESG a collaborative effort. The work we do at Temenos to support banks with their ESG goals would be undermined if our partners didn’t share our commitment. That means working with hyperscalers and partners in our ecosystem, and opening ourselves up to third-party validation. We did just that, using an independent carbon calculation platform (GoCodeGreen) to assess our carbon efficiency. I shared the evidence earlier: a 32% carbon efficiency improvement in the run time of the latest Temenos Cloud release, and a 12% improvement in build time. These are the sort of independently verified data points that banks should be asking their technology providers to submit.
Collaboration also means being honest about what others can do better, and enabling their innovations. The Temenos Exchange has almost 120 vendors that are continually extending and improving our core solutions. These include Bud, an AI capability that drives highly personalised experiences for lending and money management; and Greenomy, that makes it easier for banks to capture sustainability data and report on it.
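For a flavour of what explainability looks like in practice, here is a generic sketch using the open-source shap library on a toy credit model. It is not Temenos’s XAI implementation, just the underlying idea: attributing an individual decision to the input features that drove it. The feature names are illustrative.

```python
import pandas as pd
import shap
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Toy stand-in for a credit-decision model; features are illustrative.
X, y = make_classification(n_samples=500, n_features=4, random_state=0)
X = pd.DataFrame(X, columns=["income", "debt_ratio", "tenure", "utilisation"])
model = GradientBoostingClassifier().fit(X, y)

# SHAP attributes each individual decision to the features that drove it.
explainer = shap.TreeExplainer(model)
applicant = X.iloc[[0]]
contributions = explainer.shap_values(applicant)

for feature, value in zip(X.columns, contributions[0]):
    print(f"{feature:>12}: {value:+.3f}")  # signed push toward approve/decline
```

A signed breakdown like this is what lets a customer, or a regulator, ask why a loan was declined and get a concrete answer.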
Conclusion
ESG by design is a holistic approach to all tenets of ESG: energy efficiency, financial inclusion, transparency and accountable governance. By working with technology partners that elevate ESG to a core design principle, banks can recognise a wide range of commercial opportunities and ensure compliance with evolving regulations. That should make ESG a core selection criterion for software vendors. Banks will want to find the evidence that their technology partners are as serious about ESG as they are, and that they have the design and build practices that bring these commitments to life.
Adapting compliance in a fragmented regulatory world

Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.
As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.
But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.
Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.
The new regulatory compliance reality
By October of this year, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply to over 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.
Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VOP), which adds a crucial layer of fraud protection for instant payments and directly aligns with regulators’ focus on instant payment security and compliance.
The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.
But why is it so difficult for teams to adapt?
The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.
At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage is leaving compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.
In this changing environment, then, the question becomes: how can institutions adapt?
Staying compliant in a shifting landscape
The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.
This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.
But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.
To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.
When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
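A minimal sketch of the triage idea: train a classifier on past analyst dispositions, then suppress alerts that score below a threshold. The features, labels and threshold here are all illustrative stand-ins, not a vendor’s actual model.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Illustrative features: e.g. rule confidence, asset criticality, alert
# frequency, time-of-day. Labels would come from past analyst dispositions.
rng = np.random.default_rng(0)
X_history = rng.random((2000, 4))
y_history = (X_history[:, 0] * X_history[:, 1] > 0.5).astype(int)  # stand-in labels

triage_model = RandomForestClassifier(random_state=0).fit(X_history, y_history)

def triage(alerts: np.ndarray, suppress_below: float = 0.2) -> list[str]:
    """Score alerts by estimated probability of being truly malicious;
    suppress the low-value noise so analysts see only what matters."""
    scores = triage_model.predict_proba(alerts)[:, 1]
    return ["escalate" if s >= suppress_below else "suppress" for s in scores]

print(triage(rng.random((5, 4))))
```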
Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.
Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanctions and watchlist screening tools, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.
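The core of watchlist screening is fuzzy name matching. A deliberately simplified sketch, using only Python’s standard library; production engines add transliteration, alias resolution and phonetic matching:

```python
from difflib import SequenceMatcher

WATCHLIST = ["Ivan Petrov", "Acme Trading FZE", "Jon Doe"]  # illustrative entries

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Flag names whose similarity to a watchlist entry exceeds the threshold."""
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), entry.lower()).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

print(screen("Ivan Petrof"))   # near-match caught despite the spelling change
print(screen("Maria Lopez"))   # clean: no hits
```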
Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.
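To see why ISO 20022’s structured format matters for screening and reporting, here is a sketch that parses a heavily simplified, illustrative message fragment. Real pacs.008 messages carry namespaces and many more mandatory elements; this only shows the principle that every field arrives machine-readable.

```python
import xml.etree.ElementTree as ET

# A heavily simplified fragment in the spirit of an ISO 20022 pacs.008 message.
MESSAGE = """
<Document>
  <CdtTrfTxInf>
    <IntrBkSttlmAmt Ccy="EUR">2500.00</IntrBkSttlmAmt>
    <Dbtr><Nm>Alpha Industries BV</Nm></Dbtr>
    <Cdtr><Nm>Beta Logistics GmbH</Nm></Cdtr>
  </CdtTrfTxInf>
</Document>
"""

root = ET.fromstring(MESSAGE)
tx = root.find("CdtTrfTxInf")
amount = tx.find("IntrBkSttlmAmt")

# Structured fields can feed screening and reporting directly, no free text.
print(amount.get("Ccy"), amount.text)                       # EUR 2500.00
print(tx.find("Dbtr/Nm").text, "->", tx.find("Cdtr/Nm").text)
```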
This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.
The future of compliance
As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.
To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.
Why Shorter SSL/TLS Certificate Lifespans Are the Perfect Wake-Up Call for CIOs

By Tim Callan, Chief Compliance Officer at Sectigo and Vice-Chair of the CA/Browser Forum
Let’s be honest: AI has been the headline act this year. It’s the rockstar of boardroom conversations and LinkedIn thought leadership. But while AI commands the spotlight, quantum computing is quietly tuning its instruments backstage. And when it steps forward, it won’t be playing backup. For CIOs, the smart move isn’t just watching the main stage — it’s preparing proactively for the moment quantum takes center stage and rewrites the rules of data protection.
Quantum computing is no longer a distant science project. NIST has already published standards for quantum-resistant algorithms and set a clear deadline: RSA and ECC, the cryptographic algorithms that protect today’s data, must be deprecated by 2030. We’re no longer talking about “forecasts”; we are talking about actual directives from government organizations to implement change. And yet, many organizations are still treating this like a future problem. The reality is that threat actors aren’t waiting. They’re collecting encrypted data now, knowing they’ll be able to decrypt it later. If we wait until quantum machines are commercially viable, we’ll be too late. The time to prepare is before the clock runs out and, unfortunately, that clock is already ticking.
For CIOs, this is an infrastructure and risk management crisis in the making. If your organization’s cryptographic infrastructure isn’t agile enough to adapt, the integrity of your digital operations and the trust they rely on could very soon be compromised.
The Quantum Threat Is Already Here
Quantum computing’s potential to disrupt global systems and the data that runs through them is not hypothetical. Attackers are already engaging in “Harvest Now, Decrypt Later” (HNDL) strategies, intercepting encrypted data today with the intent to decrypt it once quantum capabilities mature.
Recent research found that an alarming 60% of organizations are very or extremely concerned about HNDL attacks, and 59% express similar concern about “Trust Now, Forge Later” threats, where adversaries steal digitally signed documents to forge them in the future.
Despite this awareness, only 14% of organizations have conducted a full assessment of systems vulnerable to quantum attacks. Nearly half (43%) of organizations are still in a “wait and see” mode. For CIOs, this gap highlights the need for leadership: it’s not enough to know the risks exist; you must identify which systems, applications, and data flows will still be sensitive in ten or twenty years and prioritize them for PQC migration.
Crypto Agility Is a Data Leadership Imperative
Crypto agility (the ability to rapidly identify, manage, and replace cryptographic assets) is now a core competency for IT leaders to ensure business continuity, compliance, and trust. The most immediate pressure point is SSL/TLS certificates. These certificates authenticate digital identities and secure communications across data pipelines, APIs, and partner integrations.
The CA/Browser Forum has mandated a phased reduction in certificate lifespans from 398 days today to just 47 days by 2029. The first milestone arrives in March 2026, when certificates must be renewed every six months, shrinking to near-monthly by 2029.
For CIOs, this is not just an operational housekeeping issue. Every expired or mismanaged certificate is a potential data outage: application downtime, broken integrations, failed transactions and compliance violations. With fewer than 1 in 5 organizations prepared for monthly renewals, and only 5% currently fully automating their certificate management processes, most enterprises face serious continuity and trust risks.
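The first step toward automation is simply knowing when every certificate expires. A minimal monitoring sketch, assuming Python’s cryptography package (version 42 or later for the UTC accessor); a real deployment would iterate over a full endpoint inventory and trigger renewal rather than just print:

```python
import ssl
from datetime import datetime, timezone

from cryptography import x509  # requires cryptography >= 42 for *_utc accessors

def days_until_expiry(host: str, port: int = 443) -> int:
    """Fetch a host's leaf certificate and return days until it expires."""
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    remaining = cert.not_valid_after_utc - datetime.now(timezone.utc)
    return remaining.days

# With 47-day certificates, a renewal buffer of a week or two leaves little slack.
for host in ["example.com"]:  # in practice: an inventory of every endpoint
    days = days_until_expiry(host)
    if days < 14:
        print(f"RENEW NOW: {host} expires in {days} days")
    else:
        print(f"{host}: {days} days remaining")
```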
The upside? Preparing for shortened certificate lifespans directly supports quantum readiness. Ninety percent of organizations recognize the overlap between certificate agility and post-quantum cryptography preparedness. By investing in automation now, CIOs can ensure uninterrupted operations today while laying a scalable foundation for future-proof cryptographic governance.
The Strategic Imperative of PQC Migration
Migrating to quantum-safe algorithms is not a plug-and-play upgrade. It’s a full-scale transformation. Ninety-eight percent of organizations expect challenges, with top barriers including system complexity, lack of expertise, and cross-team coordination. Legacy systems (many with hardcoded cryptographic functions) make this even harder.
That’s why establishing a Center of Cryptographic Excellence (CryptoCOE) is a critical first step. A CryptoCOE centralizes governance, aligns stakeholders, and drives execution. According to Gartner, by 2028 organizations with a CryptoCOE will save 50% of costs in their PQC transition compared to those without.
For CIOs, this is a natural extension of your role. Cryptography touches every layer of enterprise infrastructure. A CryptoCOE ensures that cryptographic decisions are made with full visibility into system dependencies, risk profiles and regulatory obligations.
By championing crypto agility as an infrastructure priority, CIOs can transform PQC migration from a technical project into a strategic initiative that protects the organization’s most critical assets.
The Road Ahead
The shift to 47-day certificates is a wake-up call. It marks the end of static cryptography and the beginning of a dynamic, agile era. Organizations that embrace this change will not only avoid outages and compliance failures; they’ll also be prepared for the quantum future.
Crypto agility is both a technical capability and a leadership mandate. For CIOs, the path forward to quantum-resistant infrastructure is clear: invest in automation, build cross-functional alignment, and treat cryptographic governance as a core pillar of enterprise resilience.
The Security Talent Gap is a Red Herring: It’s Really an Automation and Context Gap

By Tom Gol, Senior Product Manager, Armis
We constantly hear about a cybersecurity staffing crisis, but perhaps the real challenge isn’t a lack of people. It might just be a critical shortage of intelligent automation and actionable context for the talented teams we already have.
The Lingering Shadow of the “Talent Gap” Narrative
It’s almost a mantra in cybersecurity circles: “There’s a massive talent gap!” Conferences echo it, reports reinforce it, and CISOs often feel it acutely. This widely accepted idea suggests we simply don’t have enough skilled professionals, leading to overworked teams, burnout, and, most critically, persistent organizational risk. The default response often becomes a relentless cycle of “buy more tools, tune more tools, and staff more teams”—a cycle that feels increasingly unsustainable and inefficient.
But what if this pervasive “talent gap” is actually a clever red herring, distracting us from a more fundamental issue? We’ve grown so accustomed to the narrative of a human deficit that we often overlook a crucial truth: current technology is already capable of significantly narrowing this very gap. My strong conviction is this: the true underlying problem isn’t a shortage of available talent, but a profound and crippling gap in intelligent automation and actionable context that prevents our existing cybersecurity professionals from operating at their full potential. What’s more, advancing on the technology side now presents a demonstrably better return on investment than simply trying to out-hire the problem. Fill that gap with smarter tech, and watch the perceived talent shortage shrink.
Misdiagnosis: When More People Isn’t the Answer
For too long, the cybersecurity industry’s knee-jerk reaction to mounting threats has been to throw more human resources at the problem. Yet, the attack surface continues its relentless expansion. Threat actors become more sophisticated. And our SOCs are constantly drowning in an unfiltered deluge of alerts. This creates an overwhelming workload that even the most seasoned experts find impossible to manage effectively, often resulting in burnout and, ironically, talent attrition rather than retention.
The issue isn’t a lack of bright minds joining the field. It’s that those brilliant minds often find themselves mired in monotonous, low-value tasks. They’re forced to operate in a thick fog of incomplete information, constantly sifting through noise. When security teams lack clarity on exactly what assets they own, how those assets connect, what their true business criticality is, and which threats are genuinely active, even the most experienced professional struggles. Their effectiveness diminishes, not from a lack of inherent skill, but from a fundamental absence of visibility and intelligent support.
Automation and AI: The True Force Multiplier for Human Talent
The real power move against the overwhelming tide of cyber threats lies not in endless recruitment, but in the intelligent application of automation and AI. Leading industry discussions increasingly highlight that the purpose of AI in cybersecurity isn’t about wholesale human replacement. Instead, it’s about augmenting our existing staff, turning them into a far more potent force. This approach fundamentally allows organizations to scale their expertise and impact without being shackled to proportional headcount increases. Let’s unpack how this transformation plays out.
Freeing Up Human Capital from the Mundane
Imagine a security analyst whose day is consumed by hours of manual investigation, enriching alerts, triaging false positives, responding to routine questionnaires, or laboriously transitioning tickets. These are precisely the kinds of non-human, deterministic, and highly repetitive tasks ripe for intelligent automation. AI agents can seamlessly take on this soul-crushing burden, liberating human analysts. They are then free to pivot towards higher-value, creative, judgment-based, and genuinely strategic work. This transforms security teams from reactive task-runners into proactive problem-solvers. Projections suggest that common SOC tasks could become significantly more cost-efficient in the coming years due to automation—a shift that’s not merely about saving money, but about amplifying human potential.
Supercharging Productivity and Experience
Modern AI, particularly multi-agent AI and generative AI, can proactively offer smart advice on configurations, predict the root causes of complex issues, and integrate effortlessly with existing automated frameworks. This empowers security professionals, making their work not just more efficient but also more engaging and less prone to drudgery.
The Indispensable Power of Context: Lowering the “Expertise Bar”
While automation tackles the sheer volume of work, context provides the vital clarity that fundamentally reduces the need for constant, deep-seated expertise in every single scenario. When security professionals have immediate, rich, and actionable context about a vulnerability or an emerging threat, the path to intelligent prioritization and decisive action becomes remarkably clearer.
Consider the profound difference this context makes (a scoring sketch follows this list):
- Asset Context: Knowing not just that a vulnerability exists, but precisely which specific device it resides on—is it a critical production server, or an isolated, deprecated test machine?
- Business Application Context: Understanding the exact business function tied to that asset, and the tangible financial or operational impact if it were to be compromised.
- Network Context: Seeing the asset’s intricate network connections, its precise exposure level, and every potential path an attacker could take for lateral movement.
- Compensating Controls Context: Having a clear, real-time picture of which existing security controls (like network segmentation, EDRs, or Intrusion Prevention Systems) are actually in place and effectively working to mitigate the vulnerability’s risk.
- Threat Intelligence Context: Possessing real-time, “active exploit” intelligence that doesn’t just theorize, but tells you if a vulnerability is actively being exploited in the wild, or is part of a known attack campaign targeting your industry.
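Here is the scoring sketch promised above: a toy prioritisation function that blends these context dimensions into one ranking. The weights are purely illustrative assumptions; a real engine would tune them per organisation.

```python
from dataclasses import dataclass

@dataclass
class Finding:
    cvss: float               # base severity of the vulnerability
    asset_critical: bool      # asset context: production vs deprecated test box
    business_impact: int      # business context: 1 (low) .. 5 (revenue-critical)
    internet_exposed: bool    # network context: reachable paths for an attacker
    compensated: bool         # compensating controls already mitigate the risk
    actively_exploited: bool  # threat intel: exploitation seen in the wild

def priority(f: Finding) -> float:
    """Blend the context dimensions into one ranking score (weights illustrative)."""
    score = f.cvss
    score *= 1.5 if f.asset_critical else 0.6
    score *= 1 + f.business_impact / 10
    score *= 1.4 if f.internet_exposed else 0.8
    score *= 0.5 if f.compensated else 1.0
    score *= 2.0 if f.actively_exploited else 1.0
    return round(score, 1)

noisy = Finding(9.8, False, 1, False, True, False)  # "critical" CVE on a test box
quiet = Finding(6.5, True, 5, True, False, True)    # modest CVE, live exploitation
print(priority(noisy), priority(quiet))  # context inverts the naive CVSS ranking
```

The point of the example is the inversion: the headline-grabbing CVSS 9.8 on an isolated test machine ranks far below a modest vulnerability on an exposed, revenue-critical asset under active exploitation.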
With this deep, multidimensional context, a significant portion of the exposure management workload can be automated. Crucially, for the tasks that still require human intervention, the “expertise bar” is dramatically lowered. My take is that for a vast majority of cases—perhaps 90% of scenarios—a security professional who isn’t a battle-hardened, 20-year veteran can still make incredibly effective decisions and significantly improve an organization’s cyber posture. This is because they are presented with clear, actionable context that naturally guides prioritization and even recommends precise actions. The result? A drastic reduction in alert noise, faster detection and response times, and a palpable easing of the burden on the entire security team.
Navigating the Human Element: Skills Evolution and Burnout
This powerful shift towards automation and AI naturally brings legitimate questions about skills erosion. Some experts prudently point out a valid risk: a significant portion of SOC teams might experience a regression in foundational analysis skills due to an over-reliance on automation. This underscores a critical truth: we must keep humans firmly in the loop. For highly autonomous SOCs, a “human-on-the-loop” approach is recommended, reserving human intervention for complex edge cases and critical exceptions.
CISOs, therefore, face an evolving mandate:
- Future-Proofing Skills: It’s less about filling historical roles and more about nurturing new competencies like prompt engineering, sophisticated AI oversight, advanced critical thinking, and strategic problem-solving.
- Combating Burnout: Beyond just tools, effective talent retention demands proactive measures to address burnout. This includes intelligent workload monitoring, smart task delegation, and genuine wellness initiatives. The ultimate goal isn’t just to fill empty seats; it’s to ensure that the people in those seats are effective, sustainable, and thriving.
A New Mindset for CISOs: Embracing the “Chief Innovation Security Officer” Role
The ongoing “talent gap” discussion should be a catalyst for CISOs to adopt a fundamentally new mindset. Instead of simply focusing on cost-cutting or the perpetual struggle of recruitment, they must evolve into “Chief Innovation Security Officers.” This means daring to rethink how work gets done, leveraging AI and automation not merely as tactical tools but as strategic enablers for scaling cybersecurity capabilities and unlocking the full potential of their existing talent. This strategic investment in technology, driven by an understanding of context, offers a superior ROI in bridging the cybersecurity “gap” compared to the increasingly futile effort to simply hire more people.
Building robust AI governance frameworks and achieving crystal-clear visibility into existing AI implementations and technical debt are crucial foundational steps. Ultimately, solving the perceived talent gap isn’t about endlessly hiring more people into an unsustainable system. It’s about empowering the talented individuals we do have—making them more efficient, more effective, and more strategically focused—through the intelligent application of automation and unparalleled context. It’s time to stop chasing a phantom gap and start truly empowering our digital defenders.