
A bank’s ESG record depends on how its technology is built

Source: Finance Derivative

By Tony Coleman, CTO, Temenos

ESG (environmental, social and corporate governance) has become mission-critical for banks, from meeting regulatory obligations to aligning with customer values in order to win market share.

Many banks have turned to technology to manage their ESG position. But technology is not a panacea. Deployed carelessly, it can itself cause banks to fall short of their ESG targets.

Technology that greens

Let’s look at the environmental pillar. Run on-premises or in a private datacentre, technology can be a big source of carbon emissions. But deployed with the right infrastructure partners, it can enable banks to reduce their carbon footprint. Cloud is the best example of this. Banks that outsource their computing infrastructure to the public cloud hyperscalers can benefit from their economies of scale and energy-efficient build principles.

The geographical spread and scale of these datacentres allows for carbon-aware computing, which involves shifting compute to times and places where the carbon intensity of the grid results in lower carbon emissions. One study of Microsoft’s cloud infrastructure concluded its datacentres emit 98% less carbon than traditional enterprise IT sites. These hyperscalers have a focussed mindset and the deep pockets to match. The new Graviton3 processors that AWS is now installing in its public datacentres, which it claims use up to 60% less energy than the standard x86 models in wide circulation, are an example of the progress that only a hyperscaler can achieve.
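To make the idea concrete, below is a minimal sketch of carbon-aware scheduling in Python. The regions, hours and intensity figures are invented for illustration; real schedulers pull live grid data from services such as WattTime or Electricity Maps.

```python
# Hypothetical forecast of grid carbon intensity (gCO2e/kWh) by region and
# hour. Real carbon-aware schedulers pull live figures from services such
# as WattTime or Electricity Maps.
FORECAST = {
    "eu-north":   {0: 35, 6: 40, 12: 55, 18: 45},    # hydro-heavy grid
    "us-east":    {0: 380, 6: 410, 12: 350, 18: 420},
    "asia-south": {0: 620, 6: 650, 12: 590, 18: 610},
}

def greenest_slot(forecast):
    """Return the (region, hour, intensity) with the lowest carbon intensity."""
    return min(
        ((region, hour, g)
         for region, hours in forecast.items()
         for hour, g in hours.items()),
        key=lambda item: item[2],
    )

region, hour, intensity = greenest_slot(FORECAST)
print(f"Run the batch job in {region} at {hour:02d}:00 (~{intensity} gCO2e/kWh)")
```

Deferrable workloads, such as overnight batch processing or model training, are the natural candidates; latency-sensitive customer traffic usually cannot move.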

The green benefits ‘of the cloud’ are enhanced by software purposefully built to run ‘in the cloud’. Software vendors that are committed to decarbonising their solutions in the build phase pass those wins down the supply chain to banks. For example, the latest version of the Temenos Banking Cloud was built with a 12% improvement in carbon efficiency. How the software operates can have an even more profound benefit for banks. For example, banking software that runs ‘scale-to-zero’ protocols will automatically shut down or scale down availability according to demand for its service. This is one factor that has contributed to a 32% carbon efficiency improvement in the run time of the latest Temenos Banking Cloud release.
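Scale-to-zero is usually provided by the platform (Knative and most serverless runtimes offer it out of the box), but the control policy itself is simple enough to sketch. The thresholds below are illustrative assumptions, not any vendor’s actual parameters.

```python
IDLE_SECONDS = 300          # assumed policy: scale to zero after 5 idle minutes
REQUESTS_PER_REPLICA = 100  # assumed target load per running instance

def desired_replicas(request_rate, seconds_idle):
    """Simplified scale-to-zero policy: shut down entirely when idle,
    otherwise scale replicas in proportion to demand."""
    if request_rate == 0 and seconds_idle >= IDLE_SECONDS:
        return 0
    return max(1, -(-request_rate // REQUESTS_PER_REPLICA))  # ceiling division

print(desired_replicas(0, 600))   # -> 0: idle long enough, release all capacity
print(desired_replicas(250, 0))   # -> 3: scale with demand
print(desired_replicas(40, 0))    # -> 1: keep one replica while active
```

A platform autoscaler would run this decision in a loop against live request metrics.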

Collecting this evidence is not simply an internal tracking exercise. Regulations are reaching a point where publishing data against ESG targets will be legally mandated. In Europe the ECB and the Bank of England have launched climate risk stress tests to assess how prepared banks are for dealing with the shocks from climate risk. Meanwhile, initiatives like the UN-convened Net-Zero Banking Alliance (representing over 40% of global banking assets), the Glasgow Financial Alliance for Net Zero and the Principles for Responsible Banking add to the clamour for banks to evidence their progress. Tracking ‘Scope 3’ emissions, which cover all indirect emissions from sources the bank does not own or control across its value chain, is the next phase. Recognising this, Temenos has developed a carbon emissions calculator, which gives our customers deeper insight into carbon emissions data associated with their consumption of Temenos Banking Cloud services.
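The arithmetic behind such a calculator can be sketched simply. The PUE and grid-intensity values below are placeholders rather than Temenos or GoCodeGreen figures; a production calculator uses measured, provider-specific data.

```python
def cloud_scope3_kgco2e(kwh_consumed, pue=1.2, grid_intensity_g_per_kwh=350):
    """Estimate Scope 3 emissions (kg CO2e) attributable to cloud consumption.

    kwh_consumed: energy metered to the bank's workloads
    pue: power usage effectiveness of the datacentre (placeholder value)
    grid_intensity_g_per_kwh: carbon intensity of the local grid (placeholder)
    """
    total_kwh = kwh_consumed * pue  # include cooling and facility overhead
    return total_kwh * grid_intensity_g_per_kwh / 1000

# e.g. 12,000 kWh of metered usage in a month:
print(f"{cloud_scope3_kgco2e(12_000):,.0f} kg CO2e")
```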

The same concept can be extended to a bank’s customers, with carbon calculators and automated offsetting schemes that help people build towards their personal environmental goals. Doing so brings a bank’s green credentials into the public sphere, turning environmental initiatives into commercial opportunity.

(Box-out)

Flowe, a cloud-enabled digital bank built on green principles, launched in June 2020. It is the first bank in Italy to be certified as a B-Corp and has been able to maintain its overall carbon footprint close to zero, saving 90.81%–96.06% in MTCO2e emissions compared to the on-premises alternative. Within six months of launch, 600,000 mainly young Italians had become customers, at one point onboarding 19 new customers per second. This rapid launch and growth was only possible with the agility and scalability of cloud.

Technology that reaches

Cloud also enables financial inclusion, a key tenet of ESG ambitions. Today, anyone with a mobile phone and internet connection can access banking services. With elastic scalability and software automation, banks have an almost limitless capacity to serve more customers. And they might not be where you think; 4.5% of US households (approximately 5.9 million) were “unbanked” in 2021. In the past, banks would have seen them as unprofitable targets. But as cloud and the associated automations cut go-to-market and operational costs, the commercial case for inclusion becomes stronger. 

Embedded finance gives banks another avenue of reach. Via simple APIs, banks can provide their solutions to non-financial businesses. This ready-made audience might otherwise take years to reach through a bank’s own marketing and sales channels. The embedded finance market is set to be worth $183 billion globally in 2027. That can be seen as a proxy for greater financial inclusion.
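In practice, embedding finance is often a single authenticated API call from the partner’s own checkout flow. The endpoint, fields and response below are hypothetical, invented purely to show the shape of such an integration.

```python
import requests

# Hypothetical embedded-finance call: a retailer's checkout requests a
# point-of-sale loan from a partner bank. Endpoint, fields and auth scheme
# are invented for illustration; every provider's API differs.
resp = requests.post(
    "https://api.examplebank.com/v1/bnpl/applications",
    headers={"Authorization": "Bearer <api-key>"},
    json={
        "merchant_id": "shop-4711",
        "amount": {"value": "499.00", "currency": "EUR"},
        "term_months": 6,
        "customer_ref": "cust-1029",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())  # e.g. {"application_id": "...", "status": "approved"}
```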

AI offers another opportunity to improve financial inclusion. Armed with AI, banks can deliver highly personalised products and experiences for customers. People can be directed to the most appropriate investments, including funds that promote sustainability, and loans can be made with a better understanding of the applicant’s ability to repay. Zest AI (previously ZestFinance), a leading provider of AI-powered credit underwriting, claims that banks using its software see a 20–30% increase in credit approval rates and a 30–40% reduction in defaults.

But mismanaged, AI can have a dark side. If the data used to train AI systems is biased, the systems will perpetuate that discrimination. This can lead to unequal access to financial services and unjust or irresponsible credit decisions. In a study conducted by UC Berkeley, Latino and African-American borrowers were found to pay 7.9 and 3.6 basis points more in interest for home-purchase and refinance mortgages respectively, representing $765 million in extra interest per year. What’s more, AI algorithms are often complex and difficult to understand, so it is hard for customers to challenge decisions and for regulators to enforce compliance.
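Some of this risk can be caught with simple audits. The sketch below applies the ‘four-fifths rule’ used in US fair-lending analysis to synthetic decisions: if one group’s approval rate falls below 80% of another’s, the model is flagged for review. Real fairness audits go much further than this single metric.

```python
from collections import Counter

# Synthetic lending decisions as (group, approved) pairs, invented for
# illustration. A basic fairness smoke test compares approval rates across
# groups; the four-fifths rule flags a ratio below 0.8 as potential
# adverse impact.
decisions = ([("A", True)] * 80 + [("A", False)] * 20
             + [("B", True)] * 55 + [("B", False)] * 45)

approved = Counter(group for group, ok in decisions if ok)
total = Counter(group for group, _ in decisions)
rates = {group: approved[group] / total[group] for group in total}

ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparate-impact ratio = {ratio:.2f}",
      "FLAG for review" if ratio < 0.8 else "ok")
```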

ESG by design

So how do banks reconcile the ESG benefits of technology with the risks? The answer lies in how the technology is built – or, more specifically, in the principle of ESG by design.

ESG by design is the concept of incorporating environmental, social and governance factors into new technology and software features from the outset. The desired outcome is that the solution’s architecture, functions and UX enable ESG optimisation. It is underpinned by a commitment that every decision taken through the design and build phases is judged through the lens of ESG criteria and targets.

At Temenos, ESG by design is a core principle of how we build technology. Let’s unpick what that means in practice, with some examples.

  • Shift-left is how we systematically embed ESG into our banking software services. It means estimating the potential carbon footprint of a new project from the start, and then working back to mitigate it at every stage. The same goes for usability, compliance, and other factors that impact ESG. Detecting and addressing issues earlier in the development process is more effective than taking remedial actions after the event, which risks both compromising the efficacy of the solution and increasing the cost and time of the development lifecycle. 
  • If there’s a choice to be made, banks should make it. Though ESG goals align with most banks’ commercial aspirations (i.e. less carbon means lower costs; more choice and better experiences mean more customers), the trade-offs are not binary. Banks will have varying appetites for commitment to ESG. Take scale-to-zero, which I referred to earlier. Limiting service availability and adding latency affects the customer experience and regulatory SLAs, such as payment processing speeds.

The optimum balance is not a call for us, as the technology vendor, to make. Instead, we give banks the parameters and configuration options to make the choice themselves. This higher degree of control encourages banks to (a) use carbon-aware computing solutions, and (b) engage with the technology with more purpose.

  • Use technology to improve technology. Humans are fallible. AI is only as good as the people who program it. Their biases become the system’s biases. But the black-box nature of many AI systems means that these biases go unnoticed. At Temenos we embed an explainable AI (XAI) component in our tools. It allows us and our banking clients to understand how AI decisions have been made, and in doing so surfaces flaws that can be fixed. We extend this capability to a bank’s customers, allowing them to interrogate and challenge decisions (see the sketch after this list).
  • The complex supply chains in technology make ESG a collaborative effort. The work we do at Temenos to support banks with their ESG goals would be undermined if our partners didn’t share the same commitment. That means working with hyperscalers and partners in our ecosystem, and opening ourselves up to third-party validation. We did just that, using an independent carbon calculation platform (GoCodeGreen) to assess our carbon efficiency. I shared the evidence earlier: a 32% carbon efficiency improvement in the run time of the latest Temenos Banking Cloud release, and a 12% improvement in build time. These are the sort of independently verified data points that banks should be asking their technology providers to submit.

Collaboration also means being honest about what others can do better, and enabling their innovations. The Temenos Exchange has almost 120 vendors that are continually extending and improving our core solutions. These include Bud, an AI capability that drives highly personalised experiences for lending and money management; and Greenomy, that makes it easier for banks to capture sustainability data and report on it.
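Temenos does not publish the internals of its XAI component, but the general idea can be illustrated with a linear credit-scoring model, where each feature’s contribution to a decision can be read off directly. The features, weights and means below are invented; non-linear models need techniques such as SHAP to produce comparable breakdowns.

```python
# Illustrative explainability for a linear credit model: a feature's
# contribution is its weight times the applicant's deviation from the
# population mean. All numbers are invented for this sketch.
WEIGHTS = {"income_k": 0.04, "debt_ratio": -2.5, "missed_payments": -0.8}
MEANS   = {"income_k": 55.0, "debt_ratio": 0.30, "missed_payments": 0.4}

def explain(applicant):
    contribs = {f: WEIGHTS[f] * (applicant[f] - MEANS[f]) for f in WEIGHTS}
    for feature, c in sorted(contribs.items(), key=lambda kv: abs(kv[1]),
                             reverse=True):
        print(f"{feature:>16}: {c:+.2f}")
    return sum(contribs.values())

delta = explain({"income_k": 42.0, "debt_ratio": 0.55, "missed_payments": 2})
print(f"net effect vs average applicant: {delta:+.2f}")
```

A customer-facing version of the same breakdown is what lets applicants see, and challenge, the factors that drove a decision.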

Conclusion

ESG by design is a holistic approach to all tenets of ESG: energy efficiency, financial inclusion, transparency and accountable governance. By working with technology partners that elevate ESG to a core design principle, banks can capture a wide range of commercial opportunities and ensure compliance with evolving regulations. That should make ESG a core selection criterion for software vendors. Banks will want evidence that their technology partners are as serious about ESG as they are, and that they have the design and build practices to bring that commitment to life.


Why Resilience Is Replacing Prevention as the Defining Cybersecurity Strategy

By Manuel Sanchez, Information Security and Compliance Specialist, iManage

For decades, cybersecurity centered around prevention. Build the right walls around your perimeter, deploy the right tools, train your people not to click the wrong links, and you could keep the bad actors out.

Today, the question driving security strategy is no longer “how do we stop a breach?” but “how do we survive one?” It is a subtle but profound shift in philosophy, and it is reshaping everything from how IT and Security leaders structure their teams to how they select their vendors and deploy AI.

Rehearsing for the worst

The practical expression of this shift is visible in how security teams are being restructured. Organisations are establishing dedicated disaster recovery teams – not to prevent incidents, but to contain and recover from them when they occur. These teams maintain detailed, regularly updated playbooks covering everything from backup restoration to stakeholder communications, with roles pre-assigned and procedures rehearsed well in advance.

In many ways, this mirrors the logic behind disaster drills: fire alarms matter, but knowing the evacuation routes and the post-incident recovery plan determines how well an organisation survives. Critically, responsibility cannot rest with the CISO alone. Business continuity after a cyber incident is a whole-company challenge – which means every core part of the organisation is involved to sustain critical business operations.

Governance in the gray areas

Running alongside this shift is a governance crisis that is easy to underestimate until it becomes a serious risk. As organisations adopt more applications across more vendors and hosting services, the shared responsibility model that was supposed to keep cloud accountability clear has become increasingly difficult to enforce.

The sheer volume of cloud applications in use at any given enterprise is too vast for consistent governance under current approaches – and bad actors have become skilled at identifying exactly where vendor responsibility ends and customer accountability begins, then operating precisely in that “gray area”. Being aware of this risk and putting preventative measures in place is important, but it is critical to recognise the role these cloud applications play and the impact on key business operations if they were compromised.

Meanwhile, data volumes continue to grow exponentially, and unstructured data continues to accumulate in the background across many digital systems. Why does this matter? If you don’t know what data you have, where it is stored, who has access to it and, most importantly, how it is protected – whether by onsite or cloud backup – recovery becomes much harder.

AI agents on the rise – and with them new risks

Although the focus of this article is on resilience, prevention must still remain an essential part of your defences. On that front, the accelerating adoption of autonomous AI in cyber defence tasks is reshaping security operations as visibly as anything else happening in the field right now. The volume, speed, and sophistication of modern threats have simply outpaced what human analysts can manage in real time.

The shift is toward AI that doesn’t just flag anomalies for human review, but actively detects, analyses, and neutralises threats as they emerge, even using predictive models to anticipate attacks before they fully materialise. This frees human experts to focus on strategic decisions and complex defence work rather than spending their days firefighting.

Autonomous AI does, however, introduce risks of its own. When AI agents operate across systems – accessing sensitive repositories, triggering actions, sharing data – they expand the attack surface in ways that aren’t always immediately visible.

Managing the digital identities of AI agents, much like managing employee access credentials, is becoming a critical security discipline. Accordingly, comprehensive traceability frameworks that log every action an agent takes are no longer optional; they are the foundation of responsible AI deployment in any security context.
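A minimal version of such a traceability framework can be sketched as a hash chain: each log entry embeds the hash of the previous one, so any retrospective edit breaks the chain. This illustrates the principle only; production systems add signing, secure storage and retention policies.

```python
import hashlib
import json
import time

class AgentAuditLog:
    """Append-only, tamper-evident log of AI agent actions (illustrative)."""

    def __init__(self):
        self.entries = []
        self._last_hash = "0" * 64  # genesis value

    def record(self, agent_id, action, resource):
        entry = {
            "ts": time.time(),
            "agent": agent_id,
            "action": action,
            "resource": resource,
            "prev": self._last_hash,  # chain to the previous entry
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

log = AgentAuditLog()
log.record("triage-agent-01", "read", "vault://incident-reports/2024")
log.record("triage-agent-01", "quarantine", "host:ws-0231")
print(json.dumps(log.entries, indent=2))
```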

The supply chain wake-up call

The case for moving from a “prevention” mindset to a “resilience” one is further bolstered by recent high-profile breaches via compromised managed service providers, which have forced a fundamental reset in how organisations evaluate their vendors.

The era of cost-first selection is over. Security credentials, demonstrated through continuous and verifiable evidence, are now non-negotiable for any provider hoping to retain enterprise clients – and what organisations are demanding goes well beyond point-in-time audits. They want real-time visibility into every third-party integration, every software update, and every vendor interaction – including the cloud services the vendors themselves use.

“Trust but verify” has become the operational standard, and providers who cannot demonstrate validated controls and live monitoring are finding themselves out of contention. It is a structural shift that will reshape the vendor landscape considerably — and it is already underway.

A new era demands a new approach

In the end, prevention still matters, but resilience – instilled via the key focus areas above – is what turns disruptions into survivable events rather than existential crises. The organisations that are honest about the limits of prevention and embrace the shift towards resilience won’t just better withstand the next wave of attacks – they’ll be differentiating themselves from competitors still clinging to yesterday’s playbook.


Adapting compliance in a fragmented regulatory world

Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.

As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.

But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.

Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.

The new regulatory compliance reality

By October 2025, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply to over 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.

Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VOP), which adds a crucial layer of fraud protection for instant payments and directly aligns with the regulatory focus on instant payment security and compliance.

The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.

But why is it so difficult for teams to adapt?

The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.

At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage is leaving compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.

In this changing environment, the question becomes: how can institutions adapt?

Staying compliant in a shifting landscape

The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.

This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.

But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.

To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.

When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
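The triage step can be illustrated with a toy scorer that weights a few risk signals and suppresses alerts below a threshold. The signals and weights here are invented; in practice they are learned from historical analyst dispositions.

```python
# Toy alert-triage scorer: weight a handful of risk signals and suppress
# low scorers so investigators see only the top of the queue. Weights and
# threshold are invented; real systems learn them from labelled outcomes.
WEIGHTS = {"amount_zscore": 0.5, "new_counterparty": 1.2,
           "high_risk_corridor": 1.5, "structuring_pattern": 2.0}
SUPPRESS_BELOW = 1.0

def triage(alerts):
    scored = [(sum(WEIGHTS[s] * v for s, v in a["signals"].items()), a)
              for a in alerts]
    kept = [(score, a) for score, a in scored if score >= SUPPRESS_BELOW]
    return sorted(kept, key=lambda pair: pair[0], reverse=True)

alerts = [
    {"id": "A-1", "signals": {"amount_zscore": 3.1, "new_counterparty": 1}},
    {"id": "A-2", "signals": {"amount_zscore": 0.4}},  # suppressed: low value
    {"id": "A-3", "signals": {"structuring_pattern": 1, "high_risk_corridor": 1}},
]
for score, alert in triage(alerts):
    print(alert["id"], f"risk={score:.2f}")
```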

Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.

Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanction screening tools and watchlist screening, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.
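At its core, watchlist screening is name matching under uncertainty. The sketch below uses simple fuzzy string similarity against a made-up watchlist; real screening engines add transliteration, alias handling, phonetic matching and live list-update feeds.

```python
from difflib import SequenceMatcher

# Made-up watchlist; production systems screen against official sanctions
# lists (OFAC, EU, UN) that change in response to geopolitical events.
WATCHLIST = ["Ivan Petrov", "Acme Trading FZE", "Global Star Shipping"]

def screen(name, threshold=0.85):
    """Return (listed_name, similarity) pairs that exceed the threshold."""
    hits = []
    for listed in WATCHLIST:
        score = SequenceMatcher(None, name.lower(), listed.lower()).ratio()
        if score >= threshold:
            hits.append((listed, round(score, 2)))
    return hits

print(screen("Ivan Petrov"))          # exact hit
print(screen("A.C.M.E Trading FZE"))  # near match despite punctuation noise
print(screen("Jane Doe"))             # no hit
```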

Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.
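The value of the structured format is easy to see in code. Below is a heavily simplified fragment in the spirit of a pacs.008 credit transfer, with namespaces and most mandatory elements omitted: because debtor and creditor names sit in dedicated fields, screening can target them precisely instead of scraping free text.

```python
import xml.etree.ElementTree as ET

# Heavily simplified, illustration-only fragment in the spirit of an
# ISO 20022 pacs.008 credit transfer (real messages carry namespaces and
# many more mandatory elements).
MSG = """
<FIToFICstmrCdtTrf>
  <CdtTrfTxInf>
    <IntrBkSttlmAmt Ccy="EUR">2500.00</IntrBkSttlmAmt>
    <Dbtr><Nm>Alpha Imports GmbH</Nm></Dbtr>
    <Cdtr><Nm>Beta Export Ltd</Nm></Cdtr>
  </CdtTrfTxInf>
</FIToFICstmrCdtTrf>
"""

tx = ET.fromstring(MSG).find("CdtTrfTxInf")
amount = tx.find("IntrBkSttlmAmt")
print(tx.findtext("Dbtr/Nm"), "->", tx.findtext("Cdtr/Nm"),
      amount.text, amount.get("Ccy"))  # structured fields, ready to screen
```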

This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.

The future of compliance

As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.

To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.


Why Shorter SSL/TLS Certificate Lifespans Are the Perfect Wake-Up Call for CIOs

By Tim Callan, Chief Compliance Officer at Sectigo and Vice-Chair of the CA/Browser Forum

Let’s be honest: AI has been the headline act this year. It’s the rockstar of boardroom conversations and LinkedIn thought leadership. But while AI commands the spotlight, quantum computing is quietly tuning its instruments backstage. And when it steps forward, it won’t be playing backup. For CIOs, the smart move isn’t just watching the main stage — it’s preparing proactively for the moment quantum takes center stage and rewrites the rules of data protection.


Quantum computing is no longer a distant science project. NIST has already published standards for quantum-resistant algorithms and set a clear deadline: RSA and ECC, the cryptographic algorithms that protect today’s data, must be deprecated by 2030. We’re no longer talking about “forecasts”; we are talking about actual directives from government organizations to implement change. And yet, many organizations are still treating this like a future problem. The reality is that threat actors aren’t waiting. They’re collecting encrypted data now, knowing they’ll be able to decrypt it later. If we wait until quantum machines are commercially viable, we’ll be too late. The time to prepare is before the clock runs out and, unfortunately, that clock is already ticking.

For CIOs, this is an infrastructure and risk management crisis in the making. If your organization’s cryptographic infrastructure isn’t agile enough to adapt, the integrity of your digital operations and the trust they rely on could very soon be compromised.

The Quantum Threat Is Already Here

Quantum computing’s potential to disrupt global systems and the data that runs through them is not hypothetical. Attackers are already engaging in “Harvest Now, Decrypt Later” (HNDL) strategies, intercepting encrypted data today with the intent to decrypt it once quantum capabilities mature.

Recent research found that an alarming 60% of organizations are very or extremely concerned about HNDL attacks, and 59% express similar concern about “Trust Now, Forge Later” threats, where adversaries steal digitally signed documents to forge them in the future.

Despite this awareness, only 14% of organizations have conducted a full assessment of systems vulnerable to quantum attacks, and 43% are still in “wait and see” mode. For CIOs, this gap highlights the need for leadership: it’s not enough to know the risks exist; you must identify which systems, applications, and data flows will still be sensitive in ten or twenty years and prioritize them for post-quantum cryptography (PQC) migration.

Crypto Agility Is a Data Leadership Imperative

Crypto agility (the ability to rapidly identify, manage, and replace cryptographic assets) is now a core competency for IT leaders to ensure business continuity, compliance, and trust. The most immediate pressure point is SSL/TLS certificates. These certificates authenticate digital identities and secure communications across data pipelines, APIs, and partner integrations.

The CA/Browser Forum has mandated a phased reduction in certificate lifespans from 398 days today to just 47 days by 2029. The first milestone arrives in March 2026, when certificates must be renewed every six months, shrinking to near-monthly by 2029.

For CIOs, it’s not just an operational housekeeping issue. Every expired or mismanaged certificate is a potential data outage. That means application downtime, broken integrations, failed transactions and compliance violations. With fewer than 1 in 5 organizations prepared for monthly renewals, and only 5% currently automating their certificate management processes in full, most enterprises face serious continuity and trust risks.
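The first building block of that automation is simply knowing when certificates expire. A minimal probe using only the Python standard library is sketched below; a real automation stack (for example, ACME-based renewal) would act on the signal rather than just print it.

```python
import socket
import ssl
import time

def days_until_expiry(host, port=443):
    """Fetch a host's TLS certificate and return days until it expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = ssl.cert_time_to_seconds(cert["notAfter"])
    return int((expires - time.time()) // 86400)

for host in ["example.com"]:  # in practice: the full certificate inventory
    days = days_until_expiry(host)
    print(f"{host}: {days} days left", "RENEW SOON" if days < 14 else "ok")
```

At 47-day lifespans, a renewal window this tight leaves no room for manual tracking of a large certificate estate.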

The upside? Preparing for shortened certificate lifespans directly supports quantum readiness. Ninety percent of organizations recognize the overlap between certificate agility and post-quantum cryptography preparedness. By investing in automation now, CIOs can ensure uninterrupted operations today while laying a scalable foundation for future-proof cryptographic governance.

The Strategic Imperative of PQC Migration

Migrating to quantum-safe algorithms is not a plug-and-play upgrade. It’s a full-scale transformation. Ninety-eight percent of organizations expect challenges, with top barriers including system complexity, lack of expertise, and cross-team coordination. Legacy systems (many with hardcoded cryptographic functions) make this even harder.

That’s why establishing a Center of Cryptographic Excellence (CryptoCOE) is a critical first step. A CryptoCOE centralizes governance, aligns stakeholders, and drives execution. According to Gartner, by 2028 organizations with a CryptoCOE will save 50% of costs in their PQC transition compared to those without.

For CIOs, this is a natural extension of your role. Cryptography touches every layer of enterprise infrastructure. A CryptoCOE ensures that cryptographic decisions are made with full visibility into system dependencies, risk profiles and regulatory obligations.

By championing crypto agility as an infrastructure priority, CIOs can transform PQC migration from a technical project into a strategic initiative that protects the organization’s most critical assets.

The Road Ahead

The shift to 47-day certificates is a wake-up call. It marks the end of static cryptography and the beginning of a dynamic, agile era. Organizations that embrace this change will not only avoid outages and compliance failures; they’ll also be prepared for the quantum future.

Crypto agility is both a technical capability and a leadership mandate. For CIOs, the path forward to quantum-resistant infrastructure is clear: invest in automation, build cross-functional alignment, and treat cryptographic governance as a core pillar of enterprise resilience.
