Business
A New Generation of Nuclear Reactors Could Hold the Key to a Green Future

Source: Time
On a conference-room whiteboard in the heart of Silicon Valley, Jacob DeWitte sketches his startup’s first product. In red marker, it looks like a beer can in a Koozie, stuck with a crazy straw. In real life, it will be about the size of a hot tub, and made from an array of exotic materials, like zirconium and uranium. Under carefully controlled conditions, they will interact to produce heat, which in turn will make electricity—1.5 megawatts’ worth, enough to power a neighborhood or a factory. DeWitte’s little power plant will run for a decade without refueling and, amazingly, will emit no carbon. “It’s a metallic thermal battery,” he says, coyly. But more often DeWitte calls it by another name: a nuclear reactor.
Fission isn’t for the faint of heart. Building a working reactor—even a very small one—requires precise and painstaking efforts of both engineering and paper pushing. Regulations are understandably exhaustive. Fuel is hard to come by—they don’t sell uranium at the Gas-N-Sip. But DeWitte plans to flip the switch on his first reactor around 2023, a mere decade after co-founding his company, Oklo. After that, they want to do for neighborhood nukes what Tesla has done for electric cars: use a niche and expensive first version as a stepping stone toward cheaper, bigger, higher-volume products. In Oklo’s case, that means starting with a “microreactor” designed for remote communities, like Alaskan villages, currently dependent on diesel fuel trucked, barged or even flown in, at an exorbitant expense. Then building more and incrementally larger reactors until their zero-carbon energy source might meaningfully contribute to the global effort to reduce fossil-fuel emissions.
At global climate summits, in the corridors of Congress and at statehouses around the U.S., nuclear power has become the contentious keystone of carbon reduction plans. Everyone knows they need it. But no one is really sure they want it, given its history of accidents. Or even if they can get it in time to reach urgent climate goals, given how long it takes to build. Oklo is one of a growing handful of companies working to solve those problems by putting reactors inside safer, easier-to-build and smaller packages. None of them are quite ready to scale to market-level production, but given the investments being made into the technology right now, along with an increasing realization that we won’t be able to shift away from fossil fuels without nuclear power, it’s a good bet that at least one of them becomes a game changer.
If existing plants are the energy equivalent of a 2-liter soda bottle, with giant, 1,000-megawatt-plus reactors, Oklo’s strategy is to make reactors by the can. The per-megawatt construction costs might be higher, at least at first. But producing units in a factory would give the company a chance to improve its processes and to lower costs. Oklo would pioneer a new model. Nuclear plants need no longer be bet-the-company big, even for giant utilities. Venture capitalists can get behind the potential to scale to a global market. And climate hawks should fawn over a zero-carbon energy option that complements burgeoning supplies of wind and solar power. Unlike today’s plants, which run most efficiently at full blast, making it challenging for them to adapt to a grid increasingly powered by variable sources (not every day is sunny, or windy), the next generation of nuclear technology wants to be more flexible, able to respond quickly to ups and downs in supply and demand.
Engineering these innovations is hard. Oklo’s 30 employees are busy untangling the knots of safety and complexity that sent the cost of building nuclear plants to the stratosphere and all but halted their construction in the U.S. “If this technology was brand-new—like if fission was a recent breakthrough out of a lab, 10 or 15 years ago—we’d be talking about building our 30th reactor,” DeWitte says.
But fission is an old, and fraught, technology, and utility companies are scrambling now to keep their existing gargantuan nuclear plants open. Economically, they struggle to compete with cheap natural gas, along with wind and solar, often subsidized by governments. Yet climate-focused nations like France and the U.K. that had planned to phase out nuclear are instead doubling down. (In October, French President Emmanuel Macron backed off plans to close 14 reactors, and in November, he announced the country would instead start building new ones.) At the U.N. climate summit in Glasgow, the U.S. announced its support for Poland, Kenya, Ukraine, Brazil, Romania and Indonesia to develop their own new nuclear plants—while European negotiators assured that nuclear energy counts as “green.” All the while, Democrats and Republicans are (to everyone’s surprise) often aligned on nuclear’s benefits—and, in many cases, putting their powers of the purse behind it, both to keep old plants open in the U.S. and speed up new technologies domestically and overseas.
It makes for a decidedly odd moment in the life of a technology that already altered the course of one century, and now wants to make a difference in another. There are 93 operating nuclear reactors in the U.S.; combined, they supply 20% of U.S. electricity, and 50% of its carbon-free electricity. Nuclear should be a climate solution, satisfying both technical and economic needs. But while the existing plants finally operate with enviable efficiency (after 40 years of working out the kinks), the next generation of designs is still a decade away from being more than a niche player in our energy supply. Everyone wants a steady supply of electricity, without relying on coal. Nuclear is paradoxically right at hand, and out of reach.
For that to change, “new nuclear” has to emerge before the old nuclear plants recede. It has to keep pace with technological improvements in other realms, like long-term energy storage, where each incremental improvement increases the potential for renewables to supply more of our electricity. It has to be cheaper than carbon-capture technologies, which would allow flexible gas plants to operate without climate impacts (but are still too expensive to build at scale). And finally it has to arrive before we give up—before the spectre of climate catastrophe creates a collective “doomerism,” and we stop trying to change.
Not everyone thinks nuclear can reinvent itself in time. “When it comes to averting the imminent effects of climate change, even the cutting edge of nuclear technology will prove to be too little, too late,” predicts Allison Macfarlane, former chair of the U.S. Nuclear Regulatory Commission (NRC)—the government agency singularly responsible for permitting new plants. Can a stable, safe, known source of energy rise to the occasion, or will nuclear be cast aside as too expensive, too risky and too late?
Trying Again
Nuclear began in a rush. In 1942, in the lowest mire of World War II, the U.S. began the Manhattan Project, the vast effort to develop atomic weapons. It employed 130,000 people at secret sites across the country, the most famous of which was Los Alamos Laboratory, near Albuquerque, N.M., where Robert Oppenheimer led the design and construction of the first atomic bombs. DeWitte, 36, grew up nearby. Even as a child of the ’90s, he was steeped in the state’s nuclear history, and preoccupied with the terrifying success of its engineering and the power of its materials. “It’s so incredibly energy dense,” says DeWitte. “A golf ball of uranium would power your entire life!”
DeWitte has taken that bromide almost literally. He co-founded Oklo in 2013 with Caroline Cochran, while both were graduate students in nuclear engineering at the Massachusetts Institute of Technology. When they arrived in Cambridge, Mass., in 2007 and 2008, the nuclear industry was on a precipice. Then presidential candidate Barack Obama espoused a new eagerness to address climate change by reducing carbon emissions—which at the time meant less coal, and more nuclear. (Wind and solar energy were still a blip.) It was an easy sell. In competitive power markets, nuclear plants were profitable. The 104 operating reactors in the U.S. at the time were running smoothly. There hadn’t been a major accident since Chernobyl, in 1986.
The industry excitedly prepared for a “nuclear renaissance.” At the peak of interest, the NRC had applications for 30 new reactors in the U.S. Only two would be built. The cheap natural gas of the fracking boom began to drive down electricity prices, eroding nuclear’s profits. Newly subsidized renewables, like wind and solar, added even more electricity generation, further saturating the markets. When on March 11, 2011, an earthquake and subsequent tsunami rolled over Japan’s Fukushima Daiichi nuclear power plant, leading to the meltdown of three of its reactors and the evacuation of 154,000 people, the industry’s coffin was nailed shut. Not only would there be no renaissance in the U.S., but the existing plants had to justify their safety. Japan shut down 46 of its 50 operating reactors. Germany closed 11 of its 17. The U.S. fleet held on politically, but struggled to compete economically. Since Fukushima, 12 U.S. reactors have begun decommissioning, with three more planned.
At MIT, Cochran and DeWitte—who were teaching assistants together for a nuclear reactor class in 2009, and married in 2011—were frustrated by the setback. “It was like, There’re all these cool technologies out there. Let’s do something with it,” says Cochran. But the nuclear industry has never been an easy place for innovators. In the U.S., its operational ranks have long been dominated by “ring knockers”—the officer corps of the Navy’s nuclear fleet, properly trained in the way things are done, but less interested in doing them differently. Governments had always kept a tight grip on nuclear; for decades, the technology was under shrouds. The personal computing revolution, and then the wild rise of the Internet, further drained engineering talent. From DeWitte and Cochran’s perspective, the nuclear-energy industry had already ossified by the time Fukushima and fracking totally brought things to a halt. “You eventually got to the point where it’s like, we have to try something different,” DeWitte says.
He and Cochran began to discreetly convene their MIT classmates for brainstorming sessions. Nuclear folks tend to be dogmatic about their favorite method of splitting atoms, but they stayed agnostic. “I didn’t start thinking we had to do everything differently,” says DeWitte. Rather, they had a hunch that marginal improvements might yield major results, if they could be spread across all of the industry’s usual snags—whether regulatory approaches, business models, the engineering of the systems themselves, or the challenge of actually constructing them.
In 2013, Cochran and DeWitte began to rent out the spare room in their Cambridge home on Airbnb. Their first guests were a pair of teachers from Alaska. The remote communities they taught in were dependent on diesel fuel for electricity, brought in at enormous cost. That energy scarcity created an opportunity: in such an environment, even a very expensive nuclear reactor might still be cheaper than the current system. The duo targeted a price of $100 per megawatt hour, more than double typical energy costs. They imagined using this high-cost early market as a pathway to scale their manufacturing. They realized that to make it work economically, they wouldn’t have to reinvent the reactor technology, only the production and sales processes. They decided to own their reactors and supply electricity, rather than supply the reactors themselves—operating more like today’s solar or wind developers. “It’s less about the technology being different,” says DeWitte, “than it is about approaching the entire process differently.”
That maverick streak raised eyebrows among nuclear veterans—and cash from Silicon Valley venture capitalists, including a boost from Y Combinator, where companies like Airbnb and Instacart got their start. In the eight years since, Oklo has distinguished itself from the competition by thinking smaller and moving faster. There are others competing in this space: NuScale, based in Oregon, is working to commercialize a reactor similar in design to existing nuclear plants, but constructed in 60-megawatt modules. TerraPower, founded by Bill Gates in 2006, has plans for a novel technology that uses its heat for energy storage, rather than to spin a turbine, which makes it an even more flexible option for electric grids that increasingly need that pliability. And X-energy, a Maryland-based firm that has received substantial funding from the U.S. Department of Energy, is developing 80-megawatt reactors that can also be grouped into “four-packs,” bringing them closer in size to today’s plants. Yet all are still years—and a billion dollars—away from their first installations. Oklo brags that its NRC application is 20 times shorter than NuScale’s, and its proposal cost 100 times less to develop. (Oklo’s proposed reactor would produce one-fortieth the power of NuScale’s.) NRC accepted Oklo’s application for review in March 2020, and regulations guarantee that process will be complete within three years. Oklo plans to power on around 2023, at a site at the Idaho National Laboratory, one of the U.S.’s oldest nuclear-research sites, and so already approved for such efforts. Then comes the hard part: doing it again and again, booking enough orders to justify building a factory to make many more reactors, driving costs down, and hoping politicians and activists worry more about the menace of greenhouse gases than the hazards of splitting atoms.
Nuclear-industry veterans remain wary. They have seen this all before. Westinghouse’s AP1000 reactor, first approved by the NRC in 2005, was touted as the flagship technology of Obama’s nuclear renaissance. It promised to be safer and simpler, using gravity rather than electricity-driven pumps to cool the reactor in case of an emergency—in theory, this would mitigate the danger of power outages, like the one that led to the Fukushima disaster. Its components could be constructed at a centralized location, and then shipped in giant pieces for assembly.
But all that was easier said than done. Westinghouse and its contractors struggled to manufacture the components according to nuclear’s mega-exacting requirements, and in the end only one AP1000 project in the U.S. actually happened: the Vogtle Electric Generating Plant in Georgia. Approved in 2012, its two reactors were expected at the time to cost $14 billion and be completed in 2016 and 2017, but costs have ballooned to $25 billion. The first will open, finally, next year.
Oklo and its competitors insist things are different this time, but they have yet to prove it. “Because we haven’t built one of them yet, we can promise that they’re not going to be a problem to build,” quips Gregory Jaczko, a former NRC chair who has since become the technology’s most biting critic. “So there’s no evidence of our failure.”
The Challenge
The cooling tower of the Hope Creek nuclear plant rises 50 stories above Artificial Island, New Jersey, built up on the marshy edge of the Delaware River. The three reactors here—one belonging to Hope Creek, and two run by the Salem Generating Station, which shares the site—generate an astonishing 3,465 megawatts of electricity, or roughly 40% of New Jersey’s total supply. Construction began in 1968, and was completed in 1986. Their closest human neighbors are across the river in Delaware. Otherwise the plant is surrounded by protected marshlands, pocked with radiation sensors and the occasional guard booth. Of the 1,500 people working here, around 100 are licensed reactor operators—a special designation given by the NRC, and held by fewer than 4,000 people in the country.
Among the newest in their ranks is Judy Rodriguez, an Elizabeth, N.J., native and another MIT grad. “Do I have your permission to enter?” she asks the operator on duty in the control room for the Salem Two reactor, which came online in 1981 and is capable of generating 1,200 megawatts of power. The operator opens a retractable belt barrier, like at an airport, and we step across a thick red line in the carpet. A horseshoe-shaped gray cabinet holds hundreds of buttons, glowing indicators and blinking lights, but a red LED counter at the center of the wall shows the most important number in the room: 944 megawatts, the amount of power the Salem Two reactor was generating that afternoon in September. Beside it is a circular pattern of square indicator lights showing the uranium fuel assemblies inside the core, deep inside the concrete domed containment building a couple hundred yards away. Salem Two has 764 of these assemblies; each is about 6 inches square and 15 ft. tall. They contain the reactor’s fuel, among the most guarded and controlled materials on earth. To make sure no one working there forgets that fact, a phrase is painted on walls all around the plant: “Line of Sight to the Reactor.”
As the epitome of critical infrastructure, this station has been buffeted by the crises the U.S. has suffered in the past few decades. After 9/11, the three reactors here absorbed nearly $100 million in security upgrades. Everyone entering the plant passes through metal and explosives detectors on the way in, and radiation detectors on the way out. Walking between the buildings entails crossing a concrete expanse beneath high bullet-resistant enclosures (BREs). The plant’s guard corps has more members than any force in New Jersey besides the state police, and federal NRC rules mean that they don’t have to abide by state limitations on automatic weapons.
The scale and complexity of the operation is staggering—and expensive. “The place you’re sitting at right now costs us about $1.5 million to $2 million a day to run,” says Ralph Izzo, president and CEO of PSEG, New Jersey’s public utility company, which owns and operates the plants. “If those plants aren’t getting that in market, that’s a rough pill to swallow.” In 2019, the New Jersey Board of Public Utilities agreed to $300 million in annual subsidies to keep the three reactors running. The justification is simple: if the state wants to meet its carbon-reduction goals, keeping the plants online is essential, given that they supply 90% of the state’s zero-carbon energy. In September, the Illinois legislature came to the same conclusion as New Jersey, approving almost $700 million over five years to keep two existing nuclear plants open. The bipartisan infrastructure bill includes $6 billion in additional support (along with nearly $10 billion for development of future reactors). Even more is expected in the broader Build Back Better bill.
These subsidies—framed in both states as “carbon mitigation credits”—acknowledge the reality that nuclear plants cannot, on their own terms, compete economically with natural gas or coal. “There has always been a perception of this technology that never was matched by reality,” says Jaczko. The subsidies also show how climate change has altered the equation, but not decisively enough to guarantee nuclear’s future. Lawmakers and energy companies are coming to terms with nuclear’s new identity as clean power, deserving of the same economic incentives as solar and wind. Operators of existing plants want to be compensated for producing enormous amounts of carbon-free energy, according to Josh Freed, of Third Way, a Washington, D.C., think tank that champions nuclear power as a climate solution. “There’s an inherent benefit to providing that, and it should be paid for.” For the moment, that has brought some assurance to U.S. nuclear operators of their future prospects. “A megawatt of zero-carbon electricity that’s leaving the grid is no different from a new megawatt of zero-carbon electricity coming onto the grid,” says Kathleen Barrón, senior vice president of government and regulatory affairs and public policy at Exelon, the nation’s largest operator of nuclear reactors.
Globally, nations are struggling with the same equation. Germany and Japan both shuttered many of their plants after the Fukushima disaster, and saw their progress at reducing carbon emissions suffer. Germany has not built new renewables fast enough to meet its electricity needs, and has made up the gap with dirty coal and natural gas imported from Russia. Japan, under international pressure to move more aggressively to meet its carbon targets, announced in October that it would work to restart its reactors. “Nuclear power is indispensable when we think about how we can ensure a stable and affordable electricity supply while addressing climate change,” said Koichi Hagiuda, Japan’s minister of economy, trade and industry, at an October news conference. China is building more new nuclear reactors than any other country, with plans for as many as 150 by the 2030s, at an estimated cost of nearly half a trillion dollars. Long before that, in this decade, China will overtake the U.S. as the operator of the world’s largest nuclear-energy system.
The future won’t be decided by choosing between nuclear or solar power. Rather, it’s a technically and economically complicated balance of adding as much renewable energy as possible while ensuring a steady supply of electricity. At the moment, that’s easy. “There is enough opportunity to build renewables before achieving penetration levels that we’re worried about the grid having stability,” says PSEG’s Izzo. New Jersey, for its part, is aiming to add 7,500 megawatts of offshore wind by 2035—or about the equivalent of six new Salem-sized reactors. The technology to do that is readily at hand—Kansas alone has about that much wind power installed already.
The challenge comes when renewables make up a greater proportion of the electricity supply—or when the wind stops blowing. The need for “firm” generation becomes more crucial. “You cannot run our grid solely on the basis of renewable supply,” says Izzo. “One needs an interseasonal storage solution, and no one has come up with an economic interseasonal storage solution.”
Existing nuclear’s best pitch—aside from the very fact it exists already—is its “capacity factor,” the industry term for how often a plant meets its full energy-making potential. For decades, nuclear plants struggled with outages and long maintenance periods. Today, improvements in management and technology make them more likely to run continuously—or “breaker to breaker”—between planned refuelings, which usually occur every 18 months and take about a month. At Salem and Hope Creek, PSEG hangs banners in the hallways to celebrate each new record run without a maintenance breakdown. That improvement stretches across the industry. “If you took our performance back in the mid-’70s, and then look at our performance today, it’s equivalent to having built 30 new reactors,” says Maria Korsnick, president and CEO of the Nuclear Energy Institute, the industry’s main lobbying organization. That improved reliability has become the industry’s major calling card today.
Over the next 20 years, nuclear plants will need to develop new tricks. “One of the new words in our vocabulary is flexibility,” says Marilyn Kray, vice president of nuclear strategy and development at Exelon, which operates 21 reactors. “Flexibility not only in the existing plants, but in the designs of the emerging ones, to make them even more flexible and adaptable to complement renewables.” Smaller plants can adapt more easily to the grid, but they can also serve new customers, like providing energy directly to factories, steel mills or desalination plants.
Bringing those small plants into operation could be worth it, but it won’t be easy. “You can’t just excuse away the thing that’s at the center of all of it, which is it’s just a hard technology to build,” says Jaczko, the former NRC chair. “It’s difficult to make these plants, it’s difficult to design them, it’s difficult to engineer them, it’s difficult to construct them. At some point, that’s got to be the obvious conclusion to this technology.”
But the equally obvious conclusion is we can no longer live without it. “The reality is, you have to really squint to see how you get to net zero without nuclear,” says Third Way’s Freed. “There’s a lot of wishful thinking, a lot of fingers crossed.”
Business
Adapting compliance in a fragmented regulatory world

Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.
As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.
But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.
Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.
The new regulatory compliance reality
By October of this year, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply in more than 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.
Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VOP), which adds a crucial layer of fraud protection for instant payments. This directly aligns with the regulator’s focus on instant payment security and compliance.
The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.
But why is it so difficult for teams to adapt?
The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.
At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage is leaving compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.
In this changing environment, the question becomes: how can institutions adapt?
Staying compliant in a shifting landscape
The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.
This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.
But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.
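As a minimal sketch of that consolidation step, the snippet below folds records from two hypothetical siloed feeds into one case per entity, so a single view can support cross-regulatory reporting. The feed and field names are illustrative placeholders, not a real vendor schema.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ComplianceCase:
    """One unified case aggregating findings about a single entity."""
    entity_id: str
    findings: list = field(default_factory=list)

    def add(self, source: str, regulation: str, detail: str, raised_at: datetime):
        self.findings.append({
            "source": source,          # originating system (AML, ops resilience, reporting...)
            "regulation": regulation,  # e.g. "DORA", "CPS 230", "CRS"
            "detail": detail,
            "raised_at": raised_at.isoformat(),
        })

def consolidate(feeds: dict[str, list[dict]]) -> dict[str, ComplianceCase]:
    """Fold records from siloed systems into one case per entity.

    `feeds` maps a source-system name to its raw records; the field names
    (entity, rule, message, ts) are invented for this illustration.
    """
    cases: dict[str, ComplianceCase] = {}
    for source, records in feeds.items():
        for rec in records:
            case = cases.setdefault(rec["entity"], ComplianceCase(rec["entity"]))
            case.add(source, rec["rule"], rec["message"], rec["ts"])
    return cases

# Example: two siloed feeds reduced to a single cross-regulatory view.
feeds = {
    "aml_monitoring": [{"entity": "CUST-001", "rule": "CRS", "message": "Missing tax residency", "ts": datetime(2025, 10, 1)}],
    "ops_resilience": [{"entity": "CUST-001", "rule": "DORA", "message": "Third-party outage report overdue", "ts": datetime(2025, 10, 3)}],
}
for entity, case in consolidate(feeds).items():
    print(entity, len(case.findings), "open findings")
```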
To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.
When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
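As a simplified illustration of that triage idea, the sketch below trains scikit-learn’s IsolationForest on synthetic historical transaction features and uses its anomaly score to decide which alerts to escalate. The features, thresholds and numbers are invented for illustration; a production system would add labeled feedback and far richer context.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Historical transaction features (amount, hour-of-day, payee-novelty score).
# Synthetic numbers purely for illustration.
rng = np.random.default_rng(42)
history = np.column_stack([
    rng.normal(200, 50, 5000),   # typical amounts
    rng.integers(8, 18, 5000),   # business hours
    rng.random(5000) * 0.2,      # mostly familiar payees
])

model = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_alerts = np.array([
    [210.0, 11, 0.05],   # routine payment -> likely suppressed
    [9500.0, 3, 0.95],   # large, off-hours, unknown payee -> escalated
])
scores = model.decision_function(new_alerts)  # lower = more anomalous

for alert, score in zip(new_alerts, scores):
    action = "escalate to investigator" if score < 0 else "auto-close as low value"
    print(alert, round(float(score), 3), action)
```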
Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.
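A small sketch of what that auditability can look like in practice: each AI-driven decision is written to an append-only log with its score, the factors that drove it and the action taken, so a human reviewer or regulator can reconstruct the reasoning later. The field names are illustrative, not a reference to any specific product.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry for an AI-assisted screening decision."""
    alert_id: str
    model_version: str
    risk_score: float
    top_factors: list        # human-readable reasons behind the score
    action: str              # e.g. "escalated", "auto-closed"
    reviewed_by: str | None  # set when a human confirms or overrides
    timestamp: str

def log_decision(record: AIDecisionRecord, path: str = "ai_decisions.log") -> None:
    """Append the decision as one JSON line; append-only keeps history intact."""
    with open(path, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    alert_id="ALERT-2025-00042",
    model_version="triage-model-v3.1",
    risk_score=0.91,
    top_factors=["unusual amount for customer", "first payment to this payee"],
    action="escalated",
    reviewed_by=None,
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```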
Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanction screening tools and watchlist screening, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.
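A toy example of the matching step inside such screening: the sketch below uses Python’s standard-library SequenceMatcher to catch near-miss spellings against a tiny illustrative watchlist. Real engines rely on consolidated sanctions lists, aliases, transliteration rules and carefully tuned thresholds.

```python
from difflib import SequenceMatcher

# Illustrative watchlist entries; real screening uses consolidated lists
# (OFAC, EU, UN) with aliases, dates of birth and transliteration rules.
WATCHLIST = ["Ivan Petrovich Sidorov", "Acme Trading FZE", "Jane Q. Public"]

def normalise(name: str) -> str:
    return " ".join(name.lower().replace(".", " ").split())

def screen(name: str, threshold: float = 0.85) -> list[tuple[str, float]]:
    """Return watchlist entries whose similarity to `name` exceeds the threshold."""
    candidate = normalise(name)
    hits = []
    for entry in WATCHLIST:
        score = SequenceMatcher(None, candidate, normalise(entry)).ratio()
        if score >= threshold:
            hits.append((entry, round(score, 2)))
    return hits

# A slightly misspelled payee name still matches the listed entity.
print(screen("Ivan Petrovic Sidorov"))   # [('Ivan Petrovich Sidorov', 0.98)]
```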
Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.
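To show why ISO 20022’s structured format helps automated screening, here is a minimal sketch that pulls debtor, creditor and amount fields from a heavily simplified fragment modelled on a pacs.008 credit transfer. Namespaces and most mandatory elements are omitted, so treat it as an illustration of the principle rather than a compliant parser.

```python
import xml.etree.ElementTree as ET

# A heavily simplified fragment loosely modelled on an ISO 20022 pacs.008
# credit transfer; namespaces and many mandatory elements are omitted.
sample = """
<Document>
  <FIToFICstmrCdtTrf>
    <CdtTrfTxInf>
      <IntrBkSttlmAmt Ccy="EUR">1500.00</IntrBkSttlmAmt>
      <Dbtr><Nm>Example Manufacturing GmbH</Nm></Dbtr>
      <Cdtr><Nm>Acme Trading FZE</Nm></Cdtr>
    </CdtTrfTxInf>
  </FIToFICstmrCdtTrf>
</Document>
"""

def extract_screening_fields(xml_text: str) -> dict:
    """Pull the structured fields a screening engine would feed on."""
    root = ET.fromstring(xml_text)
    tx = root.find(".//CdtTrfTxInf")
    amount = tx.find("IntrBkSttlmAmt")
    return {
        "debtor": tx.findtext("Dbtr/Nm"),
        "creditor": tx.findtext("Cdtr/Nm"),
        "currency": amount.get("Ccy"),
        "amount": float(amount.text),
    }

print(extract_screening_fields(sample))
# {'debtor': 'Example Manufacturing GmbH', 'creditor': 'Acme Trading FZE',
#  'currency': 'EUR', 'amount': 1500.0}
```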
This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.
The future of compliance
As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.
To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.
Business
Why Shorter SSL/TLS Certificate Lifespans Are the Perfect Wake-Up Call for CIOs

By Tim Callan, Chief Compliance Officer at Sectigo and Vice-Chair of the CA/Browser Forum
Let’s be honest: AI has been the headline act this year. It’s the rockstar of boardroom conversations and LinkedIn thought leadership. But while AI commands the spotlight, quantum computing is quietly tuning its instruments backstage. And when it steps forward, it won’t be playing backup. For CIOs, the smart move isn’t just watching the main stage — it’s preparing proactively for the moment quantum takes center stage and rewrites the rules of data protection.
Quantum computing is no longer a distant science project. NIST has already published standards for quantum-resistant algorithms and set a clear deadline: RSA and ECC, the cryptographic algorithms that protect today’s data, must be deprecated by 2030. We’re no longer talking about “forecasts”; we are talking about actual directives from government organizations to implement change. And yet, many organizations are still treating this like a future problem. The reality is that threat actors aren’t waiting. They’re collecting encrypted data now, knowing they’ll be able to decrypt it later. If we wait until quantum machines are commercially viable, we’ll be too late. The time to prepare is before the clock runs out and, unfortunately, that clock is already ticking.
For CIOs, this is an infrastructure and risk management crisis in the making. If your organization’s cryptographic infrastructure isn’t agile enough to adapt, the integrity of your digital operations and the trust they rely on could very soon be compromised.
The Quantum Threat Is Already Here
Quantum computing’s potential to disrupt global systems and the data that runs through it is not hypothetical. Attackers are already engaging in “Harvest Now, Decrypt Later” (HNDL) strategies, intercepting encrypted data today with the intent to decrypt it once quantum capabilities mature.
Recent research found that an alarming 60% of organizations are very or extremely concerned about HNDL attacks, and 59% express similar concern about “Trust Now, Forge Later” threats, where adversaries steal digitally signed documents to forge them in the future.
Despite this awareness, only 14% of organizations have conducted a full assessment of systems vulnerable to quantum attacks, and 43% are still in a “wait and see” mode. For CIOs, this gap highlights the need for leadership: it’s not enough to know the risks exist; you must identify which systems, applications, and data flows will still be sensitive in ten or twenty years and prioritize them for PQC migration.
Crypto Agility Is a Data Leadership Imperative
Crypto agility (the ability to rapidly identify, manage, and replace cryptographic assets) is now a core competency for IT leaders to ensure business continuity, compliance, and trust. The most immediate pressure point is SSL/TLS certificates. These certificates authenticate digital identities and secure communications across data pipelines, APIs, and partner integrations.
The CA/Browser Forum has mandated a phased reduction in certificate lifespans from 398 days today to just 47 days by 2029. The first milestone arrives in March 2026, when certificates must be renewed every six months, shrinking to near-monthly by 2029.
For CIOs, this is not just an operational housekeeping issue. Every expired or mismanaged certificate is a potential data outage, which means application downtime, broken integrations, failed transactions and compliance violations. With fewer than 1 in 5 organizations prepared for monthly renewals, and only 5% currently automating their certificate management processes in full, most enterprises face serious continuity and trust risks.
The upside? Preparing for shortened certificate lifespans directly supports quantum readiness. Ninety percent of organizations recognize the overlap between certificate agility and post-quantum cryptography preparedness. By investing in automation now, CIOs can ensure uninterrupted operations today while laying a scalable foundation for future-proof cryptographic governance.
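As a sketch of the kind of check an automated certificate-management pipeline runs continuously, the snippet below uses Python’s standard ssl and socket modules to measure how many days remain on a host’s TLS certificate and flag it for renewal. The 14-day threshold and host name are arbitrary examples, not a recommended policy.

```python
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    """Return how many days remain before `host`'s TLS certificate expires."""
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

# With 47-day certificates, renewal should kick off well inside the window;
# 14 days is an arbitrary threshold chosen purely for illustration.
for host in ["example.com"]:
    remaining = days_until_expiry(host)
    status = "OK" if remaining > 14 else "RENEW NOW"
    print(f"{host}: {remaining:.0f} days left [{status}]")
```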
The Strategic Imperative of PQC Migration
Migrating to quantum-safe algorithms is not a plug-and-play upgrade. It’s a full-scale transformation. Ninety-eight percent of organizations expect challenges, with top barriers including system complexity, lack of expertise, and cross-team coordination. Legacy systems (many with hardcoded cryptographic functions) make this even harder.
That’s why establishing a Center of Cryptographic Excellence (CryptoCOE) is a critical first step. A CryptoCOE centralizes governance, aligns stakeholders, and drives execution. According to Gartner, by 2028 organizations with a CryptoCOE will save 50% of costs in their PQC transition compared to those without.
For CIOs, this is a natural extension of your role. Cryptography touches every layer of enterprise infrastructure. A CryptoCOE ensures that cryptographic decisions are made with full visibility into system dependencies, risk profiles and regulatory obligations.
By championing crypto agility as an infrastructure priority, CIOs can transform PQC migration from a technical project into a strategic initiative that protects the organization’s most critical assets.
The Road Ahead
The shift to 47-day certificates is a wake-up call. It marks the end of static cryptography and the beginning of a dynamic, agile era. Organizations that embrace this change will not only avoid outages and compliance failures but will also be prepared for the quantum future.
Crypto agility is both a technical capability and a leadership mandate. For CIOs, the path forward to quantum-resistant infrastructure is clear: invest in automation, build cross-functional alignment, and treat cryptographic governance as a core pillar of enterprise resilience.
Business
The Security Talent Gap is a Red Herring: It’s Really an Automation and Context Gap

by Tom Gol, Senior Product Manager at Armis
We constantly hear about a cybersecurity staffing crisis, but perhaps the real challenge isn’t a lack of people. It might just be a critical shortage of intelligent automation and actionable context for the talented teams we already have.
The Lingering Shadow of the “Talent Gap” Narrative
It’s almost a mantra in cybersecurity circles: “There’s a massive talent gap!” Conferences echo it, reports reinforce it, and CISOs often feel it acutely. This widely accepted idea suggests we simply don’t have enough skilled professionals, leading to overworked teams, burnout, and, most critically, persistent organizational risk. The default response often becomes a relentless cycle of “buy more tools, tune more tools, and staff more teams”—a cycle that feels increasingly unsustainable and inefficient.
But what if this pervasive “talent gap” is actually a clever red herring, distracting us from a more fundamental issue? We’ve grown so accustomed to the narrative of a human deficit that we often overlook a crucial truth: current technology is already capable of significantly narrowing this very gap. My strong conviction is this: the true underlying problem isn’t a shortage of available talent, but a profound and crippling gap in intelligent automation and actionable context that prevents our existing cybersecurity professionals from operating at their full potential. What’s more, advancing on the technology side now presents a demonstrably better return on investment than simply trying to out-hire the problem. Fill that gap with smarter tech, and watch the perceived talent shortage shrink.
Misdiagnosis: When More People Isn’t the Answer
For too long, the cybersecurity industry’s knee-jerk reaction to mounting threats has been to throw more human resources at the problem. Yet, the attack surface continues its relentless expansion. Threat actors become more sophisticated. And our SOCs are constantly drowning in an unfiltered deluge of alerts. This creates an overwhelming workload that even the most seasoned experts find impossible to manage effectively, often resulting in burnout and, ironically, talent attrition rather than retention.
The issue isn’t a lack of bright minds joining the field; it’s that those bright minds often find themselves mired in monotonous, low-value tasks. They’re forced to operate in a thick fog of incomplete information, constantly sifting through noise. When security teams lack clarity on exactly what assets they own, how those assets connect, what their true business criticality is, and which threats are genuinely active, even the most experienced professional struggles. Their effectiveness diminishes, not from a lack of inherent skill, but from a fundamental absence of visibility and intelligent support.
Automation and AI: The True Force Multiplier for Human Talent
The real power move against the overwhelming tide of cyber threats lies not in endless recruitment, but in the intelligent application of automation and AI. Leading industry discussions increasingly highlight that the purpose of AI in cybersecurity isn’t about wholesale human replacement. Instead, it’s about augmenting our existing staff, turning them into a far more potent force. This approach fundamentally allows organizations to scale their expertise and impact without being shackled to proportional headcount increases. Let’s unpack how this transformation plays out.
Freeing Up Human Capital from the Mundane
Imagine a security analyst whose day is consumed by hours of manual investigation, enriching alerts, triaging false positives, responding to routine questionnaires, or laboriously transitioning tickets. These are precisely the kinds of non-human, deterministic, and highly repetitive tasks ripe for intelligent automation. AI agents can seamlessly take on this soul-crushing burden, liberating human analysts. They are then free to pivot towards higher-value, creative, judgment-based, and genuinely strategic work. This transforms security teams from reactive task-runners into proactive problem-solvers. Projections suggest that common SOC tasks could become significantly more cost-efficient in the coming years due to automation—a shift that’s not merely about saving money, but about amplifying human potential.
Supercharging Productivity and Experience
Modern AI, particularly multi-agent AI and generative AI, can proactively offer smart advice on configurations, predict the root causes of complex issues, and integrate effortlessly with existing automated frameworks. This empowers security professionals, making their work not just more efficient but also more engaging and less prone to drudgery.
The Indispensable Power of Context: Lowering the “Expertise Bar”
While automation tackles the sheer volume of work, context provides the vital clarity that fundamentally reduces the need for constant, deep-seated expertise in every single scenario. When security professionals have immediate, rich, and actionable context about a vulnerability or an emerging threat, the path to intelligent prioritization and decisive action becomes remarkably clearer.
Consider the profound difference this context makes:
- Asset Context: Knowing not just that a vulnerability exists, but precisely which specific device it resides on—is it a critical production server, or an isolated, deprecated test machine?
- Business Application Context: Understanding the exact business function tied to that asset, and the tangible financial or operational impact if it were to be compromised.
- Network Context: Seeing the asset’s intricate network connections, its precise exposure level, and every potential path an attacker could take for lateral movement.
- Compensating Controls Context: Having a clear, real-time picture of which existing security controls (like network segmentation, EDRs, or Intrusion Prevention Systems) are actually in place and effectively working to mitigate the vulnerability’s risk.
- Threat Intelligence Context: Possessing real-time, “active exploit” intelligence that doesn’t just theorize, but tells you if a vulnerability is actively being exploited in the wild, or is part of a known attack campaign targeting your industry.
With this deep, multidimensional context, a significant portion of the exposure management workload can be automated. Crucially, for the tasks that still require human intervention, the “expertise bar” is dramatically lowered. My take is that for a vast majority of cases—perhaps 90% of scenarios—a security professional who isn’t a battle-hardened, 20-year veteran can still make incredibly effective decisions and significantly improve an organization’s cyber posture. This is because they are presented with clear, actionable context that naturally guides prioritization and even recommends precise actions. The result? A drastic reduction in alert noise, faster detection and response times, and a palpable easing of the burden on the entire security team.
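One way to picture how those context dimensions lower the expertise bar is a simple scoring sketch like the one below, which folds asset, business, network, controls and threat-intelligence context into a single priority and a recommended action. The weights and thresholds are invented for illustration and are not any vendor’s actual risk model.

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    """One vulnerability finding enriched with the context dimensions above."""
    asset: str
    asset_critical: bool        # asset context: production vs. isolated test box
    business_impact: int        # business context: 1 (low) .. 5 (severe)
    internet_exposed: bool      # network context: reachable attack path
    compensating_controls: int  # controls context: count of effective mitigations
    actively_exploited: bool    # threat-intel context: exploited in the wild

def prioritise(e: Exposure) -> tuple[int, str]:
    """Toy scoring that turns multidimensional context into a recommended action."""
    score = e.business_impact * 10
    score += 25 if e.asset_critical else 0
    score += 20 if e.internet_exposed else 0
    score += 30 if e.actively_exploited else 0
    score -= 10 * e.compensating_controls
    if score >= 70:
        return score, "patch within 24h / isolate"
    if score >= 40:
        return score, "schedule remediation this sprint"
    return score, "accept or defer, monitor controls"

findings = [
    Exposure("billing-db-01", True, 5, True, 0, True),
    Exposure("lab-vm-17", False, 1, False, 2, False),
]
for f in sorted(findings, key=lambda f: prioritise(f)[0], reverse=True):
    score, action = prioritise(f)
    print(f"{f.asset}: score={score} -> {action}")
```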
Navigating the Human Element: Skills Evolution and Burnout
This powerful shift towards automation and AI naturally brings legitimate questions about skills erosion. Some experts prudently point out a valid risk: a significant portion of SOC teams might experience a regression in foundational analysis skills due to an over-reliance on automation. This underscores a critical truth: we must keep humans firmly in the loop. For highly autonomous SOCs, a “human-on-the-loop” approach is recommended, reserving human intervention for complex edge cases and critical exceptions.
CISOs, therefore, face an evolving mandate:
- Future-Proofing Skills: It’s less about filling historical roles and more about nurturing new competencies like prompt engineering, sophisticated AI oversight, advanced critical thinking, and strategic problem-solving.
- Combating Burnout: Beyond just tools, effective talent retention demands proactive measures to address burnout. This includes intelligent workload monitoring, smart task delegation, and genuine wellness initiatives. The ultimate goal isn’t just to fill empty seats; it’s to ensure that the people in those seats are effective, sustainable, and thriving.
A New Mindset for CISOs: Embracing the “Chief Innovation Security Officer” Role
The ongoing “talent gap” discussion should be a catalyst for CISOs to adopt a fundamentally new mindset. Instead of simply focusing on cost-cutting or the perpetual struggle of recruitment, they must evolve into “Chief Innovation Security Officers.” This means daring to rethink how work gets done, leveraging AI and automation not merely as tactical tools but as strategic enablers for scaling cybersecurity capabilities and unlocking the full potential of their existing talent. This strategic investment in technology, driven by an understanding of context, offers a superior ROI in bridging the cybersecurity “gap” compared to the increasingly futile effort to simply hire more people.
Building robust AI governance frameworks and achieving crystal-clear visibility into existing AI implementations and technical debt are crucial foundational steps. Ultimately, solving the perceived talent gap isn’t about endlessly hiring more people into an unsustainable system. It’s about empowering the talented individuals we do have—making them more efficient, more effective, and more strategically focused—through the intelligent application of automation and unparalleled context. It’s time to stop chasing a phantom gap and start truly empowering our digital defenders.
