
A New Generation of Nuclear Reactors Could Hold the Key to a Green Future

Source: Time

On a conference-room whiteboard in the heart of Silicon Valley, Jacob DeWitte sketches his startup's first product. In red marker, it looks like a beer can in a Koozie, stuck with a crazy straw. In real life, it will be about the size of a hot tub, and made from an array of exotic materials, like zirconium and uranium. Under carefully controlled conditions, they will interact to produce heat, which in turn will make electricity—1.5 megawatts' worth, enough to power a neighborhood or a factory. DeWitte's little power plant will run for a decade without refueling and, amazingly, will emit no carbon. "It's a metallic thermal battery," he says, coyly. But more often DeWitte calls it by another name: a nuclear reactor.

Fission isn't for the faint of heart. Building a working reactor—even a very small one—requires precise and painstaking efforts of both engineering and paper pushing. Regulations are understandably exhaustive. Fuel is hard to come by—they don't sell uranium at the Gas-N-Sip. But DeWitte plans to flip the switch on his first reactor around 2023, a mere decade after co-founding his company, Oklo. After that, the company wants to do for neighborhood nukes what Tesla has done for electric cars: use a niche and expensive first version as a stepping stone toward cheaper, bigger, higher-volume products. In Oklo's case, that means starting with a "microreactor" designed for remote communities, like Alaskan villages, currently dependent on diesel fuel trucked, barged or even flown in at exorbitant expense. It would then build more, and incrementally larger, reactors until its zero-carbon energy source could meaningfully contribute to the global effort to reduce fossil-fuel emissions.

At global climate summits, in the corridors of Congress and at statehouses around the U.S., nuclear power has become the contentious keystone of carbon reduction plans. Everyone knows they need it. But no one is really sure they want it, given its history of accidents. Or even if they can get it in time to reach urgent climate goals, given how long it takes to build. Oklo is one of a growing handful of companies working to solve those problems by putting reactors inside safer, easier-to-build and smaller packages. None of them are quite ready to scale to market-level production, but given the investments being made into the technology right now, along with an increasing realization that we won’t be able to shift away from fossil fuels without nuclear power, it’s a good bet that at least one of them becomes a game changer.

If existing plants are the energy equivalent of a 2-liter soda bottle, with giant, 1,000-megawatt-plus reactors, Oklo’s strategy is to make reactors by the can. The per-megawatt construction costs might be higher, at least at first. But producing units in a factory would give the company a chance to improve its processes and to lower costs. Oklo would pioneer a new model. Nuclear plants need no longer be bet-the-company big, even for giant utilities. Venture capitalists can get behind the potential to scale to a global market. And climate hawks should fawn over a zero-carbon energy option that complements burgeoning supplies of wind and solar power. Unlike today’s plants, which run most efficiently at full blast, making it challenging for them to adapt to a grid increasingly powered by variable sources (not every day is sunny, or windy), the next generation of nuclear technology wants to be more flexible, able to respond quickly to ups and downs in supply and demand.

Engineering these innovations is hard. Oklo's 30 employees are busy untangling the knots of safety and complexity that sent the cost of building nuclear plants to the stratosphere and all but halted their construction in the U.S. "If this technology was brand-new—like if fission was a recent breakthrough out of a lab, 10 or 15 years ago—we'd be talking about building our 30th reactor," DeWitte says.

But fission is an old, and fraught, technology, and utility companies are scrambling now to keep their existing gargantuan nuclear plants open. Economically, they struggle to compete with cheap natural gas, along with wind and solar, often subsidized by governments. Yet climate-focused nations like France and the U.K. that had planned to phase out nuclear are instead doubling down. (In October, French President Emmanuel Macron backed off plans to close 14 reactors, and in November, he announced the country would instead start building new ones.) At the U.N. climate summit in Glasgow, the U.S. announced its support for Poland, Kenya, Ukraine, Brazil, Romania and Indonesia to develop their own new nuclear plants—while European negotiators debated whether nuclear energy counts as "green." All the while, Democrats and Republicans are (to everyone's surprise) often aligned on nuclear's benefits—and, in many cases, putting their power of the purse behind it, both to keep old plants open in the U.S. and to speed up new technologies domestically and overseas.

It makes for a decidedly odd moment in the life of a technology that already altered the course of one century, and now wants to make a difference in another. There are 93 operating nuclear reactors in the U.S.; combined, they supply 20% of U.S. electricity, and 50% of its carbon-free electricity. Nuclear should be a climate solution, satisfying both technical and economic needs. But while the existing plants finally operate with enviable efficiency (after 40 years of working out the kinks), the next generation of designs is still a decade away from being more than a niche player in our energy supply. Everyone wants a steady supply of electricity, without relying on coal. Nuclear is paradoxically right at hand, and out of reach.

For that to change, “new nuclear” has to emerge before the old nuclear plants recede. It has to keep pace with technological improvements in other realms, like long-term energy storage, where each incremental improvement increases the potential for renewables to supply more of our electricity. It has to be cheaper than carbon-capture technologies, which would allow flexible gas plants to operate without climate impacts (but are still too expensive to build at scale). And finally it has to arrive before we give up—before the spectre of climate catastrophe creates a collective “doomerism,” and we stop trying to change.

Not everyone thinks nuclear can reinvent itself in time. “When it comes to averting the imminent effects of climate change, even the cutting edge of nuclear technology will prove to be too little, too late,” predicts Allison Macfarlane, former chair of the U.S. Nuclear Regulatory Commission (NRC)—the government agency singularly responsible for permitting new plants. Can a stable, safe, known source of energy rise to the occasion, or will nuclear be cast aside as too expensive, too risky and too late?

Trying Again

Nuclear began in a rush. In 1942, in the lowest mire of World War II, the U.S. began the Manhattan Project, the vast effort to develop atomic weapons. It employed 130,000 people at secret sites across the country, the most famous of which was Los Alamos Laboratory, near Albuquerque, N.M., where Robert Oppenheimer led the design and construction of the first atomic bombs. DeWitte, 36, grew up nearby. Even as a child of the ’90s, he was steeped in the state’s nuclear history, and preoccupied with the terrifying success of its engineering and the power of its materials. “It’s so incredibly energy dense,” says DeWitte. “A golf ball of uranium would power your entire life!”

DeWitte has taken that bromide almost literally. He co-founded Oklo in 2013 with Caroline Cochran, while both were graduate students in nuclear engineering at the Massachusetts Institute of Technology. When they arrived in Cambridge, Mass., in 2007 and 2008, the nuclear industry was on a precipice. Then-presidential candidate Barack Obama espoused a new eagerness to address climate change by reducing carbon emissions—which at the time meant less coal, and more nuclear. (Wind and solar energy were still a blip.) It was an easy sell. In competitive power markets, nuclear plants were profitable. The 104 operating reactors in the U.S. at the time were running smoothly. There hadn't been a major accident since Chernobyl, in 1986.

The industry excitedly prepared for a "nuclear renaissance." At the peak of interest, the NRC had applications for 30 new reactors in the U.S. Only two would be built. The cheap natural gas of the fracking boom began to drive down electricity prices, erasing nuclear's profits. Newly subsidized renewables, like wind and solar, added even more electricity generation, further saturating the markets. When on March 11, 2011, an earthquake and subsequent tsunami rolled over Japan's Fukushima Daiichi nuclear power plant, leading to the meltdown of all three of its operating reactors and the evacuation of 154,000 people, the final nail was driven into the industry's coffin. Not only would there be no renaissance in the U.S., but the existing plants had to justify their safety. Japan shut down 46 of its 50 operating reactors. Germany closed 11 of its 17. The U.S. fleet held on politically, but struggled to compete economically. Since Fukushima, 12 U.S. reactors have begun decommissioning, with three more planned.

At MIT, Cochran and DeWitte—who were teaching assistants together for a nuclear reactor class in 2009, and married in 2011—were frustrated by the setback. "It was like, There're all these cool technologies out there. Let's do something with it," says Cochran. But the nuclear industry has never been an easy place for innovators. In the U.S., its operational ranks have long been dominated by "ring knockers"—the officer corps of the Navy's nuclear fleet, properly trained in the way things are done, but less interested in doing them differently. Governments had always kept a tight grip on nuclear; for decades, the technology was kept under wraps. The personal computing revolution, and then the wild rise of the Internet, further drained engineering talent. From DeWitte and Cochran's perspective, the nuclear-energy industry had already ossified by the time Fukushima and fracking brought things to a halt. "You eventually got to the point where it's like, we have to try something different," DeWitte says.

He and Cochran began to discreetly convene their MIT classmates for brainstorming sessions. Nuclear folks tend to be dogmatic about their favorite method of splitting atoms, but they stayed agnostic. “I didn’t start thinking we had to do everything differently,” says DeWitte. Rather, they had a hunch that marginal improvements might yield major results, if they could be spread across all of the industry’s usual snags—whether regulatory approaches, business models, the engineering of the systems themselves, or the challenge of actually constructing them.

In 2013, Cochran and DeWitte began to rent out the spare room in their Cambridge home on Airbnb. Their first guests were a pair of teachers from Alaska. The remote communities they taught in were dependent on diesel fuel for electricity, brought in at enormous cost. That energy scarcity created an opportunity: in such an environment, even a very expensive nuclear reactor might still be cheaper than the current system. The duo targeted a price of $100 per megawatt hour, more than double typical energy costs. They imagined using this high-cost early market as a pathway to scale their manufacturing. They realized that to make it work economically, they wouldn’t have to reinvent the reactor technology, only the production and sales processes. They decided to own their reactors and supply electricity, rather than supply the reactors themselves—operating more like today’s solar or wind developers. “It’s less about the technology being different,” says DeWitte, “than it is about approaching the entire process differently.”

That maverick streak raised eyebrows among nuclear veterans—and cash from Silicon Valley venture capitalists, including a boost from Y Combinator, where companies like Airbnb and Instacart got their start. In the eight years since, Oklo has distinguished itself from the competition by thinking smaller and moving faster.

There are others competing in this space: NuScale, based in Oregon, is working to commercialize a reactor similar in design to existing nuclear plants, but constructed in 60-megawatt modules. TerraPower, founded by Bill Gates in 2006, has plans for a novel technology that uses its heat for energy storage, rather than to spin a turbine, which makes it an even more flexible option for electric grids that increasingly need that pliability. And X-energy, a Maryland-based firm that has received substantial funding from the U.S. Department of Energy, is developing 80-megawatt reactors that can also be grouped into "four-packs," bringing them closer in size to today's plants. Yet all are still years—and a billion dollars—away from their first installations.

Oklo brags that its NRC application is 20 times shorter than NuScale's, and its proposal cost 100 times less to develop. (Oklo's proposed reactor would produce one-fortieth the power of NuScale's.) NRC accepted Oklo's application for review in March 2020, and regulations guarantee that process will be complete within three years. Oklo plans to power on around 2023, at a site at the Idaho National Laboratory, one of the U.S.'s oldest nuclear-research sites, and so already approved for such efforts. Then comes the hard part: doing it again and again, booking enough orders to justify building a factory to make many more reactors, driving costs down, and hoping politicians and activists worry more about the menace of greenhouse gases than the hazards of splitting atoms.

Nuclear-industry veterans remain wary. They have seen this all before. Westinghouse’s AP1000 reactor, first approved by the NRC in 2005, was touted as the flagship technology of Obama’s nuclear renaissance. It promised to be safer and simpler, using gravity rather than electricity-driven pumps to cool the reactor in case of an emergency—in theory, this would mitigate the danger of power outages, like the one that led to the Fukushima disaster. Its components could be constructed at a centralized location, and then shipped in giant pieces for assembly.

But all that was easier said than done. Westinghouse and its contractors struggled to manufacture the components according to nuclear's mega-exacting requirements, and in the end, only one AP1000 project in the U.S. actually happened: the Vogtle Electric Generating Plant in Georgia. Approved in 2012, its two reactors were expected at the time to cost $14 billion and be completed in 2016 and 2017, but costs have ballooned to $25 billion. The first will open, finally, next year.

Oklo and its competitors insist things are different this time, but they have yet to prove it. “Because we haven’t built one of them yet, we can promise that they’re not going to be a problem to build,” quips Gregory Jaczko, a former NRC chair who has since become the technology’s most biting critic. “So there’s no evidence of our failure.”

The Challenge

The cooling tower of the Hope Creek nuclear plant rises 50 stories above Artificial Island, New Jersey, built up on the marshy edge of the Delaware River. The three reactors here—one belonging to Hope Creek, and two run by the Salem Generating Station, which shares the site—generate an astonishing 3,465 megawatts of electricity, or roughly 40% of New Jersey’s total supply. Construction began in 1968, and was completed in 1986. Their closest human neighbors are across the river in Delaware. Otherwise the plant is surrounded by protected marshlands, pocked with radiation sensors and the occasional guard booth. Of the 1,500 people working here, around 100 are licensed reactor operators—a special designation given by the NRC, and held by fewer than 4,000 people in the country.

Among the newest in their ranks is Judy Rodriguez, an Elizabeth, N.J., native and another MIT grad. "Do I have your permission to enter?" she asks the operator on duty in the control room for the Salem Two reactor, which came online in 1981 and is capable of generating 1,200 megawatts of power. The operator opens a retractable belt barrier, like at an airport, and we step across a thick red line in the carpet. A horseshoe-shaped gray cabinet holds hundreds of buttons, glowing indicators and blinking lights, but a red LED counter at the center of the wall shows the most important number in the room: 944 megawatts, the amount of power the Salem Two reactor was generating that afternoon in September. Beside it is a circular pattern of square indicator lights showing the uranium fuel assemblies inside the core, deep inside the concrete domed containment building a couple hundred yards away. Salem Two has 764 of these assemblies; each is about 6 in. square and 15 ft. tall. They hold the source of the reactor's energy, materials that are among the most guarded and controlled on earth. To make sure no one working there forgets that fact, a phrase is painted on walls all around the plant: "Line of Sight to the Reactor."

As the epitome of critical infrastructure, this station has been buffeted by the crises the U.S. has suffered in the past few decades. After 9/11, the three reactors here absorbed nearly $100 million in security upgrades. Everyone entering the plant passes through metal and explosives detectors, and radiation detectors on the way out. Walking between the buildings entails crossing a concrete expanse beneath high bullet-resistant enclosures (BREs). The plant has a guard corps with more members than any force in New Jersey besides the state police, and federal NRC rules mean that they don't have to abide by state limitations on automatic weapons.

The scale and complexity of the operation is staggering—and expensive. "The place you're sitting at right now costs us about $1.5 million to $2 million a day to run," says Ralph Izzo, president and CEO of PSEG, New Jersey's public utility company, which owns and operates the plants. "If those plants aren't getting that in market, that's a rough pill to swallow." In 2019, the New Jersey Board of Public Utilities agreed to $300 million in annual subsidies to keep the three reactors running. The justification is simple: if the state wants to meet its carbon-reduction goals, keeping the plants online is essential, given that they supply 90% of the state's zero-carbon energy. In September, the Illinois legislature came to the same conclusion as New Jersey, approving almost $700 million over five years to keep two existing nuclear plants open. The bipartisan infrastructure bill includes $6 billion in additional support (along with nearly $10 billion for development of future reactors). Even more is expected in the broader Build Back Better bill.

These subsidies—framed in both states as "carbon mitigation credits"—acknowledge the reality that nuclear plants cannot, on their own terms, compete economically with natural gas or coal. "There has always been a perception of this technology that never was matched by reality," says Jaczko. The subsidies also show how climate change has altered the equation, but not decisively enough to guarantee nuclear's future. Lawmakers and energy companies are coming to terms with nuclear's new identity as clean power, deserving of the same economic incentives as solar and wind. Operators of existing plants want to be compensated for producing enormous amounts of carbon-free energy, according to Josh Freed, of Third Way, a Washington, D.C., think tank that champions nuclear power as a climate solution. "There's an inherent benefit to providing that, and it should be paid for." For the moment, that has brought some assurance to U.S. nuclear operators of their future prospects. "A megawatt of zero-carbon electricity that's leaving the grid is no different from a new megawatt of zero-carbon electricity coming onto the grid," says Kathleen Barrón, senior vice president of government and regulatory affairs and public policy at Exelon, the nation's largest operator of nuclear reactors.

Globally, nations are struggling with the same equation. Germany and Japan both shuttered many of their plants after the Fukushima disaster, and saw their progress at reducing carbon emissions suffer. Germany has not built new renewables fast enough to meet its electricity needs, and has made up the gap with dirty coal and natural gas imported from Russia. Japan, under international pressure to move more aggressively to meet its carbon targets, announced in October that it would work to restart its reactors. “Nuclear power is indispensable when we think about how we can ensure a stable and affordable electricity supply while addressing climate change,” said Koichi Hagiuda, Japan’s minister of economy, trade and industry, at an October news conference. China is building more new nuclear reactors than any other country, with plans for as many as 150 by the 2030s, at an estimated cost of nearly half a trillion dollars. Long before that, in this decade, China will overtake the U.S. as the operator of the world’s largest nuclear-energy system.

The future won’t be decided by choosing between nuclear or solar power. Rather, it’s a technically and economically complicated balance of adding as much renewable energy as possible while ensuring a steady supply of electricity. At the moment, that’s easy. “There is enough opportunity to build renewables before achieving penetration levels that we’re worried about the grid having stability,” says PSEG’s Izzo. New Jersey, for its part, is aiming to add 7,500 megawatts of offshore wind by 2035—or about the equivalent of six new Salem-sized reactors. The technology to do that is readily at hand—Kansas alone has about that much wind power installed already.

The challenge comes when renewables make up a greater proportion of the electricity supply—or when the wind stops blowing. The need for “firm” generation becomes more crucial. “You cannot run our grid solely on the basis of renewable supply,” says Izzo. “One needs an interseasonal storage solution, and no one has come up with an economic interseasonal storage solution.”

Existing nuclear's best pitch—aside from the very fact it exists already—is its "capacity factor," the industry term for how often a plant meets its full energy-making potential. For decades, nuclear plants struggled with outages and long maintenance periods. Today, improvements in management and technology make them more likely to run continuously—or "breaker to breaker"—between planned refuelings, which usually occur every 18 months, and take about a month. At Salem and Hope Creek, PSEG hangs banners in the hallways to celebrate each new record run without a maintenance breakdown. That improvement stretches across the industry. "If you took our performance back in the mid-'70s, and then look at our performance today, it's equivalent to having built 30 new reactors," says Maria Korsnick, president and CEO of the Nuclear Energy Institute, the industry's main lobbying organization. That improved reliability has become its major calling card today.
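
For a rough sense of what "breaker to breaker" operation implies for capacity factor, here is a minimal back-of-the-envelope sketch in Python. It assumes the 18-month fuel cycle and roughly month-long refueling outage described above; the 1,200-megawatt rating is the Salem Two figure cited later in this piece, and the result is illustrative rather than plant data.

```python
# Back-of-the-envelope capacity factor: energy actually generated divided by
# the energy the plant would produce running at full rated power nonstop.
# Assumes the ~18-month fuel cycle and ~1-month refueling outage described
# in the article; the 1,200 MW rating is the Salem Two figure cited there.

rated_mw = 1_200                    # rated output (cancels out of the ratio)
cycle_months = 18                   # run between planned refuelings
outage_months = 1                   # planned refueling outage
hours_per_month = 730               # rough average

hours_total = (cycle_months + outage_months) * hours_per_month
hours_online = cycle_months * hours_per_month

capacity_factor = (rated_mw * hours_online) / (rated_mw * hours_total)
print(f"Capacity factor over one breaker-to-breaker cycle: {capacity_factor:.0%}")  # ~95%
```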

Over the next 20 years, nuclear plants will need to develop new tricks. “One of the new words in our vocabulary is flexibility,” says Marilyn Kray, vice president of nuclear strategy and development at Exelon, which operates 21 reactors. “Flexibility not only in the existing plants, but in the designs of the emerging ones, to make them even more flexible and adaptable to complement renewables.” Smaller plants can adapt more easily to the grid, but they can also serve new customers, like providing energy directly to factories, steel mills or desalination plants.

Bringing those small plants into operation could be worth it, but it won't be easy. "You can't just excuse away the thing that's at the center of all of it, which is it's just a hard technology to build," says Jaczko, the former NRC chair. "It's difficult to make these plants, it's difficult to design them, it's difficult to engineer them, it's difficult to construct them. At some point, that's got to be the obvious conclusion to this technology."

But the equally obvious conclusion is we can no longer live without it. “The reality is, you have to really squint to see how you get to net zero without nuclear,” says Third Way’s Freed. “There’s a lot of wishful thinking, a lot of fingers crossed.”


What can the West learn from the Arabian Gulf’s payments revolution?

Hassan Zebdeh, Financial Crime Advisor at Eastnets

A decade ago, paying for coffee at a small café in Riyadh meant fumbling with cash – or, at best, handing over a plastic card. Today, locals casually wave smartphones over terminals, instantly settling the bill, splitting it among friends, and even transferring money abroad before their drink cools.

This seemingly trivial scene illustrates a profound truth: while the West debates incremental upgrades to ageing payment systems, the Arabian Gulf has leapfrogged straight into the future. As of late 2024, Saudi Arabia achieved a remarkable 98% adoption rate for contactless payments in face-to-face transactions, a significant leap from just 4% in 2017.

Align financial transformation with a bold national vision

One milestone that exemplifies the Gulf’s approach is Saudi Arabia’s launch of its first Swift Service Bureau. While not the first SSB worldwide, its presence in the Kingdom underscores a broader theme: rather than rely on piecemeal upgrades to older infrastructure, Saudi Arabia chose a proven yet modern route, aligned to Vision 2030, to unify international payment standards, enhance security, and reduce operational overhead.

And it matters: in a region heavily reliant on expatriate workers, whose steady stream of remittances powers whole economies, the stakes for frictionless cross-border transactions are unusually high. Rather than tinkering around the edges of an ageing system, Saudi Arabia opted for a bold and coherent solution, deliberately aligning national pride and purpose with practical financial innovation. It's a reminder that infrastructure, at its best, doesn't merely enable transactions; it reshapes how people imagine the future.

Make regulation a launchpad, not a bottleneck

Regulation often carries the reputation of an overprotective parent – necessary, perhaps, but tiresome, cautious to a fault, and prone to slowing progress rather than enabling it. It's the bureaucratic equivalent of wrapping every new idea in bubble wrap and paperwork. Yet Bahrain has managed something rare: flipping the narrative entirely. Instead of acting solely as gatekeepers, Bahraini regulators decided to become collaborators. Their fintech sandbox isn't merely a regulatory innovation; it's psychological brilliance, transforming a potentially adversarial relationship into a partnership.

Within this curated environment, fintech firms have launched practical experiments with striking results. Take Tarabut Gateway, which pioneered open banking APIs, reshaping how banks and customers interact. Rain, a cryptocurrency exchange, tested compliance frameworks safely, quickly becoming one of the Gulf’s trusted crypto players. Elsewhere, startups trialled AI-driven identity verification and seamless cross-border payments, all under the watchful yet adaptive guidance of Bahraini regulators. Successes were rapidly scaled; failures offered immediate lessons, free from damaging legal fallout. Bahrain proves regulation, thoughtfully applied, can genuinely empower innovation rather than restrict it.

Prioritise cross-border interoperability and unified standards

Cross-border payments have long been a maddening puzzle – expensive, sluggish, and unpredictably complicated. Most Western banks seem resigned to this reality, treating the spaghetti-like mess of correspondent banking relationships as a necessary evil. Yet Gulf states looked at this same complexity and saw not just inconvenience, but opportunity. Instead of battling against the tide, they cleverly redirected it, embracing standards like ISO 20022, which neatly streamline data exchange and slash friction from global transactions.

Examples abound: Saudi Arabia's adoption of ISO 20022 through its Swift Service Bureau is expected to notably accelerate cross-border transactions and improve transparency. The UAE and Saudi Arabia also jointly piloted Project Aber, a digital currency initiative that significantly reduced settlement times for interbank payments. Similarly, Bahrain's collaboration with fintechs has simplified previously burdensome remittance processes, reducing both cost and complexity.
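
To make the ISO 20022 point concrete, below is a minimal, hypothetical Python sketch of the kind of structured, machine-readable payment message the standard defines. The element names echo the pacs.008 customer credit transfer vocabulary, but the simplified structure and all values are illustrative, not a conformant message.

```python
# Minimal, illustrative sketch of an ISO 20022-style payment message.
# Element names echo the pacs.008 (customer credit transfer) vocabulary,
# but this is a simplified, non-conformant example with made-up values.
import xml.etree.ElementTree as ET

msg = ET.Element("FIToFICstmrCdtTrf")
hdr = ET.SubElement(msg, "GrpHdr")
ET.SubElement(hdr, "MsgId").text = "EXAMPLE-0001"          # hypothetical reference
ET.SubElement(hdr, "CreDtTm").text = "2024-11-01T10:15:00"

tx = ET.SubElement(msg, "CdtTrfTxInf")
ET.SubElement(tx, "IntrBkSttlmAmt", Ccy="SAR").text = "2500.00"
ET.SubElement(tx, "Dbtr").text = "Example Remitter"
ET.SubElement(tx, "Cdtr").text = "Example Beneficiary"
ET.SubElement(tx, "RmtInf").text = "Family remittance, invoice 42"

# Rich, structured remittance data like this is what lets banks automate
# screening and reconciliation instead of parsing free-text fields.
print(ET.tostring(msg, encoding="unicode"))
```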

Target digital ecosystems for financial inclusion

One of the most intriguing elements of the Gulf’s payments transformation is the speed and enthusiasm with which consumers embraced new technologies. In Bahrain, mobile wallet payments surged by 196% in 2021, contributing to a nearly 50% year-over-year increase in digital payment volumes. Similarly, Saudi Arabia experienced a near tripling of mobile payment volumes in the same year, with mobile transactions accounting for 35% of all payments. 

The West, by contrast, still struggles with financial inclusion. In the U.S., millions remain unbanked or underbanked, held back by distrust, geographic isolation, and high fees. Digital solutions exist, but widespread adoption has lagged, partly because major institutions view inclusion as a long-term aspiration rather than an immediate priority. The Gulf shows that when digital tools are made integral to daily life, rather than optional extras, the barriers to financial inclusion quickly dissolve.

The road ahead

As the Gulf region continues to refine its payment systems – experimenting with digital currencies, advanced data protection laws, and AI-driven compliance – the ripple effects will be felt far beyond the GCC. Western players can treat these developments as an external threat or as a chance to rejuvenate their own approaches.

Ultimately, if you want a glimpse of where financial services may be headed – towards integrated platforms, real-time international transactions, and widespread digital inclusion – the Gulf experience is a prime example of what's possible. The question is whether other markets will step up, follow suit, and even surpass these achievements. With global financial landscapes evolving at record speed, hesitation carries its own risks. The Arabian Gulf has shown that bold bets can pay off; perhaps that's the most enduring lesson for the West.


Unlocking business growth with efficient finance operations

Rob Israch, President at Tipalti

The UK economy has faced a turbulent couple of years, meaning now more than ever, businesses need to stay agile. With Reeves’s national insurance hikes now fully in play and global trade tensions casting a shadow over the landscape, the coming months will present a crucial opportunity for businesses to decide how to best move forward. 

That said, it’s not all doom and gloom. The latest official figures show that the UK’s economy unexpectedly grew at a rate of 0.5% in February – a welcome sign of resilience. But turning this momentum into sustainable growth will hinge on effective financial management – essential for long term success.

Although many are currently prioritising stability, sustainable growth is still within reach with the right approach. By making use of data and insights from the finance team, companies can pinpoint efficient paths to expansion. However, this relies on having real-time information at their fingertips to support agile, well-timed decisions.

While growth may be tough to come by this year, businesses can stay on track by adopting a few essential strategies.

Improving efficiency by eliminating finance bottlenecks

Growth is the ultimate goal for any business, but it must be managed carefully to ensure long-term sustainability. Uncertain times present an opportunity to eliminate inefficiencies and build a strong foundation for future success.

A significant bottleneck for many businesses is the finance function’s reliance on manual processes for invoice processing, reporting and reconciliation. These tasks are not only time-consuming but also introduce errors, delays and inefficiencies. As a result, finance teams become stretched thin. Our recent survey found that, on average, over half (51%) of accounts payable time is spent on manual tasks – severely limiting finance leaders’ ability to drive strategic growth.

Repetitive tasks such as data entry, reconciliation, and approvals require considerable time and effort, slowing down decision-making and increasing the risk of inaccuracies. Given the critical role that finance plays in guiding business strategy, these inefficiencies and errors create significant roadblocks to growth.  

The pressure on finance leaders is therefore immense, and while 71% of UK business leaders believe CFOs should take a central role in corporate growth initiatives, they are simply lost in a sea of manual processes and number crunching. In fact, 82% of finance leaders admit that excessive manual finance processes are hindering their organisation's growth plans for the year ahead. To remedy this, businesses must embrace automation.

Achieving sustainable growth with automation

By replacing manual spreadsheets with automated solutions, finance teams can eliminate administrative burdens and focus on strategic initiatives. Automation simplifies critical finance tasks like bank feeds, coding bookkeeping transactions and invoice matching. Beyond this, it can also help alleviate the strain of more complex and time-intensive responsibilities, including tax filings, invoices and payroll.
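
As a simple illustration of the kind of manual work that automation takes over, the Python sketch below matches incoming invoices to purchase orders by reference and amount. The records, field names and tolerance are hypothetical, not a description of any particular platform.

```python
# Illustrative sketch: auto-matching invoices to purchase orders.
# The records, field names and tolerance are hypothetical examples,
# not a description of any specific finance platform.

purchase_orders = {
    "PO-1001": {"supplier": "Acme Ltd", "amount": 1200.00},
    "PO-1002": {"supplier": "Beta GmbH", "amount": 540.50},
}

invoices = [
    {"invoice_id": "INV-88", "po_ref": "PO-1001", "amount": 1200.00},
    {"invoice_id": "INV-89", "po_ref": "PO-1002", "amount": 560.00},
]

TOLERANCE = 0.01  # acceptable rounding difference between invoice and PO

for inv in invoices:
    po = purchase_orders.get(inv["po_ref"])
    if po and abs(po["amount"] - inv["amount"]) <= TOLERANCE:
        print(f'{inv["invoice_id"]}: matched to {inv["po_ref"]}, queue for payment')
    else:
        print(f'{inv["invoice_id"]}: exception, route to a human reviewer')
```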

The benefits of automation extend far beyond time savings, to improved accuracy, better business visibility and real-time financial insights. With fewer errors and faster data processing, finance leaders can shift their focus to high-value tasks like driving strategy, identifying risks and opportunities and determining the optimal timing for growth investments.

Attracting investors with operational efficiency 

Once businesses have minimised time spent on administrative tasks, they can focus on the bigger picture: growth and securing investment. With cheap capital becoming increasingly difficult to access, businesses must position themselves wisely to attract funding.

Investors favour lean, efficient companies, so demonstrating that a business can achieve more with fewer resources signals a commitment to financial prudence and sustainability. By embracing automation, companies can showcase their ability to manage operations efficiently, instilling confidence that any new investment will be used wisely.

Economic uncertainty provides an opportunity to reassess business foundations and create more agile operations. Refining workflows and eliminating bottlenecks not only improves performance but also strengthens investor confidence by demonstrating a long-term commitment to financial health.

Additionally, strong financial reporting and effective cash flow management are crucial to standing out to investors. Clear, real-time insights into financial health demonstrate a business's resilience and readiness for growth.

The growth journey ahead

Though the landscape remains tough for UK businesses, sustainable growth is still achievable with a clear and focused strategy. By empowering finance leaders to step into more strategic and high-level decision making roles, organisations can stay resilient and agile amid ongoing economic headwinds.

UK businesses have fought to stay afloat, so now is the time to rebuild strength. By embracing more strategic financial management to build resilience, they can set the stage for long-term, sustainable growth, whatever the economic climate brings.


The Consortium Conundrum: Debunking Modern Fraud Prevention Myths

By Husnain Bajwa, SVP of Product, Risk Solutions, SEON


As digital threats escalate, businesses are desperately seeking comprehensive solutions to counteract the growing complexity and sophistication of evolving fraud vectors. The latest industry trend – consortium data sharing – promises a revolutionary approach to fraud prevention, where organisations combine their data to strengthen fraud defences.

It’s understandable how the consortium data model presents an appealing narrative of collective intelligence: by pooling fraud insights across multiple organisations, businesses hope to create an omniscient network capable of instantaneously detecting and preventing fraudulent activities.

And this approach seems intuitive – more data should translate to better protection. However, the reality of data sharing is far more complex and fundamentally flawed. Overlooked hurdles reveal significant structural limitations that undermine the effectiveness of consortium strategies, preventing this approach from fulfilling its potential to safeguard against fraud. Here are several key misconceptions that explain why consortium approaches fail to deliver their promised benefits.


Fallacy of Scale Without Quality


One of the most persistent myths in fraud prevention mirrors the trope of enhancing a low-resolution image to reveal details that were never captured. There's a pervasive belief that massive volumes of consortium data can reveal insights not present in any of the original signals. However, this represents a fundamental misunderstanding of information theory and data analysis.

To protect participant privacy, consortium approaches strip away critical information elements relevant to fraud detection. This includes precise identifiers, nuanced temporal sequences and essential contextual metadata. The loss of granular signal fidelity required to anonymise information and make data sharing viable skews the data while eroding its quality and reliability. The result is a sanitised dataset that bears little resemblance to the rich, complex information needed for effective fraud prevention. Embedded reporting biases from different contributing entities exacerbate these quality issues further. Knowing where data comes from is imperative, and consortium data frequently lacks freshness and provenance.

Competitive Distortion is a Problem


Competitive dynamics can impact the efficacy of shared data strategies. Businesses today operate in competitive environments marked by inherent conflicts, where companies have strategic reasons to restrict their information sharing. The selective reporting of fraud cases, intentional delays in sharing emerging fraud patterns and strategic obfuscation of crucial insights can lead to a “tragedy of the commons” situation, where individual organisational interests systematically degrade the potential of consortium information sharing for the collective benefit.

Moreover, when direct competitors share data, organisations often limit their contributions to non-sensitive fraud cases or withhold high-value signals that reduce the effectiveness of the consortium dynamics.

Anonymisation’s Hidden Costs


Consortiums are compelled to aggressively anonymise data to sidestep the legal and ethical concerns of operating akin to de facto credit reporting agencies. This anonymisation process encompasses removing precise identifiers, truncating temporal sequences, coarsening behavioural patterns, eliminating cross-entity relationships and reducing contextual signals. Such extensive modifications limit the data’s utility for fraud detection by obscuring the details necessary for identifying and analysing nuanced fraudulent activities.
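
As a minimal sketch of this trade-off, the Python example below applies two common anonymisation steps – salted hashing of identifiers and coarsening of timestamps – to a hypothetical event record. The scheme, fields and values are assumptions for illustration, not any consortium's actual process.

```python
# Illustrative sketch of how anonymisation erodes fraud signal.
# The anonymisation scheme, event record and fields are hypothetical.
import hashlib
from datetime import datetime

def anonymise(event, salt):
    """Hash the identifier with a per-contributor salt and coarsen the timestamp."""
    return {
        # Salted hashing means the same account reported by two consortium
        # members no longer links up across their contributions...
        "account": hashlib.sha256((salt + event["account"]).encode()).hexdigest()[:12],
        # ...and truncating the timestamp to the day discards the temporal
        # sequence: two transfers seconds apart become indistinguishable.
        "day": event["timestamp"].date().isoformat(),
        "amount_band": "high" if event["amount"] > 1000 else "low",
    }

raw_event = {
    "account": "acct-42",
    "timestamp": datetime(2024, 5, 1, 9, 59, 12),
    "amount": 1800,
}

# The same underlying account, reported by two contributors, yields two
# unrelated hashes and a context-free, day-level record.
print(anonymise(raw_event, salt="member-a"))
print(anonymise(raw_event, salt="member-b"))
```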

These anonymisation efforts, needed to preserve privacy, also mean that vital contextual information is lost, significantly hampering the ability to detect fraud trends over time and diluting the effectiveness of such data. This overall reduction in data utility illustrates the profound trade-offs required to balance privacy concerns with effective fraud detection.

The Problem of Lost Provenance


In the critical frameworks of DIKA (Data, Information, Knowledge, Action) and OODA (Observe, Orient, Decide, Act), data provenance is essential for validating information quality, understanding contextual relevance, assessing temporal applicability, determining confidence levels and guiding action selection. However, once data provenance is lost through consortium sharing, it is irrecoverable, leading to a permanent degradation in decision quality.

This loss of provenance becomes even more critical at the moment of decision-making. Without the ability to verify the freshness of data, assess the reliability of its sources or understand the context in which it was collected, decision-makers are left with limited visibility into preprocessing steps and a reduced confidence in their signal interpretation. These constraints hinder the effectiveness of fraud detection efforts, as the underlying data lacks the necessary clarity for precise and timely decision-making.

The Realities of Fraud Detection Techniques


Modern fraud prevention hinges on well-established analytical techniques such as rule-based pattern matching, supervised classification, anomaly detection, network analysis and temporal sequence modelling. These methods underscore a critical principle in fraud detection: the signal quality far outweighs the data volume. High-quality, context-rich data enhances the effectiveness of these techniques, enabling more accurate and dynamic responses to potential fraud.
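
To ground the point that signal quality outweighs volume, here is a minimal, hypothetical Python example combining two of the techniques named above: a hard amount rule and a simple velocity (temporal) check. It works only because each event keeps a precise account identifier and timestamp – exactly the fields that consortium anonymisation tends to strip. All events, thresholds and field names are illustrative.

```python
# Illustrative sketch of rule-based pattern matching plus a simple velocity
# check. The events, thresholds and field names are hypothetical examples.
from datetime import datetime, timedelta

events = [
    {"account": "acct-7", "timestamp": datetime(2024, 5, 1, 10, 0, 0), "amount": 950},
    {"account": "acct-7", "timestamp": datetime(2024, 5, 1, 10, 0, 40), "amount": 980},
    {"account": "acct-9", "timestamp": datetime(2024, 5, 1, 11, 0, 0), "amount": 40},
]

AMOUNT_LIMIT = 900                        # rule: flag unusually large transfers
VELOCITY_WINDOW = timedelta(minutes=5)    # rule: flag rapid repeat activity

last_seen = {}
for e in sorted(events, key=lambda e: e["timestamp"]):
    flags = []
    if e["amount"] > AMOUNT_LIMIT:
        flags.append("large amount")
    prev = last_seen.get(e["account"])
    if prev and e["timestamp"] - prev <= VELOCITY_WINDOW:
        flags.append("rapid repeat activity")
    last_seen[e["account"]] = e["timestamp"]
    print(e["account"], e["timestamp"].time(), flags or "ok")
```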

Despite the rapid advancements in machine learning (ML) and data science, the fundamental constraints of fraud detection remain unchanged. The effectiveness of advanced ML models is still heavily dependent on the quality of data, the intricacy of feature engineering, the interpretability of models and adherence to regulatory compliance and operational constraints. No degree of algorithmic sophistication can compensate for fundamental data limitations.

As a result, the core of effective fraud detection continues to rely more on the precision and context of data rather than sheer quantity. This reality shapes the strategic focus of fraud prevention efforts, prioritising data integrity and actionable insights over expansive but less actionable data sets.

Evolving Into Trust & Safety: The Imperative for High-Quality Data


As the scope of fraud prevention broadens into the more encompassing field of trust and safety, the requirements for effective management become more complex. New demands, such as end-to-end activity tracking, cross-domain risk assessment, behavioural pattern analysis, intent determination and impact evaluation, all rely heavily on the quality and provenance of data.

In trust and safety operations, maintaining clear audit trails, ensuring source verification, preserving data context, assessing actions’ impact, and justifying decisions become paramount.

However, the nature of consortium data, which is anonymised and decontextualised to protect privacy and meet regulatory standards, cannot fundamentally support clear audit trails, ensure source verification, preserve data context, or readily assess the impact of actions to justify decisions. These limitations showcase the critical need for organisations to develop their own rich, contextually detailed datasets that retain provenance and can be directly applied to operational needs to ensure that trust and safety measures are comprehensive, effectively targeted, and relevant.

Rethinking Data Strategies


While consortium data sharing offers a compelling vision, its execution is fraught with challenges that diminish its practical utility. Fundamental limitations such as data quality concerns, competitive dynamics, privacy requirements and the critical need for provenance preservation undermine the effectiveness of such collaborative efforts. Instead of relying on massive, shared datasets of uncertain quality, organisations should pivot toward cultivating their own high-quality internal datasets.

The future of effective fraud prevention lies not in the quantity of shared data but in the quality of proprietary, context-rich data with clear provenance and direct operational relevance. By building and maintaining high-quality datasets, organisations can create a more resilient and effective fraud prevention framework tailored to their specific operational needs and challenges.
