

A New Generation of Nuclear Reactors Could Hold the Key to a Green Future

Source: Time

On a conference-room whiteboard in the heart of Silicon Valley, Jacob DeWitte sketches his startup’s first product. In red marker, it looks like a beer can in a Koozie, stuck with a crazy straw. In real life, it will be about the size of a hot tub, and made from an array of exotic materials, like zirconium and uranium. Under carefully controlled conditions, they will interact to produce heat, which in turn will make electricity—1.5 megawatts’ worth, enough to power a neighborhood or a factory. DeWitte’s little power plant will run for a decade without refueling and, amazingly, will emit no carbon. “It’s a metallic thermal battery,” he says, coyly. But more often DeWitte calls it by another name: a nuclear reactor.

Fission isn’t for the faint of heart. Building a working reactor—even a very small one—requires precise and painstaking efforts of both engineering and paper pushing. Regulations are understandably exhaustive. Fuel is hard to come by—they don’t sell uranium at the Gas-N-Sip. But DeWitte plans to flip the switch on his first reactor around 2023, a mere decade after co-founding his company, Oklo. After that, Oklo wants to do for neighborhood nukes what Tesla has done for electric cars: use a niche and expensive first version as a stepping stone toward cheaper, bigger, higher-volume products. In Oklo’s case, that means starting with a “microreactor” designed for remote communities, like Alaskan villages, currently dependent on diesel fuel trucked, barged or even flown in at exorbitant expense—then building more, and incrementally larger, reactors until its zero-carbon energy source can meaningfully contribute to the global effort to reduce fossil-fuel emissions.

At global climate summits, in the corridors of Congress and at statehouses around the U.S., nuclear power has become the contentious keystone of carbon reduction plans. Everyone knows they need it. But no one is really sure they want it, given its history of accidents. Or even if they can get it in time to reach urgent climate goals, given how long it takes to build. Oklo is one of a growing handful of companies working to solve those problems by putting reactors inside safer, easier-to-build and smaller packages. None of them are quite ready to scale to market-level production, but given the investments being made into the technology right now, along with an increasing realization that we won’t be able to shift away from fossil fuels without nuclear power, it’s a good bet that at least one of them becomes a game changer.

If existing plants are the energy equivalent of a 2-liter soda bottle, with giant, 1,000-megawatt-plus reactors, Oklo’s strategy is to make reactors by the can. The per-megawatt construction costs might be higher, at least at first. But producing units in a factory would give the company a chance to improve its processes and to lower costs. Oklo would pioneer a new model. Nuclear plants need no longer be bet-the-company big, even for giant utilities. Venture capitalists can get behind the potential to scale to a global market. And climate hawks should fawn over a zero-carbon energy option that complements burgeoning supplies of wind and solar power. Today’s plants run most efficiently at full blast, which makes it hard for them to adapt to a grid increasingly powered by variable sources (not every day is sunny, or windy); the next generation of nuclear technology aims to be more flexible, able to respond quickly to ups and downs in supply and demand.

Engineering these innovations is hard. Oklo’s 30 employees are busy untangling the knots of safety and complexity that sent the cost of building nuclear plants to the stratosphere and all but halted their construction in the U.S. “If this technology was brand-new—like if fission was a recent breakthrough out of a lab, 10 or 15 years ago—we’d be talking about building our 30th reactor,” DeWitte says.

But fission is an old, and fraught, technology, and utility companies are scrambling now to keep their existing gargantuan nuclear plants open. Economically, they struggle to compete with cheap natural gas, along with wind and solar, often subsidized by governments. Yet climate-focused nations like France and the U.K. that had planned to phase out nuclear are instead doubling down. (In October, French President Emmanuel Macron backed off plans to close 14 reactors, and in November, he announced the country would instead start building new ones.) At the U.N. climate summit in Glasgow, the U.S. announced its support for Poland, Kenya, Ukraine, Brazil, Romania and Indonesia to develop their own new nuclear plants—while European negotiators argued over whether nuclear energy counts as “green.” All the while, Democrats and Republicans are (to everyone’s surprise) often aligned on nuclear’s benefits—and, in many cases, putting the power of the purse behind it, both to keep old plants open in the U.S. and to speed up new technologies domestically and overseas.

It makes for a decidedly odd moment in the life of a technology that already altered the course of one century, and now wants to make a difference in another. There are 93 operating nuclear reactors in the U.S.; combined, they supply 20% of U.S. electricity, and 50% of its carbon-free electricity. Nuclear should be a climate solution, satisfying both technical and economic needs. But while the existing plants finally operate with enviable efficiency (after 40 years of working out the kinks), the next generation of designs is still a decade away from being more than a niche player in our energy supply. Everyone wants a steady supply of electricity, without relying on coal. Nuclear is paradoxically right at hand, and out of reach.

For that to change, “new nuclear” has to emerge before the old nuclear plants recede. It has to keep pace with technological improvements in other realms, like long-term energy storage, where each incremental improvement increases the potential for renewables to supply more of our electricity. It has to be cheaper than carbon-capture technologies, which would allow flexible gas plants to operate without climate impacts (but are still too expensive to build at scale). And finally it has to arrive before we give up—before the spectre of climate catastrophe creates a collective “doomerism,” and we stop trying to change.

Not everyone thinks nuclear can reinvent itself in time. “When it comes to averting the imminent effects of climate change, even the cutting edge of nuclear technology will prove to be too little, too late,” predicts Allison Macfarlane, former chair of the U.S. Nuclear Regulatory Commission (NRC)—the government agency singularly responsible for permitting new plants. Can a stable, safe, known source of energy rise to the occasion, or will nuclear be cast aside as too expensive, too risky and too late?

Trying Again

Nuclear began in a rush. In 1942, in the depths of World War II, the U.S. began the Manhattan Project, the vast effort to develop atomic weapons. It employed 130,000 people at secret sites across the country, the most famous of which was Los Alamos Laboratory, near Albuquerque, N.M., where Robert Oppenheimer led the design and construction of the first atomic bombs. DeWitte, 36, grew up nearby. Even as a child of the ’90s, he was steeped in the state’s nuclear history, and preoccupied with the terrifying success of its engineering and the power of its materials. “It’s so incredibly energy dense,” says DeWitte. “A golf ball of uranium would power your entire life!”

DeWitte has taken that bromide almost literally. He co-founded Oklo in 2013 with Caroline Cochran, while both were graduate students in nuclear engineering at the Massachusetts Institute of Technology. When they arrived in Cambridge, Mass., in 2007 and 2008, the nuclear industry was on a precipice. Then-presidential candidate Barack Obama espoused a new eagerness to address climate change by reducing carbon emissions—which at the time meant less coal, and more nuclear. (Wind and solar energy were still a blip.) It was an easy sell. In competitive power markets, nuclear plants were profitable. The 104 operating reactors in the U.S. at the time were running smoothly. There hadn’t been a major accident since Chernobyl, in 1986.

The industry excitedly prepared for a “nuclear renaissance.” At the peak of interest, the NRC had applications for 30 new reactors in the U.S. Only two would be built. The cheap natural gas of the fracking boom began to drive down electricity prices, eroding nuclear’s profits. Newly subsidized renewables, like wind and solar, added even more electricity generation, further saturating the markets. When on March 11, 2011, an earthquake and subsequent tsunami rolled over Japan’s Fukushima Daiichi nuclear power plant, leading to the meltdown of three of its reactors and the evacuation of 154,000 people, the final nail was driven into the industry’s coffin. Not only would there be no renaissance in the U.S., but the existing plants had to justify their safety. Japan shut down 46 of its 50 operating reactors. Germany closed 11 of its 17. The U.S. fleet held on politically, but struggled to compete economically. Since Fukushima, 12 U.S. reactors have begun decommissioning, with three more planned.

At MIT, Cochran and DeWitte—who were teaching assistants together for a nuclear reactor class in 2009, and married in 2011—were frustrated by the setback. “It was like, There’re all these cool technologies out there. Let’s do something with it,” says Cochran. But the nuclear industry has never been an easy place for innovators. In the U.S., its operational ranks have long been dominated by “ring knockers”—the officer corps of the Navy’s nuclear fleet, properly trained in the way things are done, but less interested in doing them differently. Governments had always kept a tight grip on nuclear; for decades, the technology was shrouded in secrecy. The personal computing revolution, and then the wild rise of the Internet, further drained engineering talent. From DeWitte and Cochran’s perspective, the nuclear-energy industry had already ossified by the time Fukushima and fracking totally brought things to a halt. “You eventually got to the point where it’s like, we have to try something different,” DeWitte says.

He and Cochran began to discreetly convene their MIT classmates for brainstorming sessions. Nuclear folks tend to be dogmatic about their favorite method of splitting atoms, but they stayed agnostic. “I didn’t start thinking we had to do everything differently,” says DeWitte. Rather, they had a hunch that marginal improvements might yield major results, if they could be spread across all of the industry’s usual snags—whether regulatory approaches, business models, the engineering of the systems themselves, or the challenge of actually constructing them.

In 2013, Cochran and DeWitte began to rent out the spare room in their Cambridge home on Airbnb. Their first guests were a pair of teachers from Alaska. The remote communities they taught in were dependent on diesel fuel for electricity, brought in at enormous cost. That energy scarcity created an opportunity: in such an environment, even a very expensive nuclear reactor might still be cheaper than the current system. The duo targeted a price of $100 per megawatt hour, more than double typical energy costs. They imagined using this high-cost early market as a pathway to scale their manufacturing. They realized that to make it work economically, they wouldn’t have to reinvent the reactor technology, only the production and sales processes. They decided to own their reactors and supply electricity, rather than supply the reactors themselves—operating more like today’s solar or wind developers. “It’s less about the technology being different,” says DeWitte, “than it is about approaching the entire process differently.”

That maverick streak raised eyebrows among nuclear veterans—and cash from Silicon Valley venture capitalists, including a boost from Y Combinator, where companies like Airbnb and Instacart got their start. In the eight years since, Oklo has distinguished itself from the competition by thinking smaller and moving faster. There are others competing in this space: NuScale, based in Oregon, is working to commercialize a reactor similar in design to existing nuclear plants, but constructed in 60-megawatt modules. TerraPower, founded by Bill Gates in 2006, has plans for a novel technology that uses its heat for energy storage, rather than to spin a turbine, which makes it an even more flexible option for electric grids that increasingly need that pliability. And X-energy, a Maryland-based firm that has received substantial funding from the U.S. Department of Energy, is developing 80-megawatt reactors that can also be grouped into “four-packs,” bringing them closer in size to today’s plants. Yet all are still years—and a billion dollars—away from their first installations. Oklo brags that its NRC application is 20 times shorter than NuScale’s, and that its proposal cost 100 times less to develop. (Oklo’s proposed reactor would produce one-fortieth the power of NuScale’s.) The NRC accepted Oklo’s application for review in March 2020, and regulations guarantee that the process will be complete within three years. Oklo plans to power on around 2023, at a site at the Idaho National Laboratory, one of the U.S.’s oldest nuclear-research sites and one already approved for such efforts. Then comes the hard part: doing it again and again, booking enough orders to justify building a factory to make many more reactors, driving costs down, and hoping politicians and activists worry more about the menace of greenhouse gases than the hazards of splitting atoms.

Nuclear-industry veterans remain wary. They have seen this all before. Westinghouse’s AP1000 reactor, first approved by the NRC in 2005, was touted as the flagship technology of Obama’s nuclear renaissance. It promised to be safer and simpler, using gravity rather than electricity-driven pumps to cool the reactor in case of an emergency—in theory, this would mitigate the danger of power outages, like the one that led to the Fukushima disaster. Its components could be constructed at a centralized location, and then shipped in giant pieces for assembly.

But all that was easier said than done. Westinghouse and its contractors struggled to manufacture the components according to nuclear’s mega-exacting requirements, and in the end only one AP1000 project in the U.S. actually happened: the Vogtle Electric Generating Plant in Georgia. Approved in 2012, its two reactors were expected at the time to cost $14 billion and be completed in 2016 and 2017, but costs have ballooned to $25 billion. The first will open, finally, next year.

Oklo and its competitors insist things are different this time, but they have yet to prove it. “Because we haven’t built one of them yet, we can promise that they’re not going to be a problem to build,” quips Gregory Jaczko, a former NRC chair who has since become the technology’s most biting critic. “So there’s no evidence of our failure.”

The Challenge

The cooling tower of the Hope Creek nuclear plant rises 50 stories above Artificial Island, New Jersey, built up on the marshy edge of the Delaware River. The three reactors here—one belonging to Hope Creek, and two run by the Salem Generating Station, which shares the site—generate an astonishing 3,465 megawatts of electricity, or roughly 40% of New Jersey’s total supply. Construction began in 1968, and was completed in 1986. Their closest human neighbors are across the river in Delaware. Otherwise the plant is surrounded by protected marshlands, pocked with radiation sensors and the occasional guard booth. Of the 1,500 people working here, around 100 are licensed reactor operators—a special designation given by the NRC, and held by fewer than 4,000 people in the country.

Among the newest in their ranks is Judy Rodriguez, an Elizabeth, N.J., native and another MIT grad. “Do I have your permission to enter?” she asks the operator on duty in the control room for the Salem Two reactor, which came online in 1981 and is capable of generating 1,200 megawatts of power. The operator opens a retractable belt barrier, like at an airport, and we step across a thick red line in the carpet. A horseshoe-shaped gray cabinet holds hundreds of buttons, glowing indicators and blinking lights, but a red LED counter at the center of the wall shows the most important number in the room: 944 megawatts, the amount of power the Salem Two reactor was generating that afternoon in September. Beside it is a circular pattern of square indicator lights showing the uranium fuel assemblies inside the core, deep inside the concrete-domed containment building a couple hundred yards away. Salem Two holds 764 of these assemblies; each is about 6 in. square and 15 ft. tall. They contain the reactor’s fuel, which is among the most guarded and controlled materials on earth. To make sure no one working there forgets that fact, a phrase is painted on walls all around the plant: “Line of Sight to the Reactor.”

As the epitome of critical infrastructure, this station has been buffeted by the crises the U.S. has suffered in the past few decades. After 9/11, the three reactors here absorbed nearly $100 million in security upgrades. Everyone entering the plant passes through metal and explosives detectors, and radiation detectors on the way out. Walking between the buildings entails crossing a concrete expanse beneath high bullet-resistant enclosures (BREs). The plant’s guard corps has more members than any force in New Jersey besides the state police, and federal NRC rules mean that they don’t have to abide by state limitations on automatic weapons.

The scale and complexity of the operation is staggering—and expensive. “The place you’re sitting at right now costs us about $1.5 million to $2 million a day to run,” says Ralph Izzo, president and CEO of PSEG, New Jersey’s public utility company, which owns and operates the plants. “If those plants aren’t getting that in market, that’s a rough pill to swallow.” In 2019, the New Jersey Board of Public Utilities agreed to $300 million in annual subsidies to keep the three reactors running. The justification is simple: if the state wants to meet its carbon-reduction goals, keeping the plants online is essential, given that they supply 90% of the state’s zero-carbon energy. In September, the Illinois legislature came to the same conclusion as New Jersey, approving almost $700 million over five years to keep two existing nuclear plants open. The bipartisan infrastructure bill includes $6 billion in additional support (along with nearly $10 billion for development of future reactors). Even more is expected in the broader Build Back Better bill.

These subsidies—framed in both states as “carbon mitigation credits”—acknowledge the reality that nuclear plants cannot, on their own terms, compete economically with natural gas or coal. “There has always been a perception of this technology that never was matched by reality,” says Jaczko. The subsidies also show how climate change has altered the equation, but not decisively enough to guarantee nuclear’s future. Lawmakers and energy companies are coming to terms with nuclear’s new identity as clean power, deserving of the same economic incentives as solar and wind. Operators of existing plants want to be compensated for producing enormous amounts of carbon-free energy, according to Josh Freed, of Third Way, a Washington, D.C., think tank that champions nuclear power as a climate solution. “There’s an inherent benefit to providing that, and it should be paid for.” For the moment, that has brought some assurance to U.S. nuclear operators of their future prospects. “A megawatt of zero-carbon electricity that’s leaving the grid is no different from a new megawatt of zero-carbon electricity coming onto the grid,” says Kathleen Barrón, senior vice president of government and regulatory affairs and public policy at Exelon, the nation’s largest operator of nuclear reactors.

Globally, nations are struggling with the same equation. Germany and Japan both shuttered many of their plants after the Fukushima disaster, and saw their progress at reducing carbon emissions suffer. Germany has not built new renewables fast enough to meet its electricity needs, and has made up the gap with dirty coal and natural gas imported from Russia. Japan, under international pressure to move more aggressively to meet its carbon targets, announced in October that it would work to restart its reactors. “Nuclear power is indispensable when we think about how we can ensure a stable and affordable electricity supply while addressing climate change,” said Koichi Hagiuda, Japan’s minister of economy, trade and industry, at an October news conference. China is building more new nuclear reactors than any other country, with plans for as many as 150 by the 2030s, at an estimated cost of nearly half a trillion dollars. Long before that, in this decade, China will overtake the U.S. as the operator of the world’s largest nuclear-energy system.

The future won’t be decided by choosing between nuclear and solar power. Rather, it’s a technically and economically complicated balance of adding as much renewable energy as possible while ensuring a steady supply of electricity. At the moment, that’s easy. “There is enough opportunity to build renewables before we reach penetration levels where we’re worried about grid stability,” says PSEG’s Izzo. New Jersey, for its part, is aiming to add 7,500 megawatts of offshore wind by 2035—or about the equivalent of six new Salem-sized reactors. The technology to do that is readily at hand—Kansas alone has about that much wind power installed already.

The challenge comes when renewables make up a greater proportion of the electricity supply—or when the wind stops blowing. The need for “firm” generation becomes more crucial. “You cannot run our grid solely on the basis of renewable supply,” says Izzo. “One needs an interseasonal storage solution, and no one has come up with an economic interseasonal storage solution.”

Existing nuclear’s best pitch—aside from the very fact it exists already—is its “capacity factor,” the industry term for how often a plant meets its full energy-making potential. For decades, nuclear plants struggled with outages and long maintenance periods. Today, improvements in management and technology make them more likely to run continuously—or “breaker to breaker”—between planned refuelings, which usually occur every 18 months and take about a month. At Salem and Hope Creek, PSEG hangs banners in the hallways to celebrate each new record run without a maintenance breakdown. That improvement stretches across the industry. “If you took our performance back in the mid-’70s, and then look at our performance today, it’s equivalent to having built 30 new reactors,” says Maria Korsnick, president and CEO of the Nuclear Energy Institute, the industry’s main lobbying organization. That improved reliability has become nuclear’s major calling card today.
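
To put a number on that, capacity factor is simply actual generation divided by what a plant would produce running at full power all year. Here is a back-of-the-envelope sketch in Python, using illustrative figures rather than any plant's actual record:

```python
# Back-of-the-envelope capacity-factor calculation (illustrative figures only).
rated_mw = 1200          # nameplate capacity, roughly a Salem-sized unit
hours_per_year = 8760

# Suppose the unit runs flat out except for a one-month refueling outage
# every 18 months, i.e. about two-thirds of a month of downtime per year.
downtime_hours = (2 / 3) * 30 * 24

actual_mwh = rated_mw * (hours_per_year - downtime_hours)
maximum_mwh = rated_mw * hours_per_year

capacity_factor = actual_mwh / maximum_mwh
print(f"Capacity factor: {capacity_factor:.1%}")  # ~94.5%
```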

Over the next 20 years, nuclear plants will need to develop new tricks. “One of the new words in our vocabulary is flexibility,” says Marilyn Kray, vice president of nuclear strategy and development at Exelon, which operates 21 reactors. “Flexibility not only in the existing plants, but in the designs of the emerging ones, to make them even more flexible and adaptable to complement renewables.” Smaller plants can adapt more easily to the grid, but they can also serve new customers, like providing energy directly to factories, steel mills or desalination plants.

Bringing those small plants into operation could be worth it, but it won’t be easy. “You can’t just excuse away the thing that’s at the center of all of it, which is it’s just a hard technology to build,” says Jaczko, the former NRC chair. “It’s difficult to make these plants, it’s difficult to design them, it’s difficult to engineer them, it’s difficult to construct them. At some point, that’s got to be the obvious conclusion to this technology.”

But the equally obvious conclusion is we can no longer live without it. “The reality is, you have to really squint to see how you get to net zero without nuclear,” says Third Way’s Freed. “There’s a lot of wishful thinking, a lot of fingers crossed.”



Why it’s risky for financial firms to rely on mobile device authentication

Source: Finance Derivative

Niall McConachie, regional director (UK & Ireland) at Yubico

Using mobile phones to sign into online services can offer people a sense of security and convenience. However, when their devices are damaged, lost, or stolen, users quickly discover why relying on mobile authentication methods is not the best way to protect their online identities.

Despite this, many financial firms and institutions in the UK continue to encourage their customers and employees to use this form of digital authentication when accessing sensitive data. Cyber attacks are the most cited risk to the UK financial system, so it is important that leaders understand the increased risk they take on through continued use of ineffective authentication and poor cyber hygiene practices.

Limitations of mobile devices and passwords

Aside from being easily lost, stolen, or broken, mobile devices can be limited as authenticators by the user’s location: without phone reception, there may be no way to receive the code needed to authenticate into an account. A user can even be locked out simply because the device’s battery has run out. Yet even without these issues, mobile devices still pose considerable cybersecurity risks.

Indeed, findings from our recent State of Global Enterprise Authentication Survey show that mobile SMS-based authentication (20 percent), push authenticator apps or mobile one-time passcodes (OTPs) (23 percent), and passwords (23 percent) are believed by UK respondents to be the most secure forms of digital authentication. As financial firms use these methods so often, it is understandable that customers and employees would come to this conclusion. However, it is a misconception.

While any form of authentication is better than none, passwords and mobile-based authentication methods – including SMS verification, OTPs, and digital authentication apps – are all vulnerable to many modern cybersecurity threats. These include SIM swapping, phishing, password spraying, man-in-the-middle (MitM) attacks, and ransomware attacks, all of which can lead to data breaches with serious consequences for UK financial organisations.

Improved cyber hygiene practices and training for employees

According to the survey, the primary ways UK employees signed into their business accounts were usernames and passwords (53 percent), mobile SMS-based authentication (24 percent), and push authenticator apps or mobile OTPs (19 percent), indicating that UK employees are not choosing the most secure authentication methods. These practices leave their accounts easy targets for bad actors.

Additionally, it is important to note that no authentication solution can be fully effective against emerging cyber threats if it is used alongside poor cyber hygiene practices, which significantly reduce an organisation’s cyber resiliency against external threats.

Overall, it appears that UK organisations are not properly enforcing best-practice cyber training amongst their internal staff. Findings show that only 42 percent of respondents are required to go through frequent cybersecurity training. The report also revealed significant lapses in employees’ cyber-hygiene practices. For instance, over the previous 12 months, UK respondents admitted to using a work-issued device for personal use (49 percent), allowing their work-issued device to be used by someone else (33 percent), not reporting a phishing attempt (31 percent), having an account reset due to lost or forgotten credentials (58 percent), and using a personal device for work (58 percent).

These poor habits should concern finance firms because if an employee uses a personal device for work, bad actors can compromise that device and use it as a point of access to target the employer. With 73 percent of UK respondents claiming to have experienced a cyber attack in their personal lives within the previous 12 months, this and similar scenarios are highly plausible.

Moreover, the combination of weak authentication methods and poor digital habits makes organisations especially vulnerable to cyber attacks, which can directly target their customers, employees, and third-party partners as well. Therefore, better cyber hygiene practices should be enforced on a regular basis to protect organisations fully and effectively from emerging threats.

Benefits of alternative authentication methods

For finance businesses looking for alternative methods, it is important to note that some forms of multi-factor authentication (MFA) and two-factor authentication (2FA) are more robust than others. For example, some require users to authenticate with either a hardware security key or an identity credential that is unique to the individual user, like a fingerprint. With the help of FIDO protocols – globally recognised standards that use public-key cryptography to deliver stronger authentication – methods like these give users a seamless experience when accessing their digital accounts by removing the need for passwords or mobile devices.
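
At its core, FIDO-style authentication is a public-key challenge-response: the device holds a private key that never leaves it, the service stores only the matching public key, and each login signs a fresh random challenge. The sketch below illustrates the principle in Python with the cryptography package; it is a simplified illustration, not a FIDO2 implementation (real WebAuthn flows add origin binding, attestation, and replay counters):

```python
# Minimal sketch of public-key challenge-response authentication, the core
# idea behind FIDO2/WebAuthn. Not a FIDO2 implementation: origin binding,
# attestation, and replay counters are omitted.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

# Enrollment: the authenticator (e.g. a hardware key) generates a key pair.
# The private key never leaves the device; the server stores the public key.
device_key = Ed25519PrivateKey.generate()
server_stored_public_key = device_key.public_key()

# Login: the server issues a fresh random challenge...
challenge = os.urandom(32)

# ...the device signs it (after a user-presence check, e.g. a touch)...
signature = device_key.sign(challenge)

# ...and the server verifies the signature against the stored public key.
try:
    server_stored_public_key.verify(signature, challenge)
    print("Authenticated: signature matches the registered device.")
except InvalidSignature:
    print("Rejected: signature does not match.")
```

Because the secret never crosses the network and the challenge is fresh each time, there is nothing for a phishing page to capture and replay, which is what makes this class of authentication phishing-resistant.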

The National Cyber Security Centre (NCSC) recommends hardware-based security keys as a phishing-resistant solution against modern cyber attacks. In addition, a growing number of global companies and UK banks have implemented passwordless authentication. Apple, Barclays, Co-operative Bank, Google, HSBC, Microsoft, NatWest, Twitter, and the US Government are just a few reputable organisations that have opted for passwordless authentication.

Customers and staff should not be solely responsible for adjusting their own cybersecurity practices. It is also up to organisations to enhance their digital security by implementing phishing-resistant passwordless solutions. Whether using biometric identifiers or hardware security keys, these solutions are more effective and user-friendly than conventional authentication methods. They also offer robust authentication across multiple devices and accounts, reducing the number of times a user needs to sign in. However, most importantly, implementing business-wide passwordless solutions helps to reinforce an organisation’s security posture and significantly decreases the risk of emerging attacks.

Mobile-based authentication, OTPs, and passwords are some of the most widely used authentication methods but are not the most secure. As the finance sector continues to prioritise passwordless authentication, this will likely change customers’ and employees’ perceptions of what secure authentication truly is. Ultimately, providing users with the most secure authentication possible should be a top priority. With it, financial firms can experience the long-term benefits of improved data security, better user experience, and considerable ROI.





Unlocking the Power of Data: Revolutionising Business Success in the Financial Services Sector

Source: Finance Derivative

Suki Dhuphar, Head of EMEA, Tamr

The financial services (FS) sector operates within an immensely data-abundant landscape. But it’s well known that many organisations in the sector struggle to make data-driven decisions because they lack access to the right data at the right time.

As the sector strives for a data-driven approach, companies focus on democratising data, granting non-technical users the ability to work with and leverage data for informed decision-making. However, dirty data, riddled with errors and inconsistencies, can lead to flawed analytics and decision-making. Siloed data across departments like Marketing, Sales, Operations, or R&D exacerbates this issue. Breaking down these barriers is essential for effective data democratisation and achieving accurate insights for decision-making.

An antidote to dirty, disconnected data

Dirty, disconnected data is not a new problem. But there are new solutions – such as shifting strategy to focus on data products – that are proven to deliver great results. So, what is a data product?

Data products are high-quality, accessible datasets that organisations use to solve business challenges. Data products are comprehensive, clean, and continuously updated. They make data tangible to serve specific purposes defined by consumers and provide value because they are easy to find and use. For example, an investment firm can benefit from data products to gain insights into market trends and attract more capital. These offer a scalable solution for connecting alternative data sources, providing accurate and continuously updated views of portfolio companies. Using machine learning (ML) based technology enables the data product to adapt to new data sources, giving a firm’s partners confidence in their investment decisions.
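
To make the idea concrete, the sketch below shows, in Python, the kind of cleaning and consolidation a customer data product packages up. The records and matching rules are invented for illustration; production platforms rely on ML-based matching rather than exact rules:

```python
# Toy illustration of turning messy, siloed customer records into a single
# clean, deduplicated view: the kind of dataset a "data product" packages.
# Records are invented; production tools use ML-based matching instead of
# the exact rules shown here.
import pandas as pd

crm = pd.DataFrame({
    "name": ["Acme Ltd ", "beta corp"],
    "email": ["ops@acme.com", "info@betacorp.com"],
    "revenue": [120_000, 45_000],
})
billing = pd.DataFrame({
    "name": ["ACME LTD", "Gamma PLC"],
    "email": ["Ops@Acme.com", "hello@gamma.co.uk"],
    "revenue": [118_000, 93_000],
})

combined = pd.concat([crm, billing], ignore_index=True)

# Normalise the matching key so trivially different spellings line up.
combined["email"] = combined["email"].str.strip().str.lower()

# Collapse duplicates: one row per customer, keeping the first-seen name
# and the larger of the conflicting revenue figures.
data_product = (
    combined.groupby("email", as_index=False)
    .agg(name=("name", "first"), revenue=("revenue", "max"))
)
print(data_product)
```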

But, before companies can reap the benefits of data products, the development of a robust data product strategy is a must.

Where to begin?

Prior to embarking on a data product strategy, it is imperative to establish clear-cut objectives that align with your organisation’s overarching business goals. Taking an incremental approach enables you to make a real impact against a specific objective – such as streamlining operations to enhance cost efficiency or reshaping business portfolios to drive growth – by starting with a more manageable goal and then building upon it as the use case is proved. For companies that find themselves uncertain about where to begin their move to data products, tackling your customer data is a good place to start for some quick wins to increase the success of the customer experience programmes.

Getting a good grasp on data

Once an objective is in place, it’s time for an organisation to assess its capabilities for executing the data product strategy. To do this, you need to dig into the nitty-gritty details, like where the data is, how accurate and complete it is, how often it gets updated, and how well it’s integrated across different departments. This will give a solid grasp of the actual quality of the data and help allocate resources more efficiently. At this stage, you should also think about which stakeholders from across the business, from leadership to IT, will need to be involved in the process and how.
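
As an illustration, a few lines of profiling code can answer most of those questions. The sketch below assumes a hypothetical customer file and column names:

```python
# Quick data-quality profile: completeness, freshness, and duplication.
# The file and column names are hypothetical; substitute your own schema.
import pandas as pd

df = pd.read_csv("customers.csv", parse_dates=["last_updated"])

# Completeness: share of non-null values per column.
completeness = 1 - df.isna().mean()

# Freshness: how stale is the most recently updated record?
staleness = pd.Timestamp.now() - df["last_updated"].max()

# Duplication: how many rows repeat the same business key?
duplicate_rate = df.duplicated(subset=["customer_id"]).mean()

print(completeness.round(2))
print(f"Most recent update was {staleness.days} days ago")
print(f"{duplicate_rate:.1%} of rows duplicate an existing customer_id")
```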

Once that’s covered, you can start putting together a skilled team and assigning responsibilities to kick off the creation and management of a comprehensive data platform that spans all relevant departments. This process also helps spot any gaps early on, so you can focus on targeted initiatives.

Identifying the problem you will solve

Now let’s move on to the next step in our data product strategy: identifying a specific problem or challenge that is commonly faced in your organisation. It’s likely that leaders in different departments, like R&D or procurement, encounter obstacles that hinder their objectives and that could be overcome with better insight and information. By defining a clear use case, you will build a real solution to a challenge they are facing rather than a data product for the sake of having data. This will be an impactful case study for your entire organisation to understand the potential benefits of data products and increase appetite for future projects.

Getting buy-in from the business

Once you have identified the problem you want to solve, you need to secure the funding, support, and resources to move the project ahead. To do that, you must present a practical roadmap that shows how you will quickly deliver value. You should also showcase how to improve it over time once the initial use case is proven.

The plan should map how you will measure success effectively with specific indicators (such as KPIs) that are closely tied to business goals. These indicators will give you a benchmark of what success looks like so you can clearly show when you’ve delivered it.

Getting the most out of your data product

Once you’ve got the green light – and the funds – it’s time to put your plan into action by creating a basic version of your data product, also known as a minimum viable data product (MVDP). By starting small and gradually enhancing with each new release, you put yourself in the best position to encourage adoption and (coming back to our iterative approach) to secure more resources and funding down the line.

To make the most of your data product, it’s essential to tap into the knowledge and experience of business partners as they know how to make the most of the data product and integrate it into existing workflows. Additionally, collecting feedback and using it to improve future releases will bring even more value to end users in the business and, in turn, your customers.

Unlocking the power of data (products)

It’s crucial for companies in FS to make the most of the huge amount of data they have at their disposal. It simply doesn’t make sense to leave this data untapped rather than using it to solve real challenges for end users in the business and, in turn, improve the customer experience. By adopting effective strategies for data products, FS organisations can start to maximise the incredible value of their data.



HOW SMALL BUSINESSES CAN FIGHT BACK AGAINST POOR PAYMENT PRACTICES

Source: Finance Derivative

SMEs across the UK are facing a challenging economic environment and late payments pose a severe challenge to maintaining cash flow. Here, Andrea Dunlop, managing director at Access PaySuite, explores the challenges facing small and medium sized businesses, the risks that late payments carry, and what can be done to secure timely payments, in full.

It’s estimated that UK businesses are currently owed more than £23.4bn in outstanding invoices. For all businesses, managing the outward flow of products and services with a steady incoming cash flow is a fine balance – with unexpected disruptions and complications capable of causing catastrophic problems.

Late and delayed payments have been identified as a significant challenge for SMEs – an issue that has grown over recent years. In fact, in its latest report, the Federation of Small Businesses (FSB) stated that the UK is “almost unique in being a place where it is acceptable to pay small businesses late”.

The FSB also states that this “will remain the case without further action” and, as such, has called for government action to put a stop to these damaging trends.


Small businesses form a vital part of the economic ecosystem – in 2022, it was estimated that 99% of UK businesses were SMEs – so poor payment systems not only present a very real threat for individual businesses, but for the UK economy as a whole.

Despite this strong case for urgent action to be taken, changes to legislation can be a slow process and, in the face of ongoing economic pressure, small businesses need more immediate solutions.

Although businesses are at the mercy of their customers and clients, there are a number of actionable steps SMEs can take to increase the rate of prompt and complete payments.

The impact of late payments for SMEs

Research published by the ICAEW in Q4 2022 demonstrates that around half of invoices issued by small businesses are paid late.

More often than not, small businesses operate within a chain of regular suppliers and customers. These chains can include multiple business links, stretching across sectors and regions. As a heavily interwoven ecosystem, if one ‘link’ in the chain is damaged by late payments and unreliable cash flow, the delays can quickly escalate and create a domino effect of complications across the whole system.

With a lack of consistent income, SMEs are more likely to be prevented from paying their overheads and suppliers on time.

As late payments add up and push multiple businesses into a negative cash flow, the problem can continue to snowball.

Simply put, extended periods of unreliable and heavily reduced payments put whole supply chains of companies in very dangerous financial positions – especially as running costs remain high.

Combined, the complexities arising from late payments and the vast scale of the issue demonstrate a clear need for systemic change.

Current government action

At the end of January, the government published a review of the reporting of payment practices first introduced in 2017.

This review stated that the government is committed to “stamp[ing] out the worst kind of poor payment practices within the business community”.

The 2017 Payment Practices and Performance Regulations require all large UK companies to report publicly on their payment policies, practices and performance, to ensure accountability.

Following its review, a new consultation has been launched, seeking the opinion of business owners on current regulations – asking whether this existing policy should extend beyond its current expiry date, 6 April 2024. This consultation is part of a wider examination of payments in the UK.

Delving into issues including the emotional and psychological impact of late payments on small business owners – as well as analysing how banks and technology can help – the government’s review is a welcome development, but SMEs need to take more immediate action to strengthen their payment processes.

What can SMEs do?

With the government consultation concluding at the end of April, the future of the payment landscape in the UK will soon be made clearer – but what actions can SMEs take to immediately strengthen their payment processes?

For many SMEs, payment systems are low down the list of priorities, and the fear of disruption or additional costs can lead many to turn a blind eye to problems with their existing systems. But, with challenges around cash flow increasing, a flexible and comprehensive payment system could be an incredibly worthwhile investment.

Issuing regular invoices takes a lot of time, and when working across different clients with different payment frequencies, invoicing can lead to unnecessary complexities.

Instead, systems that enable customers to set up direct debits ensure payments are completed on a set date, reduce additional paperwork and still allow bespoke schedules for each client or customer to be arranged.

In many SMEs, missed payments can easily get lost in piles of paperwork, and human error can result in problems down the line. When using digital payment systems, should a missed payment occur, automated capabilities ensure the issue is flagged and any outstanding challenges can be resolved in a timely manner.
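
As an illustration of what that automation amounts to, the sketch below scans a made-up invoice ledger and flags anything unpaid past its due date:

```python
# Minimal sketch of automated overdue-invoice flagging, the kind of check a
# digital payment system runs so missed payments can't hide in paperwork.
# The invoice records are made up for illustration.
from datetime import date

invoices = [
    {"id": "INV-001", "due": date(2023, 3, 1), "amount": 4_500.00, "paid": False},
    {"id": "INV-002", "due": date(2023, 4, 15), "amount": 1_200.00, "paid": True},
    {"id": "INV-003", "due": date(2023, 3, 20), "amount": 860.00, "paid": False},
]

today = date(2023, 4, 30)
overdue = [inv for inv in invoices if not inv["paid"] and inv["due"] < today]

for inv in overdue:
    days_late = (today - inv["due"]).days
    print(f"FLAG {inv['id']}: £{inv['amount']:,.2f} is {days_late} days overdue")
```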

With payments and invoicing automatically managed in a centralised database, countless hours that would otherwise be spent on repetitive and laborious administrative work are saved.

As well as reducing the amount of staff time spent managing processes and tracking financial activity, a reliable payment system delivers benefits for customers too, contributing to better service and stronger brand loyalty.

In the coming weeks and months, new government guidance should clarify legislative expectations for businesses regarding payments. But, with smart investment in specialist software solutions, our country’s vital SMEs can take the necessary safeguarding steps to boost payment security and thrive through this tough financial time.

