A New Generation of Nuclear Reactors Could Hold the Key to a Green Future
Source: Time
On a conference-room whiteboard in the heart of Silicon Valley, Jacob DeWitte sketches his startup’s first product. In red marker, it looks like a beer can in a Koozie, stuck with a crazy straw. In real life, it will be about the size of a hot tub, and made from an array of exotic materials, like zirconium and uranium. Under carefully controlled conditions, they will interact to produce heat, which in turn will make electricity—1.5 megawatts’ worth, enough to power a neighborhood or a factory. DeWitte’s little power plant will run for a decade without refueling and, amazingly, will emit no carbon. “It’s a metallic thermal battery,” he says, coyly. But more often DeWitte calls it by another name: a nuclear reactor.
Fission isn’t for the faint of heart. Building a working reactor—even a very small one—requires precise and painstaking efforts of both engineering and paper pushing. Regulations are understandably exhaustive. Fuel is hard to come by—they don’t sell uranium at the Gas-N-Sip. But DeWitte plans to flip the switch on his first reactor around 2023, a mere decade after co-founding his company, Oklo. After that, the company wants to do for neighborhood nukes what Tesla has done for electric cars: use a niche and expensive first version as a stepping stone toward cheaper, bigger, higher-volume products. In Oklo’s case, that means starting with a “microreactor” designed for remote communities, like Alaskan villages, currently dependent on diesel fuel trucked, barged or even flown in at exorbitant expense, then building incrementally larger reactors until its zero-carbon energy source can meaningfully contribute to the global effort to reduce fossil-fuel emissions.
At global climate summits, in the corridors of Congress and at statehouses around the U.S., nuclear power has become the contentious keystone of carbon reduction plans. Everyone knows they need it. But no one is really sure they want it, given its history of accidents. Or even if they can get it in time to reach urgent climate goals, given how long it takes to build. Oklo is one of a growing handful of companies working to solve those problems by putting reactors inside safer, easier-to-build and smaller packages. None of them are quite ready to scale to market-level production, but given the investments being made into the technology right now, along with an increasing realization that we won’t be able to shift away from fossil fuels without nuclear power, it’s a good bet that at least one of them becomes a game changer.
If existing plants are the energy equivalent of a 2-liter soda bottle, with giant, 1,000-megawatt-plus reactors, Oklo’s strategy is to make reactors by the can. The per-megawatt construction costs might be higher, at least at first. But producing units in a factory would give the company a chance to improve its processes and to lower costs. Oklo would pioneer a new model. Nuclear plants need no longer be bet-the-company big, even for giant utilities. Venture capitalists can get behind the potential to scale to a global market. And climate hawks should fawn over a zero-carbon energy option that complements burgeoning supplies of wind and solar power. Unlike today’s plants, which run most efficiently at full blast, making it challenging for them to adapt to a grid increasingly powered by variable sources (not every day is sunny, or windy), the next generation of nuclear technology wants to be more flexible, able to respond quickly to ups and downs in supply and demand.
Engineering these innovations is hard. Oklo’s 30 employees are busy untangling the knots of safety and complexity that sent the cost of building nuclear plants to the stratosphere and all but halted their construction in the U.S. “If this technology was brand-new—like if fission was a recent breakthrough out of a lab, 10 or 15 years ago—we’d be talking about building our 30th reactor,” DeWitte says.
But fission is an old, and fraught, technology, and utility companies are scrambling now to keep their existing gargantuan nuclear plants open. Economically, they struggle to compete with cheap natural gas, along with wind and solar, often subsidized by governments. Yet climate-focused nations like France and the U.K. that had planned to phase out nuclear are instead doubling down. (In October, French President Emmanuel Macron backed off plans to close 14 reactors, and in November, he announced the country would instead start building new ones.) At the U.N. climate summit in Glasgow, the U.S. announced its support for Poland, Kenya, Ukraine, Brazil, Romania and Indonesia to develop their own new nuclear plants—while European negotiators argued over whether nuclear energy counts as “green.” All the while, Democrats and Republicans are (to everyone’s surprise) often aligned on nuclear’s benefits—and, in many cases, putting their power of the purse behind it, both to keep old plants open in the U.S. and to speed up new technologies domestically and overseas.
It makes for a decidedly odd moment in the life of a technology that already altered the course of one century, and now wants to make a difference in another. There are 93 operating nuclear reactors in the U.S.; combined, they supply 20% of U.S. electricity, and 50% of its carbon-free electricity. Nuclear should be a climate solution, satisfying both technical and economic needs. But while the existing plants finally operate with enviable efficiency (after 40 years of working out the kinks), the next generation of designs is still a decade away from being more than a niche player in our energy supply. Everyone wants a steady supply of electricity, without relying on coal. Nuclear is paradoxically right at hand, and out of reach.
For that to change, “new nuclear” has to emerge before the old nuclear plants recede. It has to keep pace with technological improvements in other realms, like long-term energy storage, where each incremental improvement increases the potential for renewables to supply more of our electricity. It has to be cheaper than carbon-capture technologies, which would allow flexible gas plants to operate without climate impacts (but are still too expensive to build at scale). And finally it has to arrive before we give up—before the spectre of climate catastrophe creates a collective “doomerism,” and we stop trying to change.
Not everyone thinks nuclear can reinvent itself in time. “When it comes to averting the imminent effects of climate change, even the cutting edge of nuclear technology will prove to be too little, too late,” predicts Allison Macfarlane, former chair of the U.S. Nuclear Regulatory Commission (NRC)—the government agency singularly responsible for permitting new plants. Can a stable, safe, known source of energy rise to the occasion, or will nuclear be cast aside as too expensive, too risky and too late?
Trying Again
Nuclear began in a rush. In 1942, in the depths of World War II, the U.S. began the Manhattan Project, the vast effort to develop atomic weapons. It employed 130,000 people at secret sites across the country, the most famous of which was Los Alamos Laboratory in New Mexico, where Robert Oppenheimer led the design and construction of the first atomic bombs. DeWitte, 36, grew up in nearby Albuquerque. Even as a child of the ’90s, he was steeped in the state’s nuclear history, and preoccupied with the terrifying success of its engineering and the power of its materials. “It’s so incredibly energy dense,” says DeWitte. “A golf ball of uranium would power your entire life!”
DeWitte has taken that bromide almost literally. He co-founded Oklo in 2013 with Caroline Cochran, while both were graduate students in nuclear engineering at the Massachusetts Institute of Technology. When they arrived in Cambridge, Mass., in 2007 and 2008, the nuclear industry was on a precipice. Then presidential candidate Barack Obama espoused a new eagerness to address climate change by reducing carbon emissions—which at the time meant less coal, and more nuclear. (Wind and solar energy were still a blip.) It was an easy sell. In competitive power markets, nuclear plants were profitable. The 104 operating reactors in the U.S. at the time were running smoothly. There hadn’t been a major accident since Chernobyl, in 1986.
The industry excitedly prepared for a “nuclear renaissance.” At the peak of interest, the NRC had applications for 30 new reactors in the U.S. Only two would be built. The cheap natural gas of the fracking boom began to drive down electricity prices, erasing nuclear’s profits. Newly subsidized renewables, like wind and solar, added even more electricity generation, further saturating the markets. And when, on March 11, 2011, an earthquake and subsequent tsunami rolled over Japan’s Fukushima Daiichi nuclear power plant, leading to the meltdown of three of its reactors and the evacuation of 154,000 people, the final nail was driven into the industry’s coffin. Not only would there be no renaissance in the U.S., but the existing plants had to justify their safety. Japan shut down 46 of its 50 operating reactors. Germany closed 11 of its 17. The U.S. fleet held on politically, but struggled to compete economically. Since Fukushima, 12 U.S. reactors have begun decommissioning, with three more planned.
At MIT, Cochran and DeWitte—who were teaching assistants together for a nuclear reactor class in 2009, and married in 2011—were frustrated by the setback. “It was like, There’re all these cool technologies out there. Let’s do something with it,” says Cochran. But the nuclear industry has never been an easy place for innovators. In the U.S., its operational ranks have long been dominated by “ring knockers”—the officer corps of the Navy’s nuclear fleet, properly trained in the way things are done, but less interested in doing them differently. Governments had always kept a tight grip on nuclear; for decades, the technology was kept under wraps. The personal computing revolution, and then the wild rise of the Internet, further drained engineering talent. From DeWitte and Cochran’s perspective, the nuclear-energy industry had already ossified by the time Fukushima and fracking brought things to a halt. “You eventually got to the point where it’s like, we have to try something different,” DeWitte says.
He and Cochran began to discreetly convene their MIT classmates for brainstorming sessions. Nuclear folks tend to be dogmatic about their favorite method of splitting atoms, but they stayed agnostic. “I didn’t start thinking we had to do everything differently,” says DeWitte. Rather, they had a hunch that marginal improvements might yield major results, if they could be spread across all of the industry’s usual snags—whether regulatory approaches, business models, the engineering of the systems themselves, or the challenge of actually constructing them.
In 2013, Cochran and DeWitte began to rent out the spare room in their Cambridge home on Airbnb. Their first guests were a pair of teachers from Alaska. The remote communities they taught in were dependent on diesel fuel for electricity, brought in at enormous cost. That energy scarcity created an opportunity: in such an environment, even a very expensive nuclear reactor might still be cheaper than the current system. The duo targeted a price of $100 per megawatt hour, more than double typical energy costs. They imagined using this high-cost early market as a pathway to scale their manufacturing. They realized that to make it work economically, they wouldn’t have to reinvent the reactor technology, only the production and sales processes. They decided to own their reactors and supply electricity, rather than supply the reactors themselves—operating more like today’s solar or wind developers. “It’s less about the technology being different,” says DeWitte, “than it is about approaching the entire process differently.”
That maverick streak raised eyebrows among nuclear veterans—and cash from Silicon Valley venture capitalists, including a boost from Y Combinator, where companies like Airbnb and Instacart got their start. In the eight years since, Oklo has distinguished itself from the competition by thinking smaller and moving faster. There are others competing in this space: NuScale, based in Oregon, is working to commercialize a reactor similar in design to existing nuclear plants, but constructed in 60-megawatt modules. TerraPower, founded by Bill Gates in 2006, has plans for a novel technology that uses its heat for energy storage, rather than to spin a turbine, which makes it an even more flexible option for electric grids that increasingly need that pliability. And X-energy, a Maryland-based firm that has received substantial funding from the U.S. Department of Energy, is developing 80-megawatt reactors that can also be grouped into “four-packs,” bringing them closer in size to today’s plants. Yet all are still years—and a billion dollars—away from their first installations. Oklo brags that its NRC application is one-twentieth the length of NuScale’s, and that its proposal cost one-hundredth as much to develop. (Oklo’s proposed reactor would produce one-fortieth the power of NuScale’s.) The NRC accepted Oklo’s application for review in March 2020, and regulations guarantee that process will be complete within three years. Oklo plans to power on around 2023, at a site at the Idaho National Laboratory, one of the U.S.’s oldest nuclear-research sites and so already approved for such efforts. Then comes the hard part: doing it again and again, booking enough orders to justify building a factory to make many more reactors, driving costs down, and hoping politicians and activists worry more about the menace of greenhouse gases than the hazards of splitting atoms.
Nuclear-industry veterans remain wary. They have seen this all before. Westinghouse’s AP1000 reactor, first approved by the NRC in 2005, was touted as the flagship technology of Obama’s nuclear renaissance. It promised to be safer and simpler, using gravity rather than electricity-driven pumps to cool the reactor in case of an emergency—in theory, this would mitigate the danger of power outages, like the one that led to the Fukushima disaster. Its components could be constructed at a centralized location, and then shipped in giant pieces for assembly.
But all that was easier said than done. Westinghouse and its contractors struggled to manufacture the components according to nuclear’s mega-exacting requirements, and in the end, only one AP1000 project in the U.S. actually happened: the Vogtle Electric Generating Plant in Georgia. Approved in 2012, its two reactors were expected at the time to cost $14 billion and be completed in 2016 and 2017, but costs have since ballooned to $25 billion. The first will open, finally, next year.
Oklo and its competitors insist things are different this time, but they have yet to prove it. “Because we haven’t built one of them yet, we can promise that they’re not going to be a problem to build,” quips Gregory Jaczko, a former NRC chair who has since become the technology’s most biting critic. “So there’s no evidence of our failure.”
The Challenge
The cooling tower of the Hope Creek nuclear plant rises 50 stories above Artificial Island, New Jersey, built up on the marshy edge of the Delaware River. The three reactors here—one belonging to Hope Creek, and two run by the Salem Generating Station, which shares the site—generate an astonishing 3,465 megawatts of electricity, or roughly 40% of New Jersey’s total supply. Construction began in 1968, and was completed in 1986. Their closest human neighbors are across the river in Delaware. Otherwise the plant is surrounded by protected marshlands, pocked with radiation sensors and the occasional guard booth. Of the 1,500 people working here, around 100 are licensed reactor operators—a special designation given by the NRC, and held by fewer than 4,000 people in the country.
Among the newest in their ranks is Judy Rodriguez, an Elizabeth, N.J., native and another MIT grad. “Do I have your permission to enter?” she asks the operator on duty in the control room for the Salem Two reactor, which came online in 1981 and is capable of generating 1,200 megawatts of power. The operator opens a retractable belt barrier, like at an airport, and we step across a thick red line in the carpet. A horseshoe-shaped gray cabinet holds hundreds of buttons, glowing indicators and blinking lights, but a red LED counter at the center of the wall shows the most important number in the room: 944 megawatts, the amount of power the Salem Two reactor was generating that afternoon in September. Beside it is a circular pattern of square indicator lights representing the uranium fuel assemblies inside the core, deep within the concrete-domed containment building a couple hundred yards away. Salem Two has 764 of these assemblies; each is about 6 in. square and 15 ft. tall. They contain the source of the reactor’s energy, material that is among the most guarded and controlled on earth. To make sure no one working there forgets that fact, a phrase is painted on walls all around the plant: “Line of Sight to the Reactor.”
As the epitome of critical infrastructure, this station has been buffeted by the crises the U.S. has suffered in the past few decades. After 9/11, the three reactors here absorbed nearly $100 million in security upgrades. Everyone entering the plant passes through metal and explosives detectors, and radiation detectors on the way out. Walking between the buildings entails crossing a concrete expanse beneath high bullet-resistant enclosures (BREs). The plant has a guard corps with more members than any police force in New Jersey besides the state police, and federal NRC rules mean that the guards don’t have to abide by state limitations on automatic weapons.
The scale and complexity of the operation are staggering—and expensive. “The place you’re sitting at right now costs us about $1.5 million to $2 million a day to run,” says Ralph Izzo, president and CEO of PSEG, New Jersey’s public utility company, which owns and operates the plants. “If those plants aren’t getting that in market, that’s a rough pill to swallow.” In 2019, the New Jersey Board of Public Utilities agreed to $300 million in annual subsidies to keep the three reactors running. The justification is simple: if the state wants to meet its carbon-reduction goals, keeping the plants online is essential, given that they supply 90% of the state’s zero-carbon energy. In September, the Illinois legislature came to the same conclusion as New Jersey, approving almost $700 million over five years to keep two existing nuclear plants open. The bipartisan infrastructure bill includes $6 billion in additional support (along with nearly $10 billion for development of future reactors). Even more is expected in the broader Build Back Better bill.
These subsidies—framed in both states as “carbon mitigation credits”—acknowledge the reality that nuclear plants cannot, on their own terms, compete economically with natural gas or coal. “There has always been a perception of this technology that never was matched by reality,” says Jaczko. The subsidies also show how climate change has altered the equation, but not decisively enough to guarantee nuclear’s future. Lawmakers and energy companies are coming to terms with nuclear’s new identity as clean power, deserving of the same economic incentives as solar and wind. Operators of existing plants want to be compensated for producing enormous amounts of carbon-free energy, according to Josh Freed, of Third Way, a Washington, D.C., think tank that champions nuclear power as a climate solution. “There’s an inherent benefit to providing that, and it should be paid for.” For the moment, that has brought some assurance to U.S. nuclear operators of their future prospects. “A megawatt of zero-carbon electricity that’s leaving the grid is no different from a new megawatt of zero-carbon electricity coming onto the grid,” says Kathleen Barrón, senior vice president of government and regulatory affairs and public policy at Exelon, the nation’s largest operator of nuclear reactors.
Globally, nations are struggling with the same equation. Germany and Japan both shuttered many of their plants after the Fukushima disaster, and saw their progress at reducing carbon emissions suffer. Germany has not built new renewables fast enough to meet its electricity needs, and has made up the gap with dirty coal and natural gas imported from Russia. Japan, under international pressure to move more aggressively to meet its carbon targets, announced in October that it would work to restart its reactors. “Nuclear power is indispensable when we think about how we can ensure a stable and affordable electricity supply while addressing climate change,” said Koichi Hagiuda, Japan’s minister of economy, trade and industry, at an October news conference. China is building more new nuclear reactors than any other country, with plans for as many as 150 by the 2030s, at an estimated cost of nearly half a trillion dollars. Long before that, in this decade, China will overtake the U.S. as the operator of the world’s largest nuclear-energy system.
The future won’t be decided by choosing between nuclear or solar power. Rather, it’s a technically and economically complicated balance of adding as much renewable energy as possible while ensuring a steady supply of electricity. At the moment, that’s easy. “There is enough opportunity to build renewables before achieving penetration levels that we’re worried about the grid having stability,” says PSEG’s Izzo. New Jersey, for its part, is aiming to add 7,500 megawatts of offshore wind by 2035—or about the equivalent of six new Salem-sized reactors. The technology to do that is readily at hand—Kansas alone has about that much wind power installed already.
The challenge comes when renewables make up a greater proportion of the electricity supply—or when the wind stops blowing. The need for “firm” generation becomes more crucial. “You cannot run our grid solely on the basis of renewable supply,” says Izzo. “One needs an interseasonal storage solution, and no one has come up with an economic interseasonal storage solution.”
Existing nuclear’s best pitch—aside from the very fact it exists already—is its “capacity factor,” the industry term for how often a plant meets its full energy-making potential. For decades, nuclear plants struggled with outages and long maintenance periods. Today, improvements in management and technology make them more likely to run continuously—or “breaker to breaker”—between planned refuelings, which usually occur every 18 months and take about a month. At Salem and Hope Creek, PSEG hangs banners in the hallways to celebrate each new record run without a maintenance breakdown. That improvement stretches across the industry. “If you took our performance back in the mid-’70s, and then look at our performance today, it’s equivalent to having built 30 new reactors,” says Maria Korsnick, president and CEO of the Nuclear Energy Institute, the industry’s main lobbying organization. That improved reliability has become nuclear’s major calling card today.
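To put rough numbers on the term (an illustration built from the refueling cadence described above, not figures from the article): capacity factor is simply the energy a plant actually produced divided by what it would have produced running flat out. A reactor that runs breaker to breaker, pausing only for a month-long refueling every 18 months, lands in the mid-90s.

```python
# Back-of-the-envelope capacity factor for a breaker-to-breaker reactor.
# Assumed numbers: an 18-month run at full power, then a 1-month refueling outage.
rated_mw = 1200                       # roughly a Salem-sized unit
run_months, outage_months = 18, 1
hours_per_month = 730                 # average month length in hours

actual_mwh = rated_mw * run_months * hours_per_month
ideal_mwh = rated_mw * (run_months + outage_months) * hours_per_month
print(f"capacity factor = {actual_mwh / ideal_mwh:.1%}")   # 94.7%
```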
Over the next 20 years, nuclear plants will need to develop new tricks. “One of the new words in our vocabulary is flexibility,” says Marilyn Kray, vice president of nuclear strategy and development at Exelon, which operates 21 reactors. “Flexibility not only in the existing plants, but in the designs of the emerging ones, to make them even more flexible and adaptable to complement renewables.” Smaller plants can adapt more easily to the grid, but they can also serve new customers, like providing energy directly to factories, steel mills or desalination plants.
Bringing those small plants into operation could be worth it, but it won’t be easy. “You can’t just excuse away the thing that’s at the center of all of it, which is it’s just a hard technology to build,” says Jaczko, the former NRC chair. “It’s difficult to make these plants, it’s difficult to design them, it’s difficult to engineer them, it’s difficult to construct them. At some point, that’s got to be the obvious conclusion to this technology.”
But the equally obvious conclusion is we can no longer live without it. “The reality is, you have to really squint to see how you get to net zero without nuclear,” says Third Way’s Freed. “There’s a lot of wishful thinking, a lot of fingers crossed.”
Why financial institutions must prioritise contact data quality if serious about fraud prevention
Source: Finance Derivative
By Barley Laing, the UK Managing Director at Melissa
According to Nasdaq’s 2024 Global Financial Crime Report, $3.1 trillion of illicit funds flowed through the global financial system in 2023.
As a result, it’s not surprising that most in financial services are investing heavily in advanced ID verification technology to protect themselves from fraud and meet Know Your Customer (KYC) and Anti-Money Laundering (AML) regulatory standards.
However, to bolster their ID verification efforts, they need to do more, and the best way to do so is by improving customer contact data quality from the outset.
Why is contact data quality so important?
In our experience, the quality of contact data is key to the effectiveness of ID processes, influencing everything from end-to-end fraud prevention to simple ID checks. Get it right, and more advanced and costly techniques, like biometrics and liveness authentication, may not be necessary.
When a customer’s contact information, such as name, address, email and phone number, is accurate, the verification process becomes more reliable. With this data, ID verification technology can confidently cross-reference the provided information against official databases or other authoritative sources without discrepancies that could lead to false positives or negatives.
A big issue is that fraudsters often exploit inaccuracies in contact data to create false identities or manipulate existing ones. By maintaining clean and accurate contact data, ID verification systems can more effectively detect suspicious activity and prevent fraud. For example, discrepancies in a user’s phone or email, or an address linked to multiple identities, could serve as a red flag for additional scrutiny. This basic capability is more important than ever as identity fraud becomes increasingly sophisticated.
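To make that red-flag idea concrete, here is a minimal sketch with an invented data model; a production system would run the same check against a verified customer database:

```python
from collections import defaultdict

# Each record: (identity_id, normalised_address). This assumes addresses have
# already been standardised, e.g. by an address-verification service.
records = [
    ("id-001", "12 HIGH ST, LONDON, SW1A 1AA"),
    ("id-002", "40 ELM ROAD, LEEDS, LS1 4DT"),
    ("id-003", "12 HIGH ST, LONDON, SW1A 1AA"),   # same address as id-001
]

identities_by_address = defaultdict(set)
for identity, address in records:
    identities_by_address[address].add(identity)

# An address linked to multiple identities is a red flag for extra scrutiny.
for address, ids in identities_by_address.items():
    if len(ids) > 1:
        print(f"review needed: {address} shared by {sorted(ids)}")
```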
Address verification is the foundation of contact data quality
Address verification – having a consistently accurate, standardised address – is widely recognised as the cornerstone of contact data quality. Once you have access to up-to-date customer addresses, it becomes much easier to match and verify identities across multiple sources.
Therefore, verifying the accuracy and legitimacy of an individual’s address should be the first step in any identity-related process, with any discrepancy between a claimed address and official records highlighting a potential fraudster.
By catching these inconsistencies early, ID verification technology can help mitigate risk, ensuring only legitimate users are granted access to services and protecting both the business and its customers from fraud.
Address verification also plays an important role in regulatory compliance, by ensuring that the address information provided meets KYC and AML regulatory standards.
Phone and email verification
As I’ve already touched on, it’s not all about having an accurate address: phone and email verification are also vital parts of a comprehensive ID verification process, and therefore of fraud prevention, particularly when it comes to helping organisations identify and mitigate possible fraudulent activity early on. Verifying all three contact channels together enhances security by filtering out fake or high-risk contact information, improving the accuracy of the ID verification process.
Email verification involves analysing various factors such as the age and history of the email address, the domain and syntax, and whether the email is temporary. After all, new and poorly formatted email addresses are often tell-tale signs of fraudsters. Furthermore, the association of a single email with multiple accounts could highlight criminal activity. It’s only by checking if an email address exists and works, then examining those elements I’ve already mentioned, that organisations can identify possible high-risk indicators.
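As a minimal sketch of those email checks (the disposable-domain list here is invented, and real services additionally probe deliverability and domain age/history):

```python
import re

DISPOSABLE_DOMAINS = {"mailinator.com", "tempmail.example"}   # hypothetical list
EMAIL_SYNTAX = re.compile(r"^[\w.+-]+@[\w-]+(\.[\w-]+)+$")

def email_risk_flags(email: str, accounts_per_email: dict[str, int]) -> list[str]:
    """Return the high-risk indicators described above for one address."""
    flags = []
    if not EMAIL_SYNTAX.match(email):
        flags.append("malformed syntax")
    else:
        domain = email.rsplit("@", 1)[-1].lower()
        if domain in DISPOSABLE_DOMAINS:
            flags.append("temporary/disposable domain")
    # A single email associated with multiple accounts can signal criminal activity.
    if accounts_per_email.get(email, 0) > 1:
        flags.append("linked to multiple accounts")
    return flags

print(email_risk_flags("new.user@mailinator.com", {"new.user@mailinator.com": 3}))
```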
Phone verification is equally important in fraud detection. By verifying the type and carrier of the phone number, organisations can identify high-risk numbers, such as those associated with VoIP services, which are commonly used in fraudulent activities.
Checking the validity, activity and geolocation of a phone number also ensures it’s not only functional, but consistent with the user’s claimed location. And, as with email, a single phone number linked to multiple accounts can indicate fraudulent behaviour.
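The number-level parts of those checks can be sketched with the open-source phonenumbers library; line activity and carrier-level lookups need a commercial service, so they appear only as a comment:

```python
import phonenumbers

def phone_risk_flags(raw_number: str, claimed_country: str) -> list[str]:
    """Flag validity, type and region mismatches for one phone number."""
    try:
        number = phonenumbers.parse(raw_number, claimed_country)
    except phonenumbers.NumberParseException:
        return ["unparseable number"]
    flags = []
    if not phonenumbers.is_valid_number(number):
        flags.append("invalid number")
    if phonenumbers.number_type(number) == phonenumbers.PhoneNumberType.VOIP:
        flags.append("VoIP number")   # commonly used in fraud, as noted above
    if phonenumbers.region_code_for_number(number) != claimed_country:
        flags.append("number region differs from claimed location")
    # Line activity (is the number live?) requires a carrier/HLR lookup service,
    # which is out of scope for this sketch.
    return flags

print(phone_risk_flags("+44 20 7946 0958", "GB"))   # clean number: no flags
```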
Deliver contact data accuracy with autocomplete / lookup tools
The best way to obtain accurate customer contact data is to use autocomplete or lookup services.
With an address autocomplete tool, it’s possible to deliver accurate address data in real time by offering a properly formatted, correct address at the onboarding stage, as the user starts to type theirs. Tools like these matter because around 20 per cent of addresses entered online contain errors, including spelling mistakes, wrong house numbers and incorrect postcodes, as well as incorrect email addresses and phone numbers, typically caused by typing mistakes. Another benefit is that the number of keystrokes required to enter an address is cut by up to 81 per cent, which speeds up onboarding and improves the whole experience.
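At the integration level, the flow looks like the sketch below; the endpoint and response shape are invented placeholders rather than any particular vendor’s API:

```python
import requests

AUTOCOMPLETE_URL = "https://api.example.com/address/autocomplete"   # hypothetical

def suggest_addresses(partial_input: str, country: str = "GB") -> list[str]:
    """Return standardised candidate addresses as the user types."""
    response = requests.get(
        AUTOCOMPLETE_URL,
        params={"q": partial_input, "country": country},
        timeout=5,
    )
    response.raise_for_status()
    return [item["formatted"] for item in response.json()["suggestions"]]

# The front end would call this on each keystroke after the first few characters:
# suggest_addresses("10 Downing St")
```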
Similar technology can be used to deliver first point of contact verification across email and phone, so these important contact datasets can also be verified in real-time.
In summary
The success of ID verification technology, and therefore fraud prevention, hinges on the accuracy and quality of customer contact data. Having such data not only enhances fraud detection, but improves the user experience and operational efficiency. Financial institutions must make sure that data verification tools are used across address, email and phone, alongside their ID verification technology.
Fortifying Email Security Beyond Microsoft
By Oliver Paterson, Director of Product Management, VIPRE Security Group
Most organisations today are Microsoft software houses. Microsoft 365 is the go-to productivity suite, offering comprehensive tools, flexible licensing, and built-in security features. Employees live and breathe in Outlook, and so many different technologies seamlessly integrate with this indispensable communication tool to deliver productivity gains to business professionals.
However, email-borne cyber threats continue to surge. Malware delivered via email is increasing exponentially. Phishing emails carrying .eml attachments, which often get overlooked by users and filters, are growing. Cybercriminals are turning to email scams alongside phishing emails, and with the arrival of generative AI technologies, users are finding it increasingly challenging to spot these “expertly” written, persuasive emails.
The reason for this growth in email-led attacks? Cybercriminals are exploiting the ubiquity of Microsoft – and indeed our trust in the software. It is no wonder that today Microsoft is the most spoofed brand in phishing URLs.
Microsoft, a software powerhouse, but not an email specialist
Microsoft is undeniably a technology powerhouse, but its primary specialty isn’t email security. The company has historically centered on infrastructure, operating systems, and cloud services; email security is a small part of its vast ecosystem. For example, while the company offers features like SafeLinks and SafeAttachments to protect against phishing scams, these are often limited to the priciest licenses. As a result, many organisations aren’t able to benefit from the depth of functionality that is needed for robust email protection.
The shortcomings of Microsoft’s security tiers
Microsoft offers a range of security packages for its Microsoft 365 and Office 365 suites, from E1 and E3 to the premium E5. While this tiered approach allows organisations to tailor licenses to employee roles, it also introduces vulnerabilities. Higher-tier subscriptions like E5 provide advanced security, but they’re costly. Lower-tier licenses often lack critical protections against impersonation and zero-day threats—gaps that cybercriminals eagerly exploit.
Furthermore, Microsoft’s user caps (e.g., 300 users on Business Premium) can sometimes lead organisations to make risky compromises in pursuit of cost savings. This mix-and-match strategy can result in blind spots, as lower-tier subscriptions typically lack advanced threat visibility tools, hampering investigation and response times.
Configuration conundrums
The Microsoft security portal, while comprehensive, is also complex. Take Link Protection (aka Microsoft SafeLinks) as an example. This feature needs enabling in multiple locations, and with Microsoft’s routine updates, these settings can be moved, altered, or even disabled by default. Such inadvertent misconfigurations not only pose security risks but also burden IT teams with constant vigilance and reconfiguration.
Static intelligence versus real-time threats
Microsoft’s reliance on third-party security feeds means its threat intelligence is often outdated. The company’s vast and complex platform requires time-consuming updates, and with email security being just one part of its portfolio, critical updates may not always be prioritised. A delay of even a day or two is all a zero-day attack needs to succeed.
A layered approach to email security
So what can organisations do? In an era where a single email can cripple a business, firms need to bolster Microsoft 365’s standard security. By understanding its limitations and layering on specialised protection, organisations can fortify their email defences with additional, advanced security capabilities, without breaking the bank. Given the relentless onslaught of threat actors, such caution is essential.
Capabilities such as Link Isolation and Sandboxing are vital today to protect against zero-day threats. Link Isolation renders malicious URLs harmless, while Sandboxing automatically isolates suspicious files in a virtual environment for safe analysis. These methods provide real-time monitoring and intelligence, enabling proactive defence.
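To illustrate the first of those capabilities: at its core, link isolation rewrites every URL in an inbound message so that a click opens through a remote, disposable browser session rather than on the user’s device. This is a minimal sketch, with a made-up proxy address; vendors each have their own rewriting scheme:

```python
import re
from urllib.parse import quote

ISOLATION_PROXY = "https://isolate.example.com/open?url="   # hypothetical service
URL_PATTERN = re.compile(r'https?://[^\s<>"]+')

def rewrite_links(message_body: str) -> str:
    """Route every URL in a message through the isolation proxy."""
    return URL_PATTERN.sub(
        lambda match: ISOLATION_PROXY + quote(match.group(0), safe=""),
        message_body,
    )

print(rewrite_links("Invoice ready: http://suspicious.example/pay-now"))
```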
No matter how advanced technology gets, it alone can’t solve everything. User awareness is key, and “in-the-moment” training trumps the typical periodic sessions for cybersecurity education. When users are immediately informed why an email or attachment was blocked, along with the telltale signs of malice, the lesson is more likely to stick.
Many organisations, especially smaller and growing firms, can’t afford top-tier Microsoft licenses for all employees, or indeed maintain in-house IT teams to address the gaps in security capabilities. Partnering with third-party security service providers across different aspects of the function is a viable option, as no single software or platform can provide all the security techniques and capabilities. This approach is not only more cost-effective but also provides the technological expertise needed for protection in today’s rapidly evolving threat landscape. Reducing reliance on a single security provider is an astute approach to minimising business risk.
The Impact of AI in the Fintech Industry: Enhancing the BNPL Experience
by Nada Ali Redha, Founder of PLIM Finance
Artificial Intelligence (AI) has transformed countless industries, and fintech is no exception. The evolution of AI technology is revolutionising how financial services operate, particularly in the Buy Now, Pay Later (BNPL) space. As the Founder and CEO of PLIM Finance—a BNPL service that specialises in the medical aesthetics industry—I have witnessed firsthand how AI can be leveraged to enhance both user experience and operational efficiency.
In the BNPL sector, AI and machine learning are essential tools for understanding and predicting consumer behaviour. BNPL providers often face the high-risk challenge of defaults, where consumers fail to make their scheduled payments. This is a critical issue for any BNPL provider, as defaults can impact the company’s profitability and reputation.
At PLIM Finance, we use AI-driven tools to manage defaults and failed payments. The power of AI in this context lies in its ability to learn from historical data and predict payment failures with remarkable accuracy. By analysing patterns in consumer spending, repayment behaviours, and other relevant factors, AI systems can forecast which payments are most likely to default. This predictive capability allows us to take proactive measures to manage and reduce defaults, safeguarding both our customers’ financial health and our own.
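The article does not disclose PLIM Finance’s actual model, but the underlying idea can be sketched with a generic classifier on synthetic data; the feature names in the comments are invented for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-ins for historical customer features, e.g.
# [instalments_paid, missed_payments, avg_balance, days_since_signup]
X = rng.random((1000, 4))
y = (rng.random(1000) < 0.1).astype(int)   # 1 = the scheduled payment defaulted

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Score upcoming payments; high probabilities trigger proactive measures
# (reminders, rescheduling) before the payment actually fails.
default_risk = model.predict_proba(X_test)[:, 1]
print(f"flagged for outreach: {(default_risk > 0.2).sum()} payments")
```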
While we do not currently use AI to assess creditworthiness at PLIM Finance, AI’s potential in real-time risk assessment is unquestionable. Traditional credit assessment methods rely on static data, such as credit scores and income statements, which may not always reflect a consumer’s current financial situation. AI, however, can offer a more dynamic and holistic approach.
AI-driven systems can continuously analyse a variety of data sources, including transaction histories, spending patterns, and even social behaviours, to build a more comprehensive risk profile for each customer. This enables BNPL providers to make more informed lending decisions, tailoring financing options that align with each user’s ability to repay. Although PLIM has yet to implement AI in creditworthiness assessment, we recognise its potential to improve decision-making processes over traditional methods.
AI has a crucial role in combating fraud within the financial services sector, including BNPL platforms. Fraud detection is a multi-faceted challenge that requires constant vigilance and real-time analysis. AI is uniquely equipped to tackle this problem due to its capacity for processing vast amounts of data quickly and identifying suspicious patterns or anomalies that could indicate fraudulent activity.
At PLIM Finance, we leverage AI’s ability to apply collective data learning to make real-time decisions, thus reducing the likelihood of fraudulent activities going unnoticed. For instance, AI can detect unusual spending patterns or behaviours that deviate from a user’s normal financial activity, triggering alerts for further investigation. This proactive approach has proven to be highly effective in minimising financial losses and ensuring a safer environment for our users.
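A minimal sketch of that kind of anomaly screening, using an isolation forest over invented per-transaction features (amount, hour of day, distance from the user’s usual location):

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic history of one user's normal transactions:
# columns = [amount, hour_of_day, km_from_home]
history = rng.normal(loc=[60.0, 14.0, 5.0], scale=[25.0, 4.0, 3.0], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(history)

new_txn = np.array([[950.0, 3.0, 400.0]])   # large amount, 3 a.m., far from home
if detector.predict(new_txn)[0] == -1:      # -1 = anomaly
    print("deviates from normal activity: flag for investigation")
```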
One of the most impactful benefits of AI in the BNPL space is the enhancement of customer engagement and satisfaction. AI allows companies to offer personalised, tailor-made services that resonate with each consumer’s specific needs. In the context of PLIM Finance, AI helps us recommend financing options based on individual preferences and past behaviours, streamlining the user’s journey.
Higher customer satisfaction often translates into increased loyalty and trust in the brand. By utilising AI to provide relevant recommendations and support, we can meet our customers where they are in their financial journey, helping them make informed decisions. This, in turn, creates a positive user experience that distinguishes our services from those of traditional lending institutions.
Despite its numerous benefits, implementing AI in BNPL services is not without challenges, especially concerning data privacy, algorithmic fairness, and transparency. One of the primary concerns in any AI application is bias in the data. AI systems learn from historical data, which may not be entirely representative of the diverse range of consumers who use BNPL services. Until we can source data from a wide variety of demographic and socioeconomic backgrounds, there is a risk that AI-driven decisions could inadvertently favour certain groups over others.
Transparency in AI decision-making is another ethical consideration. Customers need to trust that their data is being used responsibly and that AI algorithms are making fair, unbiased lending decisions. To address these concerns, it is crucial to maintain transparency about how AI models are built, what data they use, and how decisions are made. Additionally, complying with data privacy regulations, such as the General Data Protection Regulation (GDPR) in Europe, is essential to protect consumer rights.
AI’s role in the BNPL industry will continue to evolve as technology advances and more data becomes available. At PLIM Finance, we are excited about the future possibilities that AI presents, from more accurate risk assessment to enhancing customer satisfaction. By continuously improving our AI-driven tools and addressing the ethical challenges associated with their use, we aim to create a more inclusive, secure, and user-friendly BNPL experience.
In conclusion, the impact of AI in the fintech industry, particularly in the BNPL space, is profound. It offers solutions to key challenges, including managing defaults, fraud detection, and customer engagement, all while providing an opportunity to enhance the overall user experience. However, as we embrace these technological advancements, it is equally important to navigate the ethical concerns thoughtfully, ensuring that AI serves as a tool for positive financial inclusion.