A journey into the heart of sustainable practices

By Rosemary Thomas, Senior Technical Researcher, AI Labs, Version 1

Artificial Intelligence is a transformative force that is reshaping our daily lives. It serves as an instrument of change, driving innovation across various sectors by automating tasks, providing insightful data analysis, and enabling new forms of interaction. AI is fostering a new era of efficiency, productivity, and creativity.

More importantly, though, through transparency, ethical AI practices, and robust privacy safeguards, AI can help to strengthen our trust in technology and its role in our daily lives. It is a catalyst for changing how society perceives sustainability, helping us predict and work towards a more sustainable, ethical future.

Making a difference with AI for good

‘AI for good’ pertains to the use of AI technologies to help solve specific societal challenges and contribute towards making people’s lives better. It leverages the strength of AI to address issues like economic hardship, physical and mental wellbeing, academic achievement, and the preservation of nature.

For businesses, ‘AI for good’ can mean using AI to contribute towards environmental, social, and governance (ESG) goals. Used correctly, AI can help to create sustainable strategies, powering solutions that deliver greater advantage to society. It can also help with ESG reporting, which has become a highly time-consuming process involving data collection, the use of multiple frameworks, rapidly changing disclosure requirements, the integration of different models, reporting, and data analysis. By adding AI capabilities into this process, businesses can streamline their operations, improve data accuracy, and increase confidence in stakeholder engagement.

A recent example of an ‘AI for good’ application is TNMOC Mate, designed for The National Museum of Computing. The app tailors the experience to each guest, meaning neurodiverse and non-English-speaking individuals, as well as young children, can engage with the museum’s exhibits equally. It is a prime example of AI delivering societal benefit: by using generative AI to present complex exhibit information in an easily understandable way, it helps people enjoy the museum experience as intended, regardless of their background or abilities.

Improving sustainability with green AI

Green AI is another aspect of ‘AI for good’. It refers to eco-friendly artificial intelligence algorithms, models, or systems that use less computational power and produce lower carbon emissions. It has taken on significant importance since Large Language Models (LLMs) came under criticism for their large carbon footprints and energy usage, prompting calls for a thorough review of AI sustainability.

One way of implementing Green AI is to leverage AI systems for efficient inventory and resource management. Machine Learning models can analyse the performance data of equipment and devices, then use this data to help extend the lifespan of resources and ensure their optimal utilisation. They can also schedule updates, hardware upgrades and maintenance proactively, avoiding potential downtime. Furthermore, these models can detect abnormalities in system operations early, allowing organisations to conduct timely maintenance. This can help them save time and money, as well as reduce wastage.
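To make this concrete, the sketch below shows one way such anomaly detection might look in practice. It is a minimal illustration only: the sensor readings and column names are invented, and a production system would use real telemetry and a model validated against historical failures.

```python
# Minimal sketch: flag abnormal equipment readings so maintenance can be
# scheduled proactively. The sensor data below is synthetic and the column
# names are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
readings = pd.DataFrame({
    "temperature_c": rng.normal(45, 3, 1000),
    "power_draw_w": rng.normal(220, 15, 1000),
    "vibration_mm_s": rng.normal(1.2, 0.2, 1000),
})

# Train an unsupervised anomaly detector on normal operating data.
detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(readings)

# Score the most recent readings: -1 marks an outlier worth a maintenance check.
latest = readings.tail(24)
flags = detector.predict(latest)
print(f"{(flags == -1).sum()} of {len(flags)} recent readings flagged for review")
```

In practice the flagged readings would feed a maintenance queue rather than a print statement, but the principle is the same: intervene before equipment fails or runs inefficiently.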

AI models also play a crucial role in computing and energy efficiency. They can analyse and optimise energy consumption patterns, leading to significant improvements in operational efficiency.

Additionally, while LLMs can contribute to carbon emissions, they can also serve as a powerful tool in battling climate change. LLMs can expedite research and innovation processes while maintaining a focus on sustainability. By generating creative and diverse solutions, they can help organisations stay at the forefront of their industries, while keeping sustainability at the core of their operations.

Measuring more than carbon footprint in AI metrics

It is undoubtedly important to measure carbon emissions during the training of models, and accounting for regional differences – since the carbon intensity of electricity varies by location – plays a key role in promoting sustainability. But given the wide range of energy-efficiency measurements across different AI algorithms, it is essential to include additional energy metrics alongside traditional performance indicators. Choosing cloud providers that prioritise eco-friendliness is recommended, as is strategically selecting the locations of data centres; the ultimate aim should be to foster the creation of AI solutions that are not only energy-efficient, but also environmentally friendly.

There is a call to standardise energy and carbon data reporting, which has been seen as a step towards encouraging social responsibility in the field of AI research and development. However, reporting cannot be done without accurate calculations, and carbon measurement is still in its early stages. When calculating the carbon footprint of a model, we should consider all contributing variables, not just the final carbon figure. This is fundamental because, without this knowledge, we are ill-equipped to manage or improve a model’s footprint.
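As an illustration of what such measurement can look like at the level of a single training run, the sketch below assumes the open-source CodeCarbon Python package, which estimates energy use and CO2-equivalent emissions from hardware counters and regional grid data; the training function here is only a placeholder.

```python
# Sketch: estimating energy use and CO2-equivalent emissions for one training
# run, assuming the open-source codecarbon package (pip install codecarbon).
# train_model() is a placeholder for an actual training loop.
from codecarbon import EmissionsTracker

def train_model():
    ...  # model training would happen here

tracker = EmissionsTracker(project_name="demo-training-run", output_dir="./emissions")
tracker.start()
try:
    train_model()
finally:
    emissions_kg = tracker.stop()  # estimated kg CO2-eq for this run

print(f"Estimated emissions: {emissions_kg:.6f} kg CO2-eq")
```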

Fortunately, there are organisations working to solve this challenge. For example, the Green Software Foundation (GSF) is a non-profit organisation that aims to create a trusted ecosystem of people, standards, and best practices for developing green software and AI. The GSF offers various tools and methods to help measure and reduce environmental impact, such as the ‘Impact Framework’, the ‘Software Carbon Intensity’ (SCI) specification, and the Green Software Maturity Matrix[1].
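To give a sense of how the SCI specification frames the problem, its published formula is SCI = ((E × I) + M) per R: operational energy multiplied by grid carbon intensity, plus embodied hardware emissions, divided by a functional unit of the organisation’s choosing. The sketch below applies it with purely illustrative numbers.

```python
# Sketch of the GSF Software Carbon Intensity calculation, SCI = ((E * I) + M) per R.
# E: energy consumed by the software (kWh); I: grid carbon intensity (gCO2e/kWh);
# M: embodied emissions of the hardware share used (gCO2e); R: functional unit,
# e.g. requests served. All values below are illustrative, not real measurements.
def software_carbon_intensity(energy_kwh: float,
                              grid_intensity_g_per_kwh: float,
                              embodied_g: float,
                              functional_units: int) -> float:
    """Return grams of CO2-equivalent per functional unit."""
    return (energy_kwh * grid_intensity_g_per_kwh + embodied_g) / functional_units

# Example: 120 kWh of inference energy, 250 gCO2e/kWh grid, a 50,000 gCO2e
# embodied-hardware share, spread over one million requests.
sci = software_carbon_intensity(120, 250, 50_000, 1_000_000)
print(f"{sci:.3f} gCO2e per request")  # 0.080 gCO2e per request
```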

Inclusion and diversity in the ethical use of AI

Safeguarding ethical use involves laying the groundwork for ethical standards, tackling biases in AI systems, prioritising transparency and explainability, and guarding against privacy risks. The impact on human autonomy and responsibility gaps must also be contemplated, along with calculating the financial and environmental costs of training deep learning models.

There are implications arising from both responsible and irresponsible AI deployment, and it is important to illustrate both sides with examples of AI applications. In healthcare, for example, AI systems are used to assist medical professionals in transparent diagnosis and accountable treatment planning. This boosts patient care, promotes fair and informed decision-making, and contributes to better health outcomes.

In human resources, AI can be used for unbiased staffing processes. It moderates human biases, elevates inclusion and diversity, and promotes evenly balanced opportunities for all candidates.

Finally, in environmental monitoring, AI is used for the transparent monitoring and management of environmental conditions, such as air and water quality, using sensors, transmitters, and data analytics. This helps to care for the environment, protect ecosystems, and support the well-being of communities by addressing environmental hazards.

The unethical use of AI is most prevalent in surveillance systems, especially facial recognition deployed in public spaces. This technology is used for mass surveillance, tracking individuals without their consent and disregarding privacy rights – and in the US in particular it can be easily misused. AI tools can also be used to create deepfakes that spread dangerous misinformation.

Additionally, if the training data contains historical biases, AI systems can perpetuate and amplify prejudice – resulting in unjust treatment that can disproportionately impact certain demographic communities. Finally, social engineering attacks using AI systems can be much more difficult to detect, while prompt injection attacks and LLM poisoning can be used to intentionally cause harm at a much larger scale.

Ethical, sustainable AI

As we collectively strive towards a sustainable future, AI is emerging as a key driving force. It is steering us towards solutions that are not only economically viable, but also environmentally sound and socially responsible.

Organisations should start to leverage sustainable AI, making sure that these technologies have a positive impact on their ESG commitments, while ensuring they are created and used in a way that is ethical, fair, and transparent. In this journey, every algorithm we design, every model we train, and every AI-powered solution we deploy can take us one step closer to our goal of sustainability.


[1] https://medium.com/version-1/what-really-matters-for-green-calculations-a-practical-perspective-0bc0f5c7540c

Empowering banks to protect consumers: The impact of the APP Fraud mandate

Source: Finance Derivative

Thara Brooks, Market Specialist, Fraud, Financial Crime & Compliance at FIS

On 7 October 2024, the Authorised Push Payment (APP) fraud reimbursement mandate came into effect in the UK. The mandate aims to protect consumers, but it has already come under immense scrutiny, receiving both support and criticism from across market sectors. But what does it mean for banks and their customers?

Fraud has become a growing concern for the UK banking system and its consumers. According to the ICAEW, the total value of UK fraud stood at £2.3bn in 2023, a 104% increase since 2022, with estimates that the evolution of AI will lead to even bigger challenges. As the IMF points out, greater digitalisation brings greater vulnerabilities, at a time when half of UK consumers are already “obsessed” with checking their banking apps and balances.

These concerns contributed to the introduction of the Payment Systems Regulator’s (PSR’s) APP fraud mandate, which requires victims of APP fraud to be reimbursed. APP fraud occurs when somebody is tricked into authorising a payment from their own bank account. Unlike more traditional fraud, such as payments made from a stolen bank card, APP fraud previously fell outside the scope of conventional fraud protection, as the transaction is technically “authorised” by the victim.

The £85,000 Debate: A controversial adjustment

The regulatory framework for the APP fraud mandate was initially introduced in May 2022, with the maximum level of mandatory reimbursement originally set at £415,000 per claim. However, when the mandate came into effect, the PSR significantly reduced the maximum reimbursement value to £85,000, causing widespread controversy.

According to the PSR, the updated cap will see over 99% of claims (by volume) being covered, with an October review highlighting just 18 instances of people being scammed for more than £415,000, and 411 instances of more than £85,000, from a total of over 250,000 cases throughout 2023. “Almost all high value scams are made up of multiple smaller transactions,” the PSR explains, “reducing the effectiveness of transaction limits as a tool to manage exposure.”
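A quick back-of-the-envelope check shows why the PSR is comfortable with the lower cap; the sketch below uses only the rounded figures quoted above.

```python
# Sanity check of the PSR's coverage claim using the rounded figures above:
# 411 of roughly 250,000 APP fraud cases in 2023 exceeded £85,000.
total_cases = 250_000
cases_above_cap = 411

share_above_cap = cases_above_cap / total_cases
print(f"Cases above the £85,000 cap: {share_above_cap:.2%}")           # ~0.16%
print(f"Cases within the cap (by volume): {1 - share_above_cap:.2%}")  # ~99.84%
```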

The reduced cap makes a big difference on multiple levels. For financial institutions and payment service providers (PSPs), the lower limit means they’re less exposed to high-value claims. The reduced exposure to unlimited high-value claims has the potential to lower compliance and operational costs, while the £85,000 cap aligns with the Financial Services Compensation Scheme (FSCS) threshold, creating broader consistency across financial redress schemes.

There are naturally downsides to the lower limit, with critics highlighting significant financial shortfalls for victims of high-value fraud. The lower cap may reduce public confidence in the financial system’s ability to protect against fraud, particularly among those handling large sums of money, while small businesses, many of which deal with large transaction amounts, may find the cap insufficient to cover losses.

The impact on PSPs and their customers

With PSPs responsible for APP fraud reimbursement, institutions need to take the next step when it comes to fraud detection and prevention to minimise exposure to claims within the £85,000 cap. Customers of all types are likely to benefit from more robust security as a result.

The Financial Conduct Authority’s (FCA’s) recommendations include strengthening controls during onboarding, improving transaction monitoring to detect suspicious activity, and optimising reporting mechanisms to enable swift action. Such controls are largely in line with the PSR’s own recommendations, with the regulator setting out a number of steps in its final policy statement in December 2023 to mitigate APP scam risks.

These include setting appropriate transaction limits, improving ‘know your customer’ controls, strengthening transaction-monitoring systems and stopping or freezing payments that PSPs consider to be suspicious for further investigation.
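As a purely illustrative sketch of what the monitoring side of these controls might look like, the toy rules below hold or freeze a push payment before release. The thresholds and rules are invented for the example and are not drawn from the PSR’s or FCA’s guidance.

```python
# Toy example of rule-based transaction monitoring for push payments.
# Thresholds and rules are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Payment:
    amount_gbp: float
    payee_is_new: bool
    payments_to_payee_last_hour: int

def review_payment(payment: Payment, transaction_limit_gbp: float = 10_000) -> str:
    """Return 'release', 'hold' or 'freeze' for a proposed push payment."""
    if payment.amount_gbp > transaction_limit_gbp:
        return "hold"    # exceeds the customer's agreed transaction limit
    if payment.payee_is_new and payment.amount_gbp > 1_000:
        return "hold"    # large first payment to a previously unseen payee
    if payment.payments_to_payee_last_hour >= 5:
        return "freeze"  # rapid run of smaller payments to the same payee
    return "release"

print(review_payment(Payment(amount_gbp=2_500, payee_is_new=True,
                             payments_to_payee_last_hour=0)))  # -> hold
```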

All these measures will invariably improve consumer experience, increasing customers’ confidence to transact online safely, as well as giving them peace of mind with quicker reimbursement in case things go awry.

Going beyond the APP fraud mandate

If the PSR’s mandate can steer financial institutions towards implementing more robust security practices, it can only be a good thing. It’s not the only tool that’s shaping the financial security landscape, however.

In October 2024, the UK government introduced new legislation granting banks enhanced powers to combat fraud. An optional £100 excess on fraud claims has been introduced to encourage customer caution and combat moral hazard, while the Treasury has strengthened prevention measures by giving high street banks new powers to delay and investigate payments suspected of being fraudulent for up to three days. The extended processing time for suspicious payments may lead to delays in legitimate transactions, making transparent communication and robust safeguards essential to maintain consumer trust.

Further collaborative efforts, such as Meta’s partnership with UK banks through the Fraud Intelligence Reciprocal Exchange (FIRE) program, can also aid the fight against fraud. Thanks to direct intelligence sharing between financial institutions and the world’s biggest social media platform, FIRE enhances the detection and removal of fraudulent accounts across platforms such as Facebook and Instagram, not only disrupting scam operations, but also fostering a safer digital environment for users. The early stages of the pilot have led to action against thousands of scammer-operated accounts, with approximately 20,000 accounts removed based on shared data.

Additionally, education and awareness are crucial measures to protect consumers against APP fraud. Several high street banks have upgraded their banking channels to share timely content about the signs of potential scams, with increased public awareness helping consumers identify and avoid fraudulent schemes.

Improvements in policing strategies are also significantly contributing to the mitigation of APP fraud. Specialised fraud units within police forces have enhanced the precision and efficiency of investigations. The City of London Police and the National Fraud Intelligence Bureau are upgrading the technology behind Action Fraud, providing victims with a more accessible and customer-friendly service. Collaborative efforts among police, banks, and telecommunications firms, exemplified by the work of the Dedicated Card and Payment Crime Unit (DCPCU), have enabled the swift exchange of information, facilitating the prompt apprehension of scammers.

How AI is expected to change the landscape

The coming months will be critical in assessing these changes, as institutions, businesses and the UK government work together to shape security against fraud in the ever-changing world of finance.

While fraud is a terrifyingly big business, it’s only likely to increase with the evolution of AI, making it even more critical that such changes are effective. According to PwC, “There is a real risk that hard-fought improvements in fraud defences could be undone if the right measures are not put in place to defend against fraud in an AI-enabled world.”

Chatbots can be used as part of phishing scams, for example, and AI systems can already reproduce sampled voices and read text aloud in them, making it possible to send messages from “relatives” whose voices have been spoofed in much the same way as deepfakes.

Along with other innovations, tools and collaborations, however, the APP fraud mandate, UK legislation and FIRE can all help to counter such technological threats. Together, they can give financial institutions a much-needed boost in the fight against fraud, providing a more secure future for customers.

After the tax deadline: Next steps for accountancy firms

Source: Finance Derivative

By Cameron Ford, UK General Manager of Silverfin

For many accountancy firms, tax season has ended. Now, leaders have a chance to reflect on their firm’s performance, how their people are feeling after the busiest period of the year, and consider how they might optimise people, processes and technology for the future.

As a former CFO with experience in senior accountancy roles across multiple firms, I know first-hand the challenges the year-end crunch presents. The intense weeks and months leading up to HMRC deadlines put immense pressure on infrastructure, exposing the limitations of legacy systems and the bottlenecks caused by manual workflows.

The post-busy-season period presents a valuable opportunity to reassess and prepare for the next one. It’s also a time for firms to reflect on evolving client needs and proactively take action to deliver improved future outcomes. Firms should also evaluate whether their current technology is alleviating pressure during peak periods – or adding to the strain.

The risk of inaction

We are living in an era of profound technological change and fast-paced innovation. Firms that fail to evolve with the times will be left behind as more flexible and adaptive competitors race forward. The risk for slow movers is not just reduced competitiveness – it’s industry consolidation locking them out altogether.

For today’s leaders, the choice is no longer whether to transform – but which technologies to adopt. Accountancy firms now have access to an extensive array of powerful solutions. Data analytics tools are delivering insights to power better decision-making. Automation is streamlining workflows, reducing errors and freeing up valuable time to focus on strategic tasks. And the demand for fast, secure access to accurate and timely data is only growing.

Yet, as accountancy technology matures, new challenges are emerging that extend beyond traditional tech solutions as regulators become increasingly zealous. In the UK alone, two-thirds of current business taxes were introduced in the past decade, according to Thomson Reuters. That’s 13 out of 19 business taxes. The sheer pace of regulatory innovation demonstrates the need for accountancy firms to be agile and capable of transforming at speed, as their clients face an ever-evolving and intricate tax landscape.

Future success depends on equipping firms with the ability to meet the demands of both customers and regulators, striking a balance that not only satisfies current expectations but also lays the groundwork for evolving future requirements.

Growing complexity

Corporate tax management illustrates the complex nature of today’s accounting landscape. Changing regulations, new post-Brexit tax requirements and global initiatives – such as the Organisation for Economic Cooperation and Development’s (OECD) Pillar Two, which introduces a global minimum corporate tax rate of 15% – are placing unprecedented demands on tax and accounting professionals.

The most effective response is to adopt specialised software that is designed to manage compliance and evolving regulatory requirements. While adopting new technology can seem daunting, it should be seen as an opportunity, not an obstacle. Yes, there may be initial friction and deployment challenges during the early stages of transformation, but these are temporary. As firms adapt to new tools and workflows, they unlock significant benefits – including streamlined processes, improved accuracy, and the ability to stay ahead of future changes in an increasingly dynamic tax environment.

AI transformation 

AI is rapidly emerging as a game-changing technology for many industries, including accountancy. Its true value lies in acting as a partner and collaborator, taking on the heavy lifting of repetitive manual tasks and freeing up valuable hours so accountants have more time to focus on building stronger client relationships.

To be effective, AI relies on accurate, real-time financial data that is easily accessed and stored in a standardised format. But before even considering training a model, firms must solve their lingering data issues. With multiple bookkeeping systems and large volumes of inconsistent and duplicated data, firms often struggle to extract meaningful insights.

Resolving these issues requires integrating data from various bookkeeping systems using techniques such as cloud syncs and AI enrichment tools. Data must also be stored in a unified format, properly catalogued and free from duplication to maximise its value.
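As a minimal sketch of what that consolidation step could look like, the snippet below merges ledger extracts from two hypothetical bookkeeping systems into one standardised, de-duplicated table. The file names, column names and mapping are illustrative assumptions rather than any vendor’s actual schema.

```python
# Sketch: consolidating ledger extracts from two hypothetical bookkeeping
# systems into one standardised, de-duplicated dataset. File and column names
# are illustrative assumptions only.
import pandas as pd

system_a = pd.read_csv("system_a_ledger.csv")   # columns: Date, Acct, Net
system_b = pd.read_csv("system_b_export.csv")   # columns: txn_date, account, amount

# Map both extracts onto one schema.
system_a = system_a.rename(columns={"Date": "txn_date", "Acct": "account", "Net": "amount"})
combined = pd.concat(
    [system_a, system_b[["txn_date", "account", "amount"]]],
    ignore_index=True,
)

# Normalise types, then drop duplicate entries that appear in both systems.
combined["txn_date"] = pd.to_datetime(combined["txn_date"], dayfirst=True)
combined["amount"] = pd.to_numeric(combined["amount"])
clean = combined.drop_duplicates(subset=["txn_date", "account", "amount"])

clean.to_parquet("unified_ledger.parquet", index=False)  # single catalogued store
```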

By deploying AI on a foundation of clean, reliable and up-to-date data, accountancy firms can enhance their performance during peak seasons and better manage the pressures of increased demand. Plus, digital transformation and the deployment of advanced accountancy and compliance software also put firms in a stronger position to respond to new complexities and challenges that will inevitably emerge in this dynamic marketplace.

Peak season may be over, but now it’s time to plan for the next one, anticipating customer needs and proactively adapting to shifting demands.

Future-proofing financial services investment

Source: Finance Derivative

Adrian Ah-Chin-Kow, Global Commercial Director at leading software escrow company, Escode, discusses how the financial services sector can prepare for the increasing investment ahead of the government’s industrial growth strategy, Invest 2035, ensuring resilience against technological risks.

The UK’s proposed Invest 2035 strategy sets a bold vision: to elevate the UK as a global leader in high-growth sectors. Financial services are at the heart of this roadmap, tasked with driving innovation, sustainability, and competitiveness. But as we look towards the future, it’s critical that the sector strikes a careful balance between embracing strategic investments and maintaining operational resilience in the face of an increasingly complex technological risk landscape.

The digital transformation currently underway in financial services is set to accelerate even further as organisations adopt new technologies like artificial intelligence, blockchain, and cloud computing. These innovations hold immense potential for growth and efficiency, but they also introduce new layers of vulnerability. For financial services to thrive in this environment, firms need to ensure their technology infrastructure is resilient, reliable, and capable of withstanding disruption.

Growing risks in a digital-first world

As government and industry push forward with initiatives to digitise the financial services ecosystem, the sector is becoming more dependent on technology than ever before. With this reliance comes the inevitable rise of new risks—risks that can threaten operations, customer trust, and even the stability of markets.

We’ve seen first-hand the consequences of technology disruptions in this space. When key software providers experience outages or security breaches, the ripple effect can be significant, disrupting not just the companies involved but entire networks of financial institutions that depend on those systems. The impacts of such disruptions, particularly in a sector where reliability is paramount, can extend beyond the immediate downtime, eroding investor confidence and creating long-term reputational damage.

In a world that is becoming more interconnected by the day, it’s crucial that financial services organisations are prepared for these challenges. Protecting against technology failures and ensuring business continuity must be top priorities for any firm that wants to remain competitive in the years to come.

Operational resilience: The foundation of future growth

The ability to withstand and recover from disruption is at the core of what will define successful financial services firms in the future. Operational resilience is no longer just a regulatory requirement—it’s a business imperative that builds trust with investors, customers, and stakeholders. The strategies needed to build this resilience are varied, but there are a few critical components every organisation should consider.

  • Software Escrow: As financial institutions increasingly depend on digital tools, software escrow becomes a fundamental safeguard. We know how crucial escrow agreements are for protecting access to essential tools. If a provider fails or encounters insolvency, escrow ensures that critical software and intellectual property (IP) are held securely by a third party, ready to be released to the firm. In a sector where continuous access to technology is crucial, this arrangement offers peace of mind, ensuring core operations are protected from unexpected interruptions.
  • Stress-testing and Business Continuity: Regular stress-testing and comprehensive business continuity plans are essential components of any resilience strategy. By simulating disruptions, firms can identify weaknesses in their operations and put in place measures to address them. Continuity planning ensures that businesses can continue to operate, even under extreme circumstances, helping to mitigate the impacts of unanticipated events and minimise disruption to clients and markets.
  • Collaborative Resilience Standards: The interconnectivity of today’s financial ecosystem demands industry-wide standards. We’ve seen collaboration across both the private sector and with government initiatives become increasingly important. The UK’s Invest 2035 strategy offers an excellent foundation for fostering these partnerships, helping to establish resilience as a shared priority across the sector. We’re already seeing frameworks like the EU’s Digital Operational Resilience Act (DORA) lead the way in embedding resilience into the financial services supply chain. This kind of regulatory guidance helps institutions understand how to manage risks effectively, reducing overreliance on third-party providers and ensuring that firms can respond quickly to disruptions.

Collectively, these strategies reinforce the importance of being proactive rather than reactive when it comes to risk management. Operational resilience isn’t just about surviving the next crisis—it’s about building a foundation for long-term stability and growth in a rapidly changing environment.

Resilience as the key to securing Invest 2035

As we move towards Invest 2035, operational resilience will be the cornerstone of success. The financial services sector plays a pivotal role in driving economic growth and innovation, and its ability to adapt and respond to disruption will be key to maintaining the UK’s competitiveness on the global stage.

Embracing proactive resilience measures is the key to future success. By incorporating solutions like software escrow, stress-testing, and government-backed collaboration into their operational strategies, financial institutions can secure the UK’s position as a competitive, reliable investment hub.

Looking to the future, the ability to navigate these risks while maintaining operational integrity will determine whether financial services can continue to be the engine of economic growth in the UK. With the right safeguards in place, the sector can not only meet the goals of Invest 2035 but also build a reputation as a safe and dependable destination for global investment.
