Source: Finance Derivative
Smart Legal Contracts – not a smart direction
Does the arrival of smart legal contracts presage the “Susskindesque end of lawyers”, genuinely creating the scenario of “Code Is Law”? Of course not, says Akber Datoo, CEO, D2 Legal Technology, rather, it creates both opportunities and challenges as part of the broader digital agenda.
As both a lawyer and a computer scientist, Akber explains why smart legal contracts increase, not decrease, the need for both the law and lawyers – and calls for legal experts to rapidly extend their skill set to embrace technology and data.
There are no specific barriers in English law to the adoption of smart legal contracts. Defined by the Law Commission as: “A legally binding contract in which some or all of the contractual terms are defined in and/or performed automatically by a computer program”, their work in this area, published on 25 November 2021, states: “We have concluded that the current legal framework is clearly able to facilitate and support the use of smart legal contracts. Current legal principles can apply to smart legal contracts in much the same way as they do to traditional contracts, albeit with an incremental and principled development of the common law in specific contexts. In general, difficulties associated with applying the existing law to smart legal contracts are not unique to them, and could equally arise in the context of traditional contracts.”
Clearly, the definition and this statement are just the beginning of the journey, seeking to encourage innovation to fulfil the commercial promise of the smart legal contract – which is compelling, and expected to revolutionise business over the next decade. Its adoption is, however, not without its challenges – as is arguably true of any significant evolution in any field.
The Solution to Trust is not just the Immutable Contract
As soon as a smart legal contract is deployed on Distributed Ledger Technology (DLT), the agreed automation is unchangeable. On the one hand, this is the very attraction – removing the need for trust between the parties, because the trust is placed in the code. But it also means any failure in that code cannot be amended. Essentially, while the smart legal contract is not immune to legal intervention and other forms of governance, resolving a problem is extremely difficult.
For example, what happens if it turns out the smart legal contract was illegal? If there was fraud involved? If someone made a coding error? Or simply that circumstances have changed? The automation cannot be stopped: even if the courts rule that it should, the code will keep executing. The only option is to set up some form of reverse transaction to make the adjustment – far from an ideal situation.
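The reverse-transaction workaround can be sketched in a few lines. The following is a minimal, hypothetical Python model (the class and field names are illustrative, not any real DLT API) of an append-only ledger: entries can never be edited or deleted, so a mistaken transfer can only be neutralised by recording an equal and opposite entry alongside it.

```python
from dataclasses import dataclass


@dataclass(frozen=True)  # frozen: an entry is immutable once created
class Entry:
    payer: str
    payee: str
    amount: int


class AppendOnlyLedger:
    """Toy model of a DLT ledger: entries are appended, never amended."""

    def __init__(self) -> None:
        self._entries: list[Entry] = []

    def append(self, entry: Entry) -> None:
        self._entries.append(entry)

    def reverse(self, entry: Entry) -> None:
        # The original entry stays on the ledger forever; all we can do
        # is add a mirror-image entry that cancels its economic effect.
        self.append(Entry(payer=entry.payee, payee=entry.payer, amount=entry.amount))

    def balance(self, party: str) -> int:
        received = sum(e.amount for e in self._entries if e.payee == party)
        paid = sum(e.amount for e in self._entries if e.payer == party)
        return received - paid
```

After a reversal the net position is restored, but both the faulty transaction and its correction remain permanently visible on the ledger – which is exactly why ‘after the event protection’ is so awkward in this world.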
The use of DLTs for smart legal contracts highlights a severe lack of ‘after the event protection’. Traditional contracts encourage the growth of trust during relationships between the parties, especially relationship-level agreements such as the ISDA Master Agreement (famously referred to by Briggs J, in a judgment endorsed by the Court of Appeal, as “[…] probably the most important standard market agreement used in the financial world”). They include flexible tools, such as elastic terms like ‘acting reasonably’ and ‘good faith’. In addition, there is the ability to seek mutually acceptable outcomes should the truly unexpected occur, through mediation, arbitration – or the backstop of the courts themselves.
These are not concepts that can be applied to the purist “code is law” philosophy that underpins some views of how smart legal contracts ought to evolve. The result is a language of automation that is restrictive to business, and code that is unstoppable. Yes, we require a degree of immutability and automation – but the law is king over code, and smart legal contracts must be designed to allow the law to intervene if they are to be used for serious commercial transactions.
To make smart legal contracts work correctly, given their immutability and automated nature, both parties need to know – or attempt to know – every possible event that may happen in the future, which is, of course, impractical for most reasonably complex business transactions. Who has the expertise to ensure that every contingency (including mandatory actions ordered by a court of law) is considered and agreed between the parties in the code? The truth is that even to imagine and provide for some of those scenarios, if we are going to empower the code, the role of the lawyer will become more important than ever.
A large part of a lawyer’s job is to tease out the needs and desires of a client, smoothing out contradictions and flagging potential eventualities. Programming a smart legal contract is tantamount to translating those intentions into code – which is great, if both parties are in control of that code. Yet the model being proposed by many in the industry is for lawyers to design the smart legal contract as usual and then hand it over to a developer to draft the code.
Traditional contract interpretation is hard enough; this new world merely exacerbates the difficulties. Does the coder truly understand the legal effect being sought by the lawyer? Does the lawyer truly understand the operation of the code being put in place by the developers? Any errors or misinterpretations will have a significant and severe impact because the smart legal contract is, as suggested above, largely immutable. It is now vital for lawyers to understand code and operate in this digital sphere. Lawyers need to be able to test a software program, just as they test the scenarios anticipated by a contractual clause today. The difference is that rather than using natural language prose, the testing will be done through high-level programming code – debugging through code that, in fact, looks rather like natural language (Solidity, Rust, Vyper and other smart contract languages are not written in 1s and 0s; they resemble natural language by design).
Smart legal contracts are a long way from reaching maturity. There are many issues to address. As noted above, a degree of reversibility will have to be created, otherwise there will be a finite limit on the potential complexity and value of these automated agreements. But the shift is hugely exciting and offers enormous potential to the industry – if the right steps are taken.
Calls from some quarters for smart legal contracts to be based upon natural language so that they can be understood by judges will place a serious limit on the extent to which they can be deployed. Challenges around the management of complexity will constrain the use of automation with any degree of sophistication. It will also create the risk that firms could be sued for negligence due to mistakes in the coding phase leading to contracts failing to achieve the goals of both parties. Calling for natural language and translation is a short-sighted approach and one that will not only delay the inevitable increasing adoption of automation but also add complex layers of failure.
Smart legal contracts offer a great deal of promise. However, there is a huge amount for the legal community to embrace, explore and understand if that promise is to be realised. Not only will skills and toolkits need to change, but lawyers must play an imperative role in understanding the true limits of automation and determining where good, old-fashioned human judgement remains king. The onus is now on lawyers to take ownership of smart legal contracts, discover and embrace new skills, and gain the confidence required to accelerate maturity – without that commitment, smart legal contracts will, at best, fail to deliver on their promise and, at worst, create a global legal mire that could take generations to unpick.
Embedded Finance: The Opportunity Ahead
By Eduardo Martinez Garcia, CEO & Co-founder of Toqio
The current financial landscape is undergoing a significant transformation, disrupting the long-established dominion of major banks and other large financial institutions. Embedded finance, a concept that has thrived in the realm of digital consumer products, is now steadily infiltrating the corporate domain, poised to revolutionize the financial sector further.
This paradigm shift is manifesting in a multitude of ways, with digital embedded finance increasingly becoming an integral part of corporate digital offerings. Distributor payment processing, lending services for suppliers, and supply chain financing are all becoming commonplace – the versatility of corporate embedded finance knows no bounds. Despite the diverse applications, the core objectives remain consistent: enhancing B2B processes, mitigating risks, and fortifying business relationships.
Corporate embedded finance promises to deliver substantial value over the course of the next decade. A burgeoning opportunity beckons, estimated to be worth an astonishing USD 3.7 trillion over the next five years alone. Remarkably, more than 50% of businesses have expressed a preference for cash flow financing through platforms rather than traditional banks, according to a report by McKinsey. The shift observed in consumer embedded finance adoption is creeping into the B2B landscape, and moving more quickly all the time. Consequently, if the high level of adoption of consumer embedded finance carries over into the B2B space – and it is certainly expected to – we are genuinely looking at the next big thing.
Customers are no longer passive passengers in their financial journeys; they have emerged as the navigators, steering the industry’s course while financial institutions focus on risk management. Banks and non-banking financial institutions (NBFIs) remain pivotal, but their control of products is waning. Companies, intimately acquainted with their customers and partners, possess a deeper understanding of their collaborative ecosystem. Consequently, they are better equipped to tailor their financial offerings to meet the needs of their business relationships.
Take Amazon, for instance, which has been offering loans to small businesses operating on its platform for years. Amazon evaluates risk based on a merchant’s payment history, sales volume, projected revenue, and other critical data points. This approach enables Amazon to provide additional value to its sellers while securing a foothold in the financing market. The close rapport Amazon shares with its small business partners positions it with substantially less risk compared to conventional banks.
Shopify has also ingeniously woven embedded finance into the very fabric of its offering. While its core service revolves around delivering an efficient, subscription-based e-commerce platform, it also provides payment processing and lending services, among a myriad of other financial solutions. Shopify boasts an extensive reservoir of data, allowing it to make informed decisions about the financial products it can offer to merchants, all while keeping risk to a minimum.
Historically, financial products have fallen within the purview of major corporations either through partnerships with third parties or in-house service creation. Nevertheless, the rise of digital channels has expedited the decentralization of financial services, and it’s snowballing. Companies spanning various industries, from automakers to retail giants, are recognizing the immense untapped potential in taking control of many functions traditionally handled by financial institutions. While financial institutions will endure, their role is evolving. Their strengths are assessment, management, and specialized services. They must pivot towards analyzing data from a multitude of sources, diving into data lakes to provide genuinely useful risk assessments.
Incumbent banks have demonstrated their staying power and adaptability time and time again, mostly due to being able to leverage their size and relative dependability. They’ve capitalized on their vast customer bases, regulatory compliance expertise, and extensive branch networks to maintain a competitive edge. Additionally, incumbent banks have finally begun to recognize the need to adapt to changing customer expectations and digital transformation.
The future of core banking is likely to strike a balance between fintech disruptors and established incumbents. Collaboration and partnerships between incumbents and fintech startups tend to drive innovation, offering customers cutting-edge digital experiences. Big banks are probably going to find their place in the market modified, and not necessarily in a bad way.
Incumbents and financial behemoths have long been oriented toward long-term financial products, such as 30-year mortgages. But what about short-term business loans? Consider the restaurateur seeking a swift three-month loan to renovate a kitchen or the farmer unable to repay a loan until the crops are harvested and sold, a process spanning six months or more. For traditional banks, these scenarios represent short-term debts, a situation they tend to avoid. This presents a prime opportunity for companies to tailor products that cater to these specific needs, allowing them to define the space.
The evolution of embedded finance is commencing with payments, as it represents one of the least regulated segments in finance, offering ample room for innovation. Credit, closely trailing payments in significance, holds paramount importance. What’s really exciting is that as corporate giants blaze the trail, they pave the way for others to follow suit. This means that small and medium-sized enterprises will also be able to get involved, making embedded finance more inclusive within a given business ecosystem.
Eduardo Martinez Garcia is the CEO & Co-Founder of Toqio. He is an avid entrepreneur who has set up and run successful global ventures in the UK, Spain, and South Africa over the course of the last 20 years.
Hype, Hysteria & Hope: AI’s Evolutionary Journey and What it Means for Financial Services
Written by Gabriel Hopkins, Chief Product Officer at Ripjar
Almost a year to the day since ChatGPT launched, the hype, hysteria, and hope around the technology show little sign of abating. In recent weeks, OpenAI chief Sam Altman was removed from his position, only to return some days later. Rishi Sunak hosted world leaders at the UK’s AI Safety Summit, interviewing the likes of Elon Musk in front of an assembly of world leaders and tech entrepreneurs. Behind the scenes, meanwhile, AI researchers are rumoured to be close to even more breakthroughs within weeks.
What does it all mean for those industries that want to benefit from AI but are unsure of the risks?
It’s possible that some forms of machine learning – what we used to call AI – have been around for a century. Since the early 1990s, those tools have been a key operational element of some banking, government, and corporate processes, while being notably absent from others.
So why the uneven adoption? Generally, it has come down to risk. For instance, AI tools are great for tasks like fraud detection. It is well established that an algorithm can do things that analysts simply can’t, reviewing vast swathes of data in milliseconds. That has become the norm, particularly because it is not essential to understand each and every decision in detail.
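The simplest version of that idea – scanning a whole transaction history in milliseconds and flagging anything statistically unusual – can be illustrated in a few lines. This is a deliberately toy Python sketch (a plain z-score outlier test, nothing like the rich models banks actually deploy) just to make the mechanism concrete:

```python
import statistics


def flag_outliers(amounts: list[float], z_cut: float = 3.0) -> list[int]:
    """Flag the indices of transactions whose amount sits more than
    z_cut standard deviations from the account's mean spend.

    A crude stand-in for real fraud models, which combine hundreds of
    behavioural features rather than a single amount column."""
    mean = statistics.fmean(amounts)
    sd = statistics.pstdev(amounts)
    if sd == 0:
        return []  # all amounts identical: nothing stands out
    return [i for i, a in enumerate(amounts) if abs(a - mean) / sd > z_cut]
```

Even this trivial rule scans thousands of records in well under a millisecond; the real systems differ in sophistication, not in kind.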
Other processes have been more resistant to change. Usually, that’s not because an algorithm couldn’t do better, but rather because – in areas such as credit scoring or money laundering detection – the potential for unexpected biases to creep in is unacceptable. That is particularly acute in credit scoring when a loan or mortgage can be declined due to non-financial characteristics.
While the adoption of older AI techniques has been progressing year after year, the arrival of Generative AI, characterised by ChatGPT, has changed everything. The potential for the new models – both good and bad – is huge, and commentary has divided accordingly. What is clear is that no organisation wants to miss out on the upside. Despite the talk about Generative and Frontier models, 2023 has been brimming with excitement about the revolution ahead.
A primary use case for AI in the financial crime space is to detect and prevent fraudulent and criminal activity. Efforts are generally concentrated around two similar but different objectives. These are thwarting fraudulent activity – stopping you or your relative from getting defrauded – and adhering to existing regulatory guidelines to support anti-money laundering (AML), and combatting the financing of terrorism (CFT).
Historically, AI deployment in the AML and CFT areas has faced concerns about potentially overlooking critical instances compared to traditional rule-based methods. Within the past decade, regulators initiated a shift by encouraging innovation to help with AML and CFT cases. Despite the use of machine learning models in fraud prevention over recent decades, adoption in AML/CFT has been much slower, with headlines and predictions outpacing actual action. The advent of Generative AI looks likely to change that equation dramatically.
One bright spot for AI in compliance over the last five years has been customer and counterparty screening, particularly when it comes to the vast quantities of data involved in high-quality Adverse Media (aka Negative News) screening, where organisations look for early signs of risk in the news media to protect themselves from potential issues.
The nature of high-volume screening against billions of unstructured documents has meant that the advantages of machine learning and artificial intelligence far outweigh the risks and enable organisations to undertake checks which would simply not be possible otherwise.
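To make the screening idea concrete, here is a deliberately crude Python sketch – invented for illustration, and far simpler than the matching and entity-resolution techniques real screening platforms use – that flags documents mentioning a customer’s name alongside a risk keyword:

```python
import re

# Illustrative watch-list of risk keywords; real systems use far richer
# taxonomies, multilingual models, and entity disambiguation.
RISK_TERMS = {"fraud", "laundering", "sanctions", "bribery"}


def tokens(text: str) -> set[str]:
    """Lowercase the text and split it into word tokens."""
    return set(re.findall(r"[a-z]+", text.lower()))


def screen(entity: str, documents: list[str]) -> list[int]:
    """Return the indices of documents that mention every token of the
    entity's name together with at least one risk term - the crudest
    possible form of adverse-media matching."""
    name = tokens(entity)
    hits = []
    for i, doc in enumerate(documents):
        words = tokens(doc)
        if name <= words and words & RISK_TERMS:
            hits.append(i)
    return hits
```

Run over billions of articles, even this naive matcher shows why automation is the only viable option – and why the real challenge is precision: distinguishing the customer from a namesake, and a genuine allegation from a passing mention, is where the machine learning earns its keep.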
Now banks and other organisations want to go a stage further. As Generative AI models start to approach AGI (Artificial General Intelligence), where they can routinely outperform human analysts, the question is when, not if, they can use the technology to better support decisions – and potentially even make decisions unilaterally.
AI Safety in Compliance
The 2023 AI Safety Summit was a significant milestone in acknowledging the importance of AI. The Summit resulted in 28 countries signing a declaration to continue meeting to address AI risks, and led to the inauguration of the AI Safety Institute, which will contribute to future research and collaboration on AI safety.
Though there are advantages to having an international focus on the AI conversation, GPT-style transformer models were the primary focus of the Summit. This risks oversimplifying, or confusing, the broader AI spectrum for those unfamiliar with the field. There is a broad range of AI technologies with hugely varying characteristics, and regulators and others need to understand that complexity. Banks, government agencies, and global companies must take a thoughtful approach to AI utilisation, emphasising its safe, careful, and explainable use both inside and outside of compliance frameworks.
The Road Ahead
The compliance landscape demands a review of standards for responsible AI use. It is essential to establish best practices and clear objectives to help steer organisations away from hastily assembled AI solutions that compromise accuracy. Accuracy, reliability, and innovation are equally important to mitigate fabrication or potential misinformation.
Within the banking sector, AI is being used to support compliance analysts already struggling with time constraints and growing regulatory responsibilities. AI can significantly aid teams by automating mundane tasks, augmenting decision-making processes, and enhancing fraud detection.
The UK can benefit from this latest opportunity. We should cultivate an innovation ecosystem that is receptive to AI across fintech, regtech, and beyond. Clarity from government and thought leaders on AI, tailored to practical implementations in the industry, is key. We must also be open to welcoming new graduates from the growing global AI talent pool to fortify the country’s position in pioneering AI-driven solutions and integrating them seamlessly. Amid industry change, prioritising and backing responsible AI deployment is crucial to the ongoing battle against all aspects of financial crime.
Using AI to support positive outcomes in alternative provision
By Fleur Sexton
Fleur Sexton, Deputy Lieutenant of the West Midlands and CEO of dynamic training provider PET-Xi, with a reputation for success with the hardest to reach, discusses using AI to support excluded pupils in alternative provision (AP).
Exclusion from school is often life-changing for the majority of vulnerable and disadvantaged young people who enter alternative provision (AP). Many face a bleak future, with just 4% of excluded pupils achieving a pass in English and maths GCSEs, and 50% becoming ‘not in education, employment or training’ (NEET) post-16.
Often labelled ‘the pipeline to prison’, the statistics gathered from prison inmates are undeniably convincing: 42% of prisoners were expelled or permanently excluded from school; 59% truanted; and 47% of those entering prison have no school qualifications. With a prison service already in crisis, providing children with the ‘right support, right place, right time’ is not just an ethical response – it makes sound financial sense. Let’s invest in education rather than incarceration.
‘Persistent disruptive behaviour’ – the most commonly cited reason for temporary or permanent exclusion from mainstream education – often results from unmet or undiagnosed special educational needs (SEN) or social, emotional and mental health (SEMH) needs. These pupils find themselves unable to cope in a mainstream environment, which impacts their mental health and personal wellbeing, and their ability to engage in a positive way with the curriculum and the challenges of school routine – a multitude of factors all adding to their feelings of frustration and failure.
Between 2021/22 and 2022/23, councils across the country recorded a 61% rise in school exclusions, with overall exclusion figures rising by 50% compared to 2018/19. The latest statistics from the Department for Education (DfE) show pupils with autism in England are nearly three times as likely to be suspended as their neurotypical peers. With 82% of young people in state-funded alternative provision (AP) having identified special educational needs (SEN) or social, emotional and mental health (SEMH) needs, for many it is their last chance of gaining the education that is every child’s right.
The DfE’s SEND and AP Improvement Plan (March 2023) reported: ‘82% of children and young people in state-place funded alternative provision have identified special educational needs (SEN), and it (AP) is increasingly being used to supplement local SEND systems…’
Some pupils on waiting lists for AP placements have access to online lessons or tutors; others are simply at home, not receiving an education. In oversubscribed AP settings, class sizes have had to be increased to accommodate demand, raising the pupil-to-teacher ratio and decreasing the level of support individuals receive. Other, unregulated settings provide questionable educational advantage to attendees.
AI can help redress the balance and help provide effective AP. The first challenge for teachers in AP is to engage these young people back into learning. If the content of the curriculum used holds no relevance for a child already struggling to learn, the task becomes even more difficult. As adults we rarely engage with subjects that do not hold our interest – but often expect children to do so.
Using context that pupils recognise and relate to – making learning integral to the real world and, more specifically, to their reality – provides a way in. A persuasive essay about school uniforms may fire the debate for a successful learner, but it is probably not going to be a hot topic for a child struggling with a chaotic or dysfunctional home life. If that child is dealing with high levels of adversity – caring for a relative, keeping the household going, facing pressure to join local gangs, being coerced into couriering drugs and weapons around the neighbourhood – school uniform does not hold sway. It has little connection to their life.
Asking the group about the subjects they feel strongly about, or responding to local news stories from their neighbourhoods, and using these to create tasks, will provide a more enticing hook to pique their interest. After all, in many situations, the subject of a task is just the ‘hanger’ for the skills they need to learn – in this case, the elements of creating a persuasive piece, communicating perspectives and points of view.
Using AI, teachers have the capacity to provide this individualised content and personalised instruction and feedback, supporting learners by addressing their needs and ‘scaffolding’ their learning through adaptive teaching.
If the learner is having difficulty grasping a concept – especially an abstract one – AI can quickly produce several relevant analogies to help illustrate and explain. It can also be used to develop interactive learning modules, so the learner has more control and ownership over their learning. When engaged with their learning, pupils begin to build skills, increasing their confidence and commitment.
Identifying and discussing these skills and attitudes towards learning, with the pupil reflecting on how they learn and the ways they learn best, also gives them more agency and autonomy, thinking metacognitively.
Gaps in learning are often the cause of confusion, misunderstandings and misconceptions. If a child has been absent from school they may miss crucial concepts that form the building blocks to more complex ideas later in their school career. Without providing the foundations by filling in these gaps and unravelling the misconceptions, new learning may literally be impossible for them to understand, increasing frustration and feelings of failure. AI can help identify those gaps, scaffold learning and build understanding.
AI is by no means a replacement for teachers or teaching assistants; it is purely additional support. Coupled with approaches that promote engagement with learning, AI can enable these disadvantaged young people to access an education previously denied to them.
According to the DfE, ‘All children are entitled to receive a world-class education that allows them to reach their potential and live a fulfilled life, regardless of their background.’ AI can help the most disadvantaged young people gain the education they deserve, creating a pathway towards educational and social equity.