

What can we expect from data protection in the year ahead?

Camilla Winlo, Head of Data Privacy at Gemserv

The past year has been a turbulent one for cybersecurity, with a number of high-profile breaches hitting the headlines. The pandemic has, of course, played a central role in conversations around data privacy, while entire industries have been accused of data handling malpractice. So, what are the stories we can expect to see shaping the agenda in 2022?

Here are four developments we expect to see in the year ahead:

  • Polarisation around Covid vaccination data will increase

While there are early indications that the combination of vaccination, immunity from previous infection and the evolution of the virus may lead to less significant symptoms for most, Covid still appears capable of making unvaccinated and immunocompromised individuals very ill. In winter, this puts huge pressure on the NHS and the public purse, and we expect that over time this will translate into increased pressure to encourage the unvaccinated to get vaccinated, and for society to impose different rules on those who remain unvaccinated. We started to see this in 2021 and expect it to continue into 2022.

With different quarantine rules for vaccinated and unvaccinated employees and the possibility of compulsory vaccination on the horizon, more and more organisations are going to find themselves processing Covid vaccination data. Quarantine measures are set to continue to help stop the spread of the virus, which in turn will mean that organisations will still need to take a hybrid approach to work. Some employees will test positive for Covid and will not be allowed to leave their homes, but they won't have symptoms that would otherwise stop them from working. Others will have come into close contact with a positive case and will also need to isolate. All of this will have an impact on employers.

Going into 2022, we expect tensions between pro- and anti-vaxxers to rise. The growing body of real-world vaccine safety data, which is what many vaccine-hesitant people say they are waiting for, is unlikely to ease these tensions much: access to information is itself polarised, and in some cases hesitancy is rooted in genuine and well-founded concerns, for example where individuals have health conditions that make taking the vaccine a more personally risky choice. That makes Covid status an employee safeguarding issue, given the risk of discrimination between employees. There will also be companies that want – or are compelled – to terminate unvaccinated employees, as well as some that will do the reverse. We can expect to see these decisions appealed as the ‘grey area’ around what counts as a medical exemption is clarified.

  • Tighter regulations around ad tech

The European Data Protection Board (EDPB) published its 2021-2023 strategy in December, and part of that strategy includes more proactive monitoring of ad tech. Ad tech is under huge pressure to tighten up its data protection practices after the Irish Council for Civil Liberties sued a branch of the Interactive Advertising Bureau (IAB) and others over what it described as “the world’s largest data breach” in 2021.

IAB found itself under fire for its role in facilitating a process known as real-time bidding, where personal data is passed between hundreds of ad brokers and related firms during an auction run on behalf of paying brands in the moments before a website loads. During the milliseconds between clicking on a page and its loading, everything from the type of device an individual is using to limited location data and browsing history can be shared with brokers to better target that person.
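To make that data flow more concrete, the sketch below shows, in Python, the kind of simplified bid-request payload that might be broadcast to brokers during such an auction. It is an illustrative mock-up loosely inspired by OpenRTB-style requests rather than any exchange's real schema, and every field name and value in it is assumed for the example.

```python
# Illustrative sketch only: a simplified, OpenRTB-inspired bid request showing
# the categories of personal data described above being shared with brokers in
# the milliseconds before a page loads. Field names are assumed for the example
# and do not reproduce any real ad-exchange schema.
import json
import uuid

bid_request = {
    "id": str(uuid.uuid4()),           # unique auction ID for this page load
    "device": {
        "type": "mobile",              # type of device the visitor is using
        "os": "Android",
        "ip_truncated": "203.0.113.0"  # coarse network information
    },
    "geo": {
        "country": "GB",
        "city": "Manchester"           # limited location data
    },
    "user": {
        "buyer_uid": "a1b2c3",         # pseudonymous ID linking browsing history
        "segments": ["finance", "travel", "parenting"]  # inferred interests
    },
    "site": {
        "page": "https://example.com/article",
        "categories": ["news"]
    },
    "tmax_ms": 100                     # brokers must respond within ~100 ms
}

# Each broker receiving this payload can combine it with its own profiles
# to decide how much the impression is worth to a paying brand.
print(json.dumps(bid_request, indent=2))
```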

The breach spurred numerous complaints from the likes of noyb (none of your business), the European Centre for Digital Rights, as well as various court cases arguing that the ad tech industry is structured in a fundamentally unlawful way. Better regulation around ad tech needs to be put in place not just for the advertisers themselves, but for online retailers too. It's going to be incredibly important for the economy as a whole that the ad tech industry gets this right, but there is a lot of work to do to get there.

  • Regulatory action around Artificial Intelligence (AI) will ramp up

While the European Data Protection Board (EDPB) strategy highlights the need for more proactive monitoring of AI use, the UK National Data Strategy focuses on making sure AI works and that the UK is a leader in the field. Data privacy must be a priority, or the result will be poor-quality solutions that don't work as intended. AI-specific regulations are set to be enforced, and I think we'll see some interesting actions.

After facial recognition company Clearview AI was issued a Notice of Intent to fine following a number of breaches of national data protection law, conversations around the practices of ethical data collection and analysis have come to the forefront of public attention. It’s essential that organisations that want to harness the possibilities of AI and data-driven innovation in the UK do so in a way that protects individuals.

Organisations should be entitled to trust that providers like Clearview AI are engaging in ethical practices and that their services can be used lawfully, so it is reassuring to see the regulator taking strong action to hold AI innovators to a trustworthy standard. Whether it's fighting crime, preventing fraud or other forms of safeguarding through data, when the public and private sectors combine, they must ensure the right processes are put in place in order to comply with data protection regulation.

  • Privacy Shield 2: A new basis for sharing data between the EU and the US

The Privacy Shield framework was the second attempt by the EU and the US to create a secure mechanism for data sharing. It was thrown out in court after judges deemed it insufficient to provide adequate safeguards for the transfer of personal data from the EU to the US, and the two sides have been working ever since to replace it.

The exchange is effectively a one-way deal: cultural and legal differences between the EU and the US mean that personal data in the EU is protected in different ways to personal data in the US. The purpose of a Privacy Shield successor is to provide a way for EU data to be processed by US companies without losing those protections. I expect we will see some major announcements next year, including technological changes by Big Tech household names, and that in turn will lead to work for UK and EU businesses.

Regulations are indispensable to the proper functioning of economies and societies, and protecting those structures requires the right data protection measures. Sound data protection rules are critical to ensuring that organisations operate properly and that both customer and employee data are handled correctly. As businesses and governments continue to generate more data than ever, we need to take regulatory action to create secure, ethical data storage and sharing practices.




Hype, Hysteria & Hope: AI’s Evolutionary Journey and What it Means for Financial Services

Source: Finance Derivative

Written by Gabriel Hopkins, Chief Product Officer at Ripjar

Almost a year to the day since ChatGPT launched, the hype, hysteria and hope around the technology show little sign of abating. In recent weeks, OpenAI chief Sam Altman was removed from his position, only to return some days later, and Rishi Sunak hosted the UK's AI Safety Summit, interviewing the likes of Elon Musk in front of an assembly of world leaders and tech entrepreneurs. Behind the scenes, AI researchers are rumoured to be close to even more breakthroughs within weeks.

What does it all mean for those industries that want to benefit from AI but are unsure of the risks?

Some forms of machine learning – what we used to simply call AI – have arguably been around for a century. Since the early 1990s, those tools have been a key operational element of some banking, government and corporate processes, while being notably absent from others.

So why the uneven adoption? Generally, it has come down to risk. AI tools are great for tasks like fraud detection, for instance: it is well established that an algorithm can do things analysts simply can't by reviewing vast swathes of data in milliseconds. That has become the norm, particularly because it is not essential to understand each and every decision in detail.
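As a rough illustration of the kind of pattern-spotting involved, the sketch below scores a batch of synthetic card transactions for anomalies using an isolation forest. It is a minimal example with invented features and thresholds, not a description of any bank's actual fraud model.

```python
# Minimal sketch of algorithmic fraud screening: score many transactions for
# anomalies in one pass. Features and thresholds are invented for illustration;
# real systems use far richer signals and bespoke models.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Synthetic purchase history: [amount_gbp, hour_of_day, distance_from_home_km]
normal = np.column_stack([
    rng.gamma(2.0, 20.0, 5000),   # typical spend amounts
    rng.integers(8, 22, 5000),    # daytime purchases
    rng.exponential(5.0, 5000),   # close to home
])

model = IsolationForest(contamination=0.01, random_state=0).fit(normal)

# New transactions arriving in a batch: the last one is unusual on every axis.
incoming = np.array([
    [35.0, 13, 2.0],
    [18.5, 19, 7.5],
    [950.0, 3, 4200.0],
])

scores = model.decision_function(incoming)  # lower score = more anomalous
flags = model.predict(incoming)             # -1 marks a suspected outlier

for tx, score, flag in zip(incoming, scores, flags):
    status = "REVIEW" if flag == -1 else "ok"
    print(f"amount={tx[0]:8.2f} hour={int(tx[1]):2d} dist={tx[2]:7.1f} -> {status} ({score:+.3f})")
```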

Other processes have been more resistant to change. Usually, that's not because an algorithm couldn't do better, but because – in areas such as credit scoring or money laundering detection – the potential for unexpected biases to creep in is unacceptable. That is particularly acute in credit scoring, where a loan or mortgage can be declined due to non-financial characteristics.

While the adoption of older AI techniques has progressed year after year, the arrival of Generative AI, exemplified by ChatGPT, has changed everything. The potential of the new models – both good and bad – is huge, and commentary has divided accordingly. What is clear is that no organisation wants to miss out on the upside: with all the talk about Generative and Frontier models, 2023 has been brimming with excitement about the revolution ahead.



Two Objectives

A primary use case for AI in the financial crime space is to detect and prevent fraudulent and criminal activity. Efforts are generally concentrated around two similar but distinct objectives: thwarting fraudulent activity – stopping you or your relative from getting defrauded – and adhering to existing regulatory guidelines to support anti-money laundering (AML) and combating the financing of terrorism (CFT).

Historically, AI deployment in the AML and CFT areas has faced concerns about potentially overlooking critical instances compared to traditional rule-based methods. Within the past decade, regulators began to shift that stance by encouraging innovation to help with AML and CFT cases. Yet despite the use of machine learning models in fraud prevention over recent decades, adoption in AML/CFT has been much slower, with a prevalence of headlines and predictions over actual action. The advent of Generative AI looks likely to change that equation dramatically.

One bright spot for AI in compliance over the last five years has been customer and counterparty screening, particularly the vast quantities of data involved in high-quality Adverse Media (also known as Negative News) screening, where organisations look for early signs of risk in the news media to protect themselves from potential issues.

The nature of high-volume screening against billions of unstructured documents has meant that the advantages of machine learning and artificial intelligence far outweigh the risks and enable organisations to undertake checks which would simply not be possible otherwise.
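For a flavour of what screening unstructured text involves, here is a deliberately simple sketch that flags news snippets mentioning something close to a counterparty's name alongside risk-related keywords. The names, snippets, keyword list and similarity threshold are all invented for illustration; production adverse media screening relies on far more sophisticated entity resolution and language models.

```python
# Toy adverse media screening sketch: flag news snippets that mention something
# close to a counterparty's name alongside risk-related terms. Thresholds,
# keywords and snippets are invented for illustration only.
from difflib import SequenceMatcher

RISK_TERMS = {"fraud", "laundering", "sanctions", "bribery", "terrorism"}

def name_similarity(a: str, b: str) -> float:
    """Crude fuzzy match between two names (0.0 to 1.0)."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def screen(counterparty: str, snippets: list[str], threshold: float = 0.8) -> list[str]:
    """Return snippets that look like adverse media about the counterparty."""
    hits = []
    for text in snippets:
        words = text.lower().split()
        # Compare the name against each two-word window in the snippet.
        mentions_name = any(
            name_similarity(counterparty, " ".join(words[i:i + 2])) >= threshold
            for i in range(len(words))
        )
        mentions_risk = any(term in words for term in RISK_TERMS)
        if mentions_name and mentions_risk:
            hits.append(text)
    return hits

news = [
    "Acme Holdings opens new office in Leeds",
    "Acme Holdigns under investigation for money laundering",  # misspelled name still caught
    "Local bakery wins award",
]
print(screen("Acme Holdings", news))
```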

Now banks and other organisations want to go a stage further. As Generative AI models start to approach AGI (Artificial General Intelligence), routinely outperforming human analysts, the question is when, not if, they can use the technology to better support decisions and potentially even make decisions unilaterally.


AI Safety in Compliance

The 2023 AI Safety Summit was a significant milestone in acknowledging the importance of AI. The Summit resulted in 28 countries signing a declaration to continue meeting to address AI risks, and it led to the inauguration of the AI Safety Institute, which will contribute to future research and collaboration on AI safety.

Though there are advantages to having an international focus on the AI conversation, GPT-style transformer models were the primary focus of the Summit. This risks oversimplifying, or confusing, the broader AI spectrum for those less familiar with the field: there is a broad range of AI technologies with hugely varying characteristics, and regulators and others need to understand that complexity. Banks, government agencies and global companies must take a thoughtful approach to AI utilisation, emphasising its safe, careful and explainable use both inside and outside compliance frameworks.


The Road Ahead

The compliance landscape demands a review of standards for responsible AI use. It is essential to establish best practices and clear objectives to help steer organisations away from hastily assembled AI solutions that compromise accuracy. Reliability and innovation matter just as much as accuracy if fabrication and potential misinformation are to be avoided.

Within the banking sector, AI is being used to support compliance analysts already struggling with time constraints and growing regulatory responsibilities. AI can significantly aid teams by automating mundane tasks, augmenting decision-making processes, and enhancing fraud detection.

The UK can benefit from this latest opportunity. We should cultivate an innovation ecosystem that is receptive to AI across fintech, regtech and beyond. Clarity from government and thought leaders on AI, tailored to practical implementations in industry, is key. We must also be open to welcoming new graduates from the growing global AI talent pool to fortify the country's position in pioneering AI-driven solutions and integrating them seamlessly. Amid industry change, prioritising and backing responsible AI deployment is crucial for the successful, ongoing battle against all aspects of financial crime.



Using AI to support positive outcomes in alternative provision

By Fleur Sexton

Fleur Sexton, Deputy Lieutenant of the West Midlands and CEO of PET-Xi, a dynamic training provider with a reputation for success with the hardest to reach, discusses using AI to support excluded pupils in alternative provision (AP).

Exclusion from school is often life-changing for the vulnerable and disadvantaged young people who enter alternative provision (AP). Many face a bleak future, with just 4% of excluded pupils achieving a pass in English and maths GCSEs, and 50% becoming 'not in education, employment or training' (NEET) post-16.

Alternative provision is often labelled 'the pipeline to prison', and the statistics gathered from prison inmates are undeniably convincing: 42% of prisoners were expelled or permanently excluded from school; 59% truanted; and 47% of those entering prison have no school qualifications. With a prison service already in crisis, providing children with the 'right support, right place, right time' is not just an ethical response; it makes sound financial sense. Let's invest in education rather than incarceration.

‘Persistent disruptive behaviour’ – the most commonly cited reason for temporary or permanent exclusion from mainstream education – often results from unmet or undiagnosed special educational needs (SEN) or social, emotional and mental health (SEMH) needs. These pupils find themselves unable to cope in a mainstream environment, which affects their mental health and personal wellbeing, and their ability to engage positively with the curriculum and the challenges of school routine, a multitude of factors all adding to their feelings of frustration and failure.

Between 2021/22 and 2022/23, councils across the country recorded a 61% rise in school exclusions, with overall exclusion figures rising by 50% compared to 2018/19. The latest statistics from the Department for Education (DfE) show that pupils with autism in England are nearly three times as likely to be suspended as their neurotypical peers. With 82% of young people in state-funded alternative provision (AP) having identified special educational needs (SEN) or social, emotional and mental health (SEMH) needs, for many it is their last chance of gaining the education that is every child's right.

The DfE's SEND and AP Improvement Plan (March 2023) reported that '82% of children and young people in state-place funded alternative provision have identified special educational needs (SEN), and it (AP) is increasingly being used to supplement local SEND systems…'

Some pupils on waiting lists for AP placements have access to online lessons or tutors; others are simply at home, not receiving an education. In oversubscribed AP settings, class sizes have had to be increased to accommodate demand, raising the pupil-to-teacher ratio and decreasing the level of support individuals receive. Other, unregulated settings provide questionable educational benefit to attendees.

AI can help redress the balance and provide effective AP. The first challenge for teachers in AP is to re-engage these young people with learning. If the content of the curriculum holds no relevance for a child already struggling to learn, the task becomes even more difficult. As adults we rarely engage with subjects that do not hold our interest, yet we often expect children to do so.

Using context that pupils recognise and relate to – making learning integral to the real world and, more specifically, to their reality – provides a way in. A persuasive essay about school uniforms may fire the debate for a successful learner, but it is probably not going to be a hot topic for a child struggling with a chaotic or dysfunctional home life. If that child is dealing with high levels of adversity – being a carer for a relative, keeping the household going, dealing with pressure to join local gangs, being coerced into couriering drugs and weapons around the neighbourhood – school uniform does not hold sway. It has little connection to their life.

Asking the group about the subjects they feel strongly about, or responding to local news stories from their neighbourhoods, and using these to create tasks will provide a more enticing hook to pique their interest. After all, in many situations the subject of a task is just the 'hanger' for the skills they need to learn – in this case, the elements of creating a persuasive piece and communicating perspectives and points of view.

Using AI, teachers have the capacity to provide this individualised content and personalised instruction and feedback, supporting learners by addressing their needs and ‘scaffolding’ their learning through adaptive teaching.

If the learner is having difficulty grasping a concept – especially an abstract one – AI can quickly produce several relevant analogies to help illustrate and explain. It can also be used to develop interactive learning modules, so the learner has more control and ownership over their learning. When engaged with their learning, pupils begin to build skills, increasing their confidence and commitment.

Identifying and discussing these skills and attitudes towards learning, with the pupil reflecting on how they learn and the ways they learn best, also gives them more agency and autonomy and encourages them to think metacognitively.

Gaps in learning are often the cause of confusion, misunderstandings and misconceptions. If a child has been absent from school they may miss crucial concepts that form the building blocks to more complex ideas later in their school career. Without providing the foundations by filling in these gaps and unravelling the misconceptions, new learning may literally be impossible for them to understand, increasing frustration and feelings of failure. AI can help identify those gaps, scaffold learning and build understanding.

AI is by no means a replacement for teachers or teaching assistants; it is purely additional support. Coupled with approaches that promote engagement with learning, AI can enable these disadvantaged young people to access an education previously denied them.

According to the DfE, ‘All children are entitled to receive a world-class education that allows them to reach their potential and live a fulfilled life, regardless of their background.’ AI can help support the most disadvantaged young people towards gaining the education they deserve, and creating a pathway towards educational and social equity.



How collaborative robots can support productivity for 2024 

By Stacey Moser, Chief Commercial Officer, Universal Robots 

The past year has been no easy feat for businesses, with an unpredictable global economy hitting manufacturers particularly hard. As we move into 2024, there are no signs of this relenting, with the UK sector recording its worst performance since the 1980s. Simultaneously, the manufacturing industry is battling 74,000 unfilled vacancies, creating a £6.5 billion economic shortfall which is hindering companies' ability to fight back.

Increasing productivity will be a key focus in the pursuit of profitability in 2024. Accelerating the adoption of automation within the manufacturing process offers a solution, with one study suggesting that automation has the potential to contribute $15 trillion to the global economy by 2030. However, many businesses currently lack the funds or resources to adopt large, expensive industrial robots.

Enter collaborative robots (cobots): lightweight robotic arms that can automate repetitive tasks usually requiring the skills and manpower of human workers. Compatible with the traditional industrial robots commonly employed by manufacturers, cobots work alongside humans to offer a wide range of benefits, all underpinned by the ability to improve productivity, assure staff safety and improve worker wellbeing.

Reimagining production lines for 2024 

Due to the nature of the industry, manufacturing workers often face so-called dull, dirty and dangerous jobs. For example, palletisation – the process of stacking, loading and securing goods onto pallets – has traditionally been a manual operation that requires staff to perform strenuous tasks repetitively. Over the long term this can cause issues for employees such as musculoskeletal damage, as well as mental fatigue, while on the factory output side, goods of inconsistent quality become more commonplace.

Working alongside humans, cobots can take on these undesirable tasks, preventing staff injuries while improving manufacturing quality as human error is reduced. With minimal human effort necessary to operate cobots, these autonomous colleagues can help mitigate the labour shortage issue that is currently leaving many manufacturers vulnerable. Cobots enable workers to be more productive as they focus on more valuable tasks that require more cognition, dexterity and reasoning, in turn unlocking further business value.

As employees take on more rewarding work, job satisfaction naturally improves. In a sector that faces labour gaps, this can go far in improving employee retention. Prospective workers who may previously have been put off the manufacturing industry by the necessity of performing risky and monotonous tasks may now see a different future ahead, one where the prospect of using modern collaborative technologies excites and enthuses. Reducing employee turnover and creating a stable pipeline of future talent also ensures manufacturers can not only maintain but increase productivity as another year of global economic volatility approaches.

Reducing gaps between competitors 

With the current cost-of-living crisis projected to persist through 2024, consumer behaviour is becoming increasingly unpredictable. Cobots provide the flexibility to adapt to whatever circumstances manufacturers find themselves in, whether production must be scaled up or down, or an expansion into new markets is required. Those that cannot swiftly adapt risk being left behind, so automation becomes a necessity for manufacturers that want to keep up with the competition.

Not only can cobots provide flexibility in scaling production, but from palletisation and machine tending to quality inspection, they can perform a wide range of tasks without a need for rest. If necessary, cobots can be programmed and redeployed to perform multiple tasks and can switch between these with ease. By eliminating potential pain points between different functions, cobots can slash factory downtime, improving productivity across the board.

With less human labour required to man machinery, the possibility of longer machine tending shifts is unlocked. This could mean the development of 24/7 factories, with downtime kept to a bare minimum. Manufacturers could see goods produced and out of the door faster than ever before, a colossal productivity boost to consider as 2024 approaches.  

Challenging current misconceptions 

Despite cobots being widely available to manufacturers of all sizes, many have yet to consider these modern automation solutions a possibility. Research shows that the biggest hurdles to implementing automation are capital cost and a lack of internal knowledge and experience with the technology. On the cost issue, however, cobots have been shown to achieve return on investment (ROI) in as little as 12 months.

A common myth is that cobots are difficult to implement or use. In fact, the technology is designed with an approachable user interface, meaning employees do not require specialist expertise to operate or interact with it. Meanwhile, software designed for use alongside cobots, such as Universal Robots' PolyScope, makes programming cobots to complete tasks intuitive. Modern cobots – such as the UR20 – are lightweight and have a small footprint, eliminating common issues around a lack of factory floor space too.

Finally, the advent of AI and automation tools such as ChatGPT has fuelled fears that robots may soon take over human jobs, and manufacturers who fear a backlash may be hesitant to begin deploying and reaping the benefits of collaborative automation. It is important to remember that cobots are designed to work alongside humans, not replace them: using cobots on a factory floor allows employees to work with robots, not like them. As manufacturers look to improve business resilience in 2024, welcoming cobots into the workforce can also facilitate massive productivity gains.

Forecasting for the year ahead 

As manufacturers carefully consider budgets for 2024, automation and cobot technologies should be at the very top of the list. Deployment of these technologies is now essential for increasing productivity, quality and efficiency. Organisations that hesitate to invest today risk losing hard-gained momentum to more innovative competitors.  

The future of manufacturing is here, and it belongs to those who innovate and automate.

