Technology
Embracing the dawn of ethical Generative AI
by Hana Rizvić, Head of AI at Intellias
As we mark one year since the introduction of ChatGPT, it’s a good time to consider what might come next. The ‘poster child’ of generative AI, ChatGPT has become the most widely used AI tool, responding to millions of user queries per day. What’s more, it has attracted huge levels of investment and recently dominated mainstream news headlines following the sacking and reinstatement of OpenAI’s CEO.
People across the world have had their first real experience of a Large Language Model (LLM) by using ChatGPT. The technology has already been banned in many education settings and has helped fuel the debate about how AI technologies can and should be regulated. The recent Global AI Safety Summit, for example, brought together over 100 leaders from government, industry and academia in an effort to build consensus on matters including AI safety and testing. Similarly, the upcoming EU AI Act looks set to play a pivotal role in the future GenAI landscape, setting a common regulatory and legal framework. Its aim is to “make sure that AI systems used in the EU are safe, transparent, traceable, non-discriminatory and environmentally friendly. AI systems should be overseen by people, rather than by automation, to prevent harmful outcomes.”
Slated to come into effect in 2024-25, the Act also introduces an 18-month transition period for organisations to adjust, reminiscent of the GDPR’s implementation. Overall, it marks a significant step towards ensuring that AI advancement is balanced with the need for fairness, accountability, and transparency.
Welcome ethical GenAI
We are entering a critical phase where innovation will be more closely aligned with ethical considerations. The impact of AI is already evident in various aspects of life, pointing to a future where, ideally, its use is not only widespread but also guided by principled decision-making. In this context, the emphasis will be on using AI to address appropriate problems, not just any problem.
In particular, the early iterations of GenAI platforms have demonstrated their potential but also the need for careful application. In many organisations, GenAI has already improved both customer and employee experiences, with advanced chatbots capable of mimicking human interaction taking automated customer service to a whole new level by providing quick and relevant responses. At its best, this use case highlights AI’s dual purpose: to enhance human capabilities while maintaining a focus on human-centred experiences.
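To make the chatbot use case concrete, below is a minimal sketch of how an LLM-backed customer-service reply might be generated, assuming the OpenAI Python SDK; the model name, system prompt and knowledge-base snippets are illustrative placeholders rather than a description of any particular vendor’s deployment.

```python
# Minimal sketch of an LLM-backed customer-service reply (illustrative only).
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY
# environment variable; the model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

def answer_customer(query: str, knowledge_snippets: list[str]) -> str:
    """Draft a quick, relevant response grounded in retrieved help-centre text."""
    context = "\n".join(knowledge_snippets)
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": (
                    "You are a polite customer-service assistant. "
                    "Answer only from the provided context and "
                    "offer to escalate to a human agent if unsure."
                ),
            },
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {query}"},
        ],
        temperature=0.2,  # keep answers consistent and on-topic
    )
    return response.choices[0].message.content

# Example usage:
# print(answer_customer("How do I reset my password?",
#                       ["Passwords can be reset from Settings > Security."]))
```

Grounding the reply in retrieved help-centre text and keeping a human escalation path in the loop is one way such a deployment can stay aligned with the careful, human-centred application described above.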
However, organisations need to ensure that AI is deployed efficiently, with ethics, accuracy, and bias in mind. During the transition, they must also address concerns about job displacement, as many employees fear the technology’s effect on the human workforce. It is vital to ensure employees understand the place AI will have in their organisation. Ultimately, it needs to be reiterated that the technology should enhance human roles rather than replace them, and that it will create new job opportunities and provide routes to transform existing career paths.
Dig a little deeper, and it’s clear that a diverse range of sectors, from healthcare and marketing to finance and entertainment, is poised for significant innovation as the adoption of AI accelerates. The next phase of development and implementation is expected to deliver a range of strategic breakthroughs, from increased efficiency and improved decision-making to greater use of automation – the challenge lies in ensuring these developments are ethical and beneficial for all.
The inclusive and collaborative path ahead
Looking further ahead, AI has tremendous potential to bridge the current skills gap, especially when combined with strategic investment and education. Ultimately, the aim should be to develop inclusive, ethical AI systems that serve the broader society, including marginalised groups. In an era where AI has the potential to impact every conceivable field of industry, commerce and broader society, international cooperation is essential. Ensuring that AI’s development and implementation are regulated, ethical, and inclusive is crucial to protect the best interests of humanity.
As we navigate this evolving AI landscape, our collective focus should be on responsibly utilising AI’s capabilities, ensuring it enhances rather than replaces human endeavour while adhering to the highest ethical standards. The future of AI requires a thoughtful and collaborative approach if its full potential is to be realised in the long term.
Business
Driving UK business growth with AI reskilling, even during economic uncertainty
Alexia Pedersen, SVP International at O’Reilly
Amid ongoing economic challenges, UK businesses are grappling with salary stagnation and limited hiring. Employees, eager to advance their careers, are turning to digital reskilling as a pathway forward. Our latest research found that almost four in five (79%) UK employers have seen staff request digital upskilling opportunities over the last twelve months to strengthen their career prospects, particularly in roles linked to emerging technologies like AI and machine learning (ML).
Our platform has witnessed a surge in demand for learning materials on AI programming (66%), data analysis (59%), and operational AI/ML (54%). We’ve also seen an uptick in demand for general AI literacy as IT teams encounter the hallucinations generative AI tools can exhibit.
However, given the accelerated integration of generative AI in most enterprises, the need for general AI literacy has extended beyond IT teams. In fact, 60% of enterprises are expected to have adopted generative AI in some form by the end of this year. Yet, while most business leaders agree their workforces need to be reskilled in GenAI, only 10% of workers are currently trained in GenAI tools. As a result, non-technical employees are now seeking reskilling opportunities in AI and ML, cybersecurity, data analysis and programming.
This shift reflects widespread recognition of how emerging technologies can redefine roles and unlock new opportunities. So, how can employers ensure that every employee – not just IT – develops the skills to navigate and leverage AI and other digital tools?
Cultivating a culture of continuous learning
The integration of digital technologies requires more than just adopting the latest tools; it demands a skilled workforce committed to long-term innovation and growth. Businesses deploying AI must prepare every employee to effectively use these tools. Here, a continuous learning approach will ensure that digital transformation benefits the organisation at every level, driving resilience and adaptability within an evolving tech landscape.
Embedding learning in daily workflows, encouraging curiosity, and supporting tailored development initiatives can help achieve this goal. Cross-functional collaboration and knowledge-sharing can help to break down silos, allowing diverse perspectives to be shared amongst teams.
To foster a culture of continuous learning, people teams should emphasise to management the value of “re-recruiting”: continuing to invest in and engage with existing talent as consciously as during the hiring process. The best results stem from having an executive sponsor who leads by example, championing learning at all levels. At the same time, employees should feel empowered to take ownership of their own growth, creating a culture where development is an ongoing, shared responsibility between individuals and the organisation.
Joining a company is only the beginning, and sustaining a valuable relationship depends on both the organisation’s support and the employee’s commitment to their own continuous development. To thrive, employees must actively seek out skill-building opportunities and leverage the learning resources available to them. Doing so will help employees remain agile within an evolving technological landscape, while also enhancing their own productivity and contributing to overall organisational success.
Real-time learning
For employees seeking opportunities for personal growth, employers can harness the ‘in the flow of work’ approach to bridge the gap between learning and day-to-day responsibilities, providing staff with real-time access to quality learning content.
The concept was coined by Josh Bersin to describe a paradigm in which employees learn something new, quickly apply it and return to their work in progress. It differs from traditional approaches such as attending a seminar or conference; those formats are effective, but many employees simply don’t have the time to devote to them or prefer to learn at a time that suits them best.
Instead, it entails providing employees with tools that allow them to quickly find contextually relevant answers to their questions at a time that suits their schedule. Companies can offer ‘in the flow of work’ learning opportunities via an L&D partner to tailor materials to an individual’s unique learning style and objectives.
This is particularly important not only for young talent who are new to the workforce but also for existing employees who are proactively seeking opportunities to develop their skills and advance their careers. In turn, this approach to workplace learning will increase employee engagement and productivity, fostering innovation and growth that improves the bottom line.
Preparing for the future
As businesses face a rapidly evolving landscape, a continuous learning strategy focused on digital reskilling and upskilling can help them remain competitive. It empowers employees to take charge of their personal growth, fostering a resilient workforce prepared for tomorrow’s challenges.
For companies navigating hiring freezes or budget constraints, prioritising AI literacy and skills development amongst their employees in critical areas such as cybersecurity, cloud, and data analysis can help drive productivity and innovation while ensuring that organisations remain agile during times of technological change. Above all, supporting reskilling today will develop the foundations for a thriving, adaptable workforce ready to face tomorrow’s challenges.
Business
Turning the tide: how Gen AI gives legacy banks a new edge in customer-first innovation
Source: Finance Derivative
Author: Korbinian Krainau, Associate Managing Director, Financial Services
Imagine traditional banks as sturdy old ships, built to weather any storm. They come with history, reputation and respect. But with the shifting tides of customer expectations, they could find themselves not quite equipped for today’s digital currents.
In fact, according to the third edition of Publicis Sapient’s Global Banking Benchmark Study 2024, 75% of legacy banks admit they’re struggling to navigate these waters.
So, what can they do to keep themselves from being overtaken by swift, digital-first, tech-optimised vessels?
Enter Generative AI (Gen AI) – the wind in their sails, propelling legacy banks forward as they steer into a new era of customer engagement.
Understanding the gap in customer expectations
If legacy banks want to understand why challenger banks are often faring better in the eyes of customers, they need to recognise both what their competitors are getting right and what is holding the incumbents back.
Digital-first banks have raised customer expectations of what modern banking looks like. Their user-intuitive, friendly engagement models, optimised for online distribution and servicing, are giving users all the information they need quickly and efficiently. At the same time, 66% of legacy banks believe their systems are preventing them from providing the experience customers expect.
While understanding and comparing performance helps set ambition levels, the ‘how’ will unlock new competitive advantages.
Innovation from the inside out
This is where mastering Gen AI will create a competitive moat. When it comes to elevating customer-facing capabilities, the technology enables stronger data analysis (in combination with traditional analytics and machine learning), personalisation at scale, and automation across many services. Behind the scenes, it will empower legacy banks to adapt and innovate more nimbly, embracing the ‘challenger mindset’.
Some 44% of banks are already investing in AI tools as a strategic choice to create an AI-ready culture in their businesses. But it’s not as simple as plugging in a quick ‘AI fix’. Real change requires a thorough understanding of the technology, the most impactful use cases, and the organisational adaptations needed to support them.
That means educating teams and building AI proficiency across departments to maximise the technology’s benefits.
This is something Publicis Sapient has done while working with Deutsche Bank to lay the path for its use of Gen AI. It has helped the bank transform its business, starting with successfully laying the foundations in cloud and data before building AI components and launching impactful use cases to prove these components.
Crucially, it considered the role of people: customers, leadership, employees, culture and change management, realising value on both sides of the cost-to-income ratio.
This comprehensive implementation of AI across a ‘legacy’ bank’s operations improved its agility and aligned teams towards customer-centric innovation.
Proactive AI strategy for customer experience
One of the most promising applications of Gen AI sits at the intersection of customer insights and the personalisation of services.
The Global Banking Benchmark Study revealed that 42% of banks are focused on personalising customer journeys. New technologies will present even more opportunities to personalise at scale, with the help of data integration and AI. More sophisticated models will create more tailored experiences, delivering more value for both the customer and the bank, as customers are likely to engage more frequently and at the right moments.
A faster-paced, data-driven engagement model gives banks the chance to transform their slow-moving ships into agile, wave-skipping, and responsive organisations.
Giving a competitive advantage
Gen AI acts as both a powerful new compass and a turbo-engine, levelling the playing field with challengers and guiding legacy banks through rough waters. They can chart a course that not only keeps customers on board but also attracts new ones who expect smooth, standout digital experiences – and gain an edge over the competition.
Business
Building an impactful security training programme to handle the volume and sophistication of today’s AI-enabled cyberattacks
By Alexia Pedersen, SVP International of O’Reilly
Digital attacks underpinned by AI are quickly becoming one of the most pressing security issues worldwide, with the National Cyber Security Centre warning that the use of AI for malicious purposes will significantly shape the threat landscape as we know it. Whether it’s sophisticated phishing emails or deepfake videos, this technology is enabling relatively unskilled threat actors to carry out more effective access and information-gathering operations than ever before.
On top of this, O’Reilly’s research highlights that nearly a quarter (24%) of learning professionals within British tech companies say cybersecurity is the digital skill most lacking in their organisations. As such, the vast majority (88%) of companies plan to spend more than £25,000 in the next twelve months to fill crucial roles, with cybersecurity top of the priority list.
Ultimately, the dual crisis of AI-enabled threats and a widening skills gap is not one that companies can hire their way out of. So, how can organisations and their employees keep pace with the sophistication and volume of attacks? And will the EU AI Act help?
The evolving regulatory landscape
While the EU’s AI Act is a significant step forward in regulating AI to ensure its safe and ethical development, there is a long way to go before we can secure our digital future.
Today, the Act focuses on security, transparency, and accountability to mitigate the risks associated with AI. By imposing stringent security requirements on high-risk AI systems – like those used in critical infrastructure – the Act ensures these systems are designed to be accurate, robust, and secure against unauthorised access and manipulation. It also requires these systems to have robust cybersecurity measures in place, including regular security assessments, vulnerability management, and incident response plans.
Furthermore, the Act mandates transparency in the development and deployment of AI systems – providing clear information about the system’s capabilities, limitations, and potential risks. Meanwhile, companies developing and deploying high-risk AI systems will be held accountable for any harm caused by their systems. This creates a strong incentive for organisations to prioritise cybersecurity and ensure the security of their AI systems.
The AI Act also emphasises the importance of mitigating bias and discrimination in AI systems. This includes ensuring that AI systems are trained on diverse and representative data to avoid unfair outcomes. By promoting fairness and non-discrimination, the AI Act indirectly contributes to a more secure digital environment.
As the regulatory environment continues to evolve, organisations have a responsibility to educate their staff on the ever-evolving risks posed by AI-enabled cyberattacks. We recommend keeping the following key steps in mind for building an impactful, AI-related security training programme.
- Identify the key stakeholders that can drive the programme forward
Firstly, decide who should take charge. Ideally, leadership of your programme should be a collaborative effort between IT and those responsible for learning and development: IT specialists provide the technical expertise, ensuring the content is relevant and appropriately complex, while learning professionals contribute their knowledge of learning strategies, programme design, and evaluation to ensure effective delivery.
However, given the complexities of today’s threat landscape, it’s important that leadership is also involved to align the programme with the organisation’s strategic goals. Emerging roles like Digital Transformation Leaders and Chief AI Officers are becoming increasingly critical stakeholders, and involving them in this process will help support change management as a new initiative is rolled out.
- Align your unique organisational needs with your programme
The next key step is to assess your organisation’s current needs and skill gaps against future needs. By engaging with all stakeholders, from leadership to employees and IT specialists, organisations will gain a comprehensive understanding of their unique technology landscape. Focus on the relevance, variety, and flexibility of available high-quality learning content when rolling out a new skills programme. This approach will ensure the programme addresses current industry trends and incorporates your organisation’s professional IT certifications, while also anticipating future needs.
- Maximising impact with a blended learning approach
A blended learning approach is important. After all, your education programme must cater to a variety of learning styles and paces, so a combination of theoretical learning and hands-on practice is essential to give staff robust and thorough knowledge.
Your programme should therefore integrate a mix of learning channels including digital learning, webinars, workshops, and one-on-one mentorship. Self-paced e-learning modules, for example, will allow for flexibility while scheduled sessions offer real-time interaction. At the same time, workshops, mentoring, and on-the-job practice will offer more opportunities for experiential learning. Ultimately, a mix of content to suit different learning styles and abilities will make the training accessible, engaging and inclusive for all designated participants.
- Data and insights: Ways to measure success
Once the programme is up and running, continuous monitoring and evaluation of skill development will enable you to gauge its effectiveness and make refinements where needed. Success can be measured in various ways, a key one being regular technical assessments or certifications that verify skills are developing.
At the same time, you should conduct regular reviews to build a culture of learning, checking in with managers to assess progress and adapt as needed. Longer-term, you should also measure changes in performance metrics post-training, such as the reduction in IT-related errors or increased productivity in assigned tasks. In addition, build engagement plans and activities to maintain this momentum. This combination will allow you to improve the programme in real time and address your employees’ dynamic learning needs.
Looking ahead, business leaders need to put adequate investment behind training programmes that educate staff on the risks posed by AI-enabled cyberattacks. These should be driven by IT and learning professionals, as the combination of their indispensable expertise will maximise effectiveness.
Both stakeholders must spend time pinpointing a diverse range of employees to drive the training programme forward, as well as identifying their company’s unique operational needs so that training is tailored and highly relevant. The urgency is clear: in Q2 2024, Check Point Research reported a 30% year-on-year increase in cyberattacks globally, reaching over 1,600 attacks per organisation per week. As AI initiatives continue to expand, awareness and skills in cybersecurity will be essential.
Whether you are developing AI solutions in-house, purchasing third-party technology with embedded AI, or partnering with AI tools, it’s critical to have a plan in place and implement comprehensive security training across the organisation. Only when armed with this foundational knowledge will learning professionals and IT leaders be empowered to identify the most suitable L&D partner that can support their unique needs and objectives.