Technology
Ethical AI: Preparing Your Organisation for the Future of AI
Rosemary J Thomas, Senior Technical Researcher, AI Labs
Artificial intelligence is changing the world, generating countless new opportunities for organisations and individuals. Conversely, it also poses several known ethical and safety risks, such as bias, discrimination, and privacy violations, alongside its potential to negatively impact society, well-being, and nature. It is therefore fundamental that this groundbreaking technology is approached with an ethical mindset, adapting practices to make sure it is used in a responsible, trustworthy, and beneficial way.
To achieve this, first we need to understand what an ethical AI mindset is, why it needs to be central, and how we can establish ethical principles and direct behavioural changes across an organisation. We must then develop a plan to steer ethical AI from within and be prepared to take liability for the outcomes of any AI system.
What is an ethical AI mindset?
An ethical AI mindset is one that acknowledges the technology’s influence on people, society, and the world, and understands its potential consequences. It is based on the perception that AI is a dominant force that can sculpt the future of humankind. An ethical AI mindset ensures AI is allied with human principles and goals, and that it is used to support the common good and the ethical development of all.
It is not only about preventing or moderating the adverse effects of AI, but also about exploiting its immense capability and prospects. This includes developing and employing AI systems that are ethical, safe, fair, transparent, responsible, and inclusive, and that respect human values, autonomy, and diversity. It also means ensuring that AI is open, reasonably priced, and useful for everyone – especially the most susceptible and marginalised clusters in our society.
Why you need an ethical AI mindset
Functioning with an ethical AI mindset is essential[1]. Not only because it is the right thing to do, but also because it is expected, with research showing customers are far less likely to buy from unethical establishments. As AI evolves, the expectation for businesses to use it responsibly will continue to grow.
Adopting an ethical AI mindset can also help in adhering to current, and continuously developing, regulation and guidelines. Governing bodies around the world are establishing numerous frameworks and standards to make sure AI is used in an ethical and safe way and, by creating an ethical AI mindset, we can ensure AI systems meet these requirements, and prevent any prospective fines, penalties, or court cases.
Additionally, the right mindset will promote the development of AI systems that are more helpful, competent, and pioneering. By studying the ethical and social dimensions of AI, we can invent systems that are more aligned with the needs, choices, and principles of our customers and stakeholders, and can provide moral solutions and enhanced user experiences.
Ethical AI as the business differentiator
Fostering an ethical AI mindset is not a matter of singular choice or accountability, it is a united, organisational undertaking. To integrate an ethical culture and steer behavioural changes across the business, we need to take a universal and methodical approach.
It is important that the entire workforce, including executives and leadership, are educated on the need for AI ethics and its use as a business differentiator[2]. To achieve this, consider taking a mixed approach to increase awareness across the company, using mediums such as webinars, newsletters, podcasts, blogs, or social media. For example, your company website can be used to share significant examples, case studies, best practices, and lessons learned from around the globe where AI practices have effectively been implemented. In addition, guest sessions with researchers, consultants, or even collaborations with academic research institutions can help to communicate insights and guidance on AI ethics and showcase it as a business differentiator.
It is also essential to take responsibility for the consequences of any AI system that is developed for practical applications, regardless of where an organisation or product sits in the value chain. This will help build credibility and transparency with stakeholders, customers, and the public.
Evaluating ethics in AI
We cannot monitor or manage what we cannot review, which is why we must establish a method of evaluating ethics in AI. There are a number of tools and systems that can be used to steer ethical AI, which can be supported by ethical AI frameworks, authority structures, and the Ethics Canvas.
An ethical AI framework is a group of values and principles that acts as a handbook for your organisation’s use of AI. This can be adopted, adapted, or built to suit your organisation’s own goals and values, with the stakeholders involved in its creation. Examples include the UK Government’s Ethical AI Framework[3] and the Information Commissioner’s Office’s AI and data protection risk toolkit[4], which covers ethical risks across the AI lifecycle – from business requirements and design through to deployment and monitoring.
An ethical AI authority structure is a group of roles, obligations and methods that make sure your ethical AI framework is followed and reviewed. You can establish an ethical AI authority structure that covers several aspects and degrees of your organisation and delegates clear obligations to each stakeholder.
The Ethics Canvas can be used in AI engagements to help build AI systems with ethics integrated into development. It helps teams identify potential ethical issues that could arise from the use of AI and develop guidelines to avoid them. It also promotes transparency by providing clear explanations of how the technology works and how decisions are made and can further increase stakeholder engagement to gather input and feedback on the ethical aspects of the AI project. This canvas helps to structure risk assessment and can serve as a communication tool to convey the organisation’s commitment to ethical AI practices.
Ethical AI implications
Any innovation process, whether it involves AI or not, can be marred by a fear of failure and the desire to succeed on the first attempt. But failures should be regarded as lessons and used to improve ethical practice in AI.
To ensure AI is being used responsibly, we need to identify what ethics means in the context of our business operations. Once this has been established, we can personalise our message to the target stakeholders, staying within our own definition of ethics and including the use of AI within our organisation’s wider purpose, mission, and vision.
In doing so, we can draw more attention towards the need for responsible use policies and an ethical approach to AI, which will be increasingly important as the capabilities of AI evolve, and its prevalence within businesses continues to grow.
[1] https://www.mckinsey.com/featured-insights/in-the-balance/from-principles-to-practice-putting-ai-ethics-into-action
[2] https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1258721/full
[3] https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety
[4] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/
Business
Why Resilience Is Replacing Prevention as the Defining Cybersecurity Strategy
by Manuel Sanchez, Information Security and Compliance Specialist, iManage
For decades, cybersecurity centred on prevention. Build the right walls around your perimeter, deploy the right tools, train your people not to click the wrong links, and you could keep the bad actors out.
Today, the question driving security strategy is no longer “how do we stop a breach?” but “how do we survive one?” It is a subtle but profound shift in philosophy, and it is reshaping everything from how IT and Security leaders structure their teams to how they select their vendors and deploy AI.
Rehearsing for the worst
The practical expression of this shift is visible in how security teams are being restructured. Organisations are establishing dedicated disaster recovery teams – not to prevent incidents, but to contain and recover from them when they occur. These teams maintain detailed, regularly updated playbooks covering everything from backup restoration to stakeholder communications, with roles pre-assigned and procedures rehearsed well in advance.
In many ways, this mirrors the logic behind disaster drills: fire alarms matter, but knowing the evacuation routes and the post-incident recovery plan determines how well an organisation survives. Critically, responsibility cannot rest with the CISO alone. Business continuity after a cyber incident is a whole-company challenge – which means every core part of the organisation is involved to sustain critical business operations.
Governance in the gray areas
Running alongside this shift is a governance crisis that is easy to underestimate until it becomes a serious risk. As organisations adopt more applications across more vendors and hosting services, the shared responsibility model that was supposed to keep cloud accountability clear has become increasingly difficult to enforce.
The sheer volume of cloud applications in use at any given enterprise is too vast for consistent governance under current approaches – and bad actors have become skilled at identifying exactly where vendor responsibility ends and customer accountability begins, then operating precisely in that “gray area”. Being aware of this risk and putting preventative measures in place is important, but recognising the role these cloud applications play – and the impact on key business operations if they were compromised – is critical.
Meanwhile, data volumes continue to grow exponentially, and unstructured data continues to accumulate in the background across many digital systems. Why is this important? If you don’t know what data you have, where it is stored, who has access to it, and, most importantly, how it is protected – whether by onsite or cloud backup – recovery becomes far harder.
AI agents on the rise – and with them, new risks
Although the focus of this article is on resilience, prevention must still remain an essential part of your defences. On that front, the accelerating adoption of autonomous AI in cyber defence tasks is reshaping security operations as visibly as anything else happening in the field right now. The volume, speed, and sophistication of modern threats have simply outpaced what human analysts can manage in real time.
The shift is toward AI that doesn’t just flag anomalies for human review, but actively detects, analyses, and neutralises threats as they emerge, even using predictive models to anticipate attacks before they fully materialise. This frees human experts to focus on strategic decisions and complex defence work rather than spending their days firefighting.
Autonomous AI does, however, introduce risks of its own. When AI agents operate across systems – accessing sensitive repositories, triggering actions, sharing data – they expand the attack surface in ways that aren’t always immediately visible.
Managing the digital identities of AI agents, much like managing employee access credentials, is becoming a critical security discipline. Accordingly, comprehensive traceability frameworks that log every action an agent takes are no longer optional; they are the foundation of responsible AI deployment in any security context.
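To make the traceability idea above concrete, here is a minimal sketch of an append-only audit trail for agent actions. All names (`AgentAuditLog`, the agent IDs, the field set) are illustrative assumptions, not any specific product’s API; a real deployment would write to tamper-evident storage and feed a SIEM rather than an in-memory list.

```python
import json
import time
import uuid


class AgentAuditLog:
    """Minimal append-only audit trail for AI agent actions (illustrative sketch)."""

    def __init__(self):
        self.entries = []

    def record(self, agent_id, action, resource, outcome):
        # Each entry is timestamped and given a unique ID so actions
        # can be traced and correlated after the fact.
        entry = {
            "id": str(uuid.uuid4()),
            "timestamp": time.time(),
            "agent_id": agent_id,
            "action": action,
            "resource": resource,
            "outcome": outcome,
        }
        self.entries.append(entry)
        return entry

    def actions_by(self, agent_id):
        # The core traceability question: "what did this agent touch?"
        return [e for e in self.entries if e["agent_id"] == agent_id]

    def export(self):
        # Serialise for long-term retention or downstream ingestion.
        return json.dumps(self.entries)


log = AgentAuditLog()
log.record("triage-bot-01", "read", "incident-queue", "ok")
log.record("triage-bot-01", "update", "ticket-4821", "ok")
log.record("summary-bot-02", "read", "knowledge-base", "ok")
print(len(log.actions_by("triage-bot-01")))  # 2
```

The point of the design is that every action is recorded at the moment it happens, keyed to the agent’s identity – exactly the discipline applied to employee access credentials.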
The supply chain wake-up call
The case for moving from a “prevention” mindset to a “resilience” one is further bolstered by recent high-profile breaches via compromised managed service providers, which have forced a fundamental reset in how organisations evaluate their vendors.
The era of cost-first selection is over. Security credentials, demonstrated through continuous and verifiable evidence, are now non-negotiable for any provider hoping to retain enterprise clients – and what organisations are demanding goes well beyond point-in-time audits. They want real-time visibility into every third-party integration, every software update, and every vendor interaction – including the cloud services the vendors themselves use.
“Trust but verify” has become the operational standard, and providers who cannot demonstrate validated controls and live monitoring are finding themselves out of contention. It is a structural shift that will reshape the vendor landscape considerably — and it is already underway.
A new era demands a new approach
In the end, prevention still matters, but resilience – instilled via the key focus areas above – is what turns disruptions into survivable events rather than existential crises. The organisations that are honest about the limits of prevention and embrace the shift towards resilience won’t just better withstand the next wave of attacks – they’ll be differentiating themselves from competitors still clinging to yesterday’s playbook.
Business
Adapting compliance in a fragmented regulatory world
Rasha Abdel Jalil, Director of Financial Crime & Compliance at Eastnets, discusses the operational and strategic shifts needed to stay ahead of regulatory compliance in 2025 and beyond.
As we move through 2025, financial institutions face an unprecedented wave of regulatory change. From the EU’s Digital Operational Resilience Act (DORA) to the UK’s Basel 3.1 rollout and upcoming PSD3, the volume and velocity of new requirements are constantly reshaping how banks operate.
But it’s not just the sheer number of regulations that’s creating pressure. It’s the fragmentation and unpredictability. Jurisdictions are moving at different speeds, with overlapping deadlines and shifting expectations. Regulators are tightening controls, accelerating timelines and increasing penalties for non-compliance. And for financial compliance teams, it means navigating a landscape where the goalposts are constantly shifting.
Financial institutions must now strike a delicate balance: staying agile enough to respond to rapid regulatory shifts, while making sure their compliance frameworks are robust, scalable and future-ready.
The new regulatory compliance reality
By October of this year, financial institutions will have to navigate a dense cluster of regulatory compliance deadlines, each with its own scope, jurisdictional nuance and operational impact. From updated Common Reporting Standard (CRS) obligations, which apply to over 100 countries around the world, to Australia’s new Prudential Standard (CPS) 230 on operational risk, the scope of change is both global and granular.
Layered on top are sweeping EU regulations like the AI Act and the Instant Payments Regulation, the latter coming into force in October. These frameworks introduce new rules and redefine how institutions must manage data, risk and operational resilience, forcing financial compliance teams to juggle multiple reporting and governance requirements. A notable development is Verification of Payee (VOP), which adds a crucial layer of fraud protection for instant payments. This directly aligns with the regulator’s focus on instant payment security and compliance.
The result is a compliance environment that’s increasingly fragmented and unforgiving. In fact, 75% of compliance decision makers in Europe’s financial services sector agree that regulatory demands on their compliance teams have significantly increased over the past year. To put it simply, many are struggling to keep pace with regulatory change.
But why is it so difficult for teams to adapt?
The answer lies in a perfect storm of structural and operational challenges. In many organisations, compliance data is trapped in silos spread across departments, jurisdictions and legacy platforms. Traditional approaches – built around periodic reviews, static controls and manual processes – are no longer fit for purpose. Yet despite mounting pressure, many teams face internal resistance to changing established ways of working, which further slows progress and reinforces outdated models. Meanwhile, the pace of regulatory change continues to accelerate, customer expectations are rising and geopolitical uncertainty adds further complexity.
At the same time, institutions are facing a growing compliance talent gap. As regulatory expectations become more complex, the skills required to manage them are evolving. Yet many firms are struggling to find and retain professionals with the right mix of legal, technical and operational expertise. Experienced professionals are retiring en masse, while nearly half of new entrants lack the experience needed to step into these roles effectively. And as AI tools become more central to investigative and decision-making processes, the need for technical fluency within compliance teams is growing faster than organisations can upskill. This shortage is leaving compliance teams overstretched, under-resourced and increasingly reliant on outdated tools and processes.
In this changing environment, then, the question becomes: how can institutions adapt?
Staying compliant in a shifting landscape
The pressure to adapt is real, but so is the opportunity. Institutions that reframe compliance as a proactive, technology-driven capability can build a more resilient and responsive foundation that’s now essential to staying ahead of regulatory change.
This begins with real-time visibility. As regulatory timelines change and expectations rise, institutions need systems that can surface compliance risks as they emerge, not weeks or months later. This means adopting tools that provide continuous monitoring, automated alerts and dynamic reporting.
But visibility alone isn’t enough. To act on insights effectively, institutions also need interoperability – the ability to unify data from across departments, jurisdictions and platforms. A modern compliance architecture must consolidate inputs from siloed systems into a unified case manager to support cross-regulatory reporting and governance. This not only improves accuracy and efficiency but also allows for faster, more coordinated responses to regulatory change.
To manage growing complexity at scale, many institutions are now turning to AI-powered compliance tools. Traditional rules-based systems often struggle to distinguish between suspicious and benign activity, leading to high false positive rates and operational inefficiencies. AI, by contrast, can learn from historical data to detect subtle anomalies, adapt to evolving fraud tactics and prioritise high-risk alerts with greater precision.
When layered with alert triage capabilities, AI can intelligently suppress low-value alerts and false positives, freeing up human investigators to focus on genuinely suspicious activity. At the more advanced stages, deep learning models can detect behavioural changes and suspicious network clusters, providing a multi-dimensional view of risk that static systems simply can’t match.
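The triage logic described above can be sketched very simply: each alert carries a model-produced risk score, low-value alerts fall below a suppression threshold, and the rest are queued for investigators highest-risk first. Everything here – the alert types, scores, and threshold – is a hypothetical illustration, not a description of any vendor’s system; in practice the scores would come from a trained model and the threshold would be tuned against false-negative risk.

```python
# Hypothetical alert records; "score" stands in for a model-produced risk score.
alerts = [
    {"id": 1, "type": "login_anomaly", "score": 0.92},
    {"id": 2, "type": "duplicate_invoice", "score": 0.15},
    {"id": 3, "type": "sanctions_near_match", "score": 0.78},
    {"id": 4, "type": "velocity_check", "score": 0.05},
]

SUPPRESS_BELOW = 0.30  # illustrative threshold for auto-suppression


def triage(alerts, threshold=SUPPRESS_BELOW):
    """Split alerts into a prioritised investigator queue and a suppressed pile."""
    queue = sorted(
        (a for a in alerts if a["score"] >= threshold),
        key=lambda a: a["score"],
        reverse=True,  # highest-risk first
    )
    suppressed = [a for a in alerts if a["score"] < threshold]
    return queue, suppressed


queue, suppressed = triage(alerts)
print([a["id"] for a in queue])       # [1, 3]
print([a["id"] for a in suppressed])  # [2, 4]
```

Even this toy version shows the operational payoff: investigators see two alerts instead of four, ordered by risk, while the suppressed alerts remain available for audit.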
Of course, transparency and explainability in AI models are crucial. With regulations like the EU AI Act mandating interpretability in AI-driven decisions, institutions must make sure that every alert or action taken by an AI system is auditable and understandable. This includes clear justifications, visual tools such as link analysis, and detailed logs that support human oversight.
Alongside AI, automation continues to play a key role in modern compliance strategies. Automated sanction screening tools and watchlist screening, for example, help institutions maintain consistency and accuracy across jurisdictions, especially as global lists evolve in response to geopolitical events.
Similarly, customisable regulatory reporting tools, powered by automation, allow compliance teams to adapt to shifting requirements under various frameworks. One example is the upcoming enforcement of ISO 20022, which introduces a global standard for payment messaging. Its structured data format demands upgraded systems and more precise compliance screening, making automation and data interoperability more critical than ever.
This is particularly important in light of the ongoing talent shortages across the sector. With newer entrants still building the necessary expertise, automation and AI can help bridge the gap and allow teams to focus on complex tasks instead.
The future of compliance
As the regulatory compliance landscape becomes more fragmented, compliance can no longer be treated as a tick-box exercise. It must evolve into a dynamic, intelligence-led capability, one that allows institutions to respond to change, manage risk proactively and operate with confidence across jurisdictions.
To achieve this, institutions must rethink how compliance is structured, resourced and embedded into the fabric of financial operations. Those that do, and use the right tools in the process, will be better positioned to meet the demands of regulators today and in the future.
Business
Cultivating an Intuitive and Effective Security Culture
John Trest, Chief Learning Officer, VIPRE Security Group, explains how businesses can cultivate a security culture by overcoming security training barriers.
Research shows that human behaviour remains the leading driver of data breaches – whether through stolen credentials, phishing attacks, misuse, or simple, inadvertent mistakes by well-meaning individuals. Under pressure, employees become susceptible to manipulation, and when confronted with the complexity of day-to-day work, human vulnerability becomes evident – something bad actors actively look for and exploit.
Cybersecurity culture
According to behavioural science, employees’ behaviour in the workplace is greatly influenced by the organisation’s existing culture. Whether it’s the successful implementation of technical controls, the likelihood of individuals reporting security incidents, or instances of accidental or malicious insider activity – they are all intricately linked to the cybersecurity culture.
Cybersecurity awareness is the first step to strengthening the human firewall
Good cybersecurity awareness training helps to embed a cybersecurity-conscious culture and security-first attitude in the workplace. Employees and organisations can establish stronger protective measures by enhancing cybersecurity consciousness. Rather than being seen as the “weakest link”, the human should be regarded as the critical defensive barrier for organisations.
With organisations facing increasing risks from social engineering attacks that manipulate behaviour and exploit human error, cybersecurity awareness and training equip employees with the ability to protect digital data from unauthorised access, and respond effectively to threats, countering intentional and unintentional security compromises.
Barriers to effective cybersecurity awareness training
Some key barriers typically impede the successful delivery of cybersecurity awareness training programs, jeopardising organisations’ security posture.
Poor employee engagement – When employees see training as boring or disconnected from their work, engagement suffers. Many security awareness programs compound this issue through complexity, excessive length, lack of relevant scenarios and imagery, and poor accessibility, creating barriers to participation and knowledge retention.
Lack of knowledge retention – Studies demonstrate that significant portions of newly learned information fade from memory rapidly, particularly when cybersecurity training occurs only annually. Such large breaks in training frequency create dangerous knowledge gaps that expose organisations to various security vulnerabilities.
Poor motivation – Cybersecurity training must inform and inspire employees to become active security defenders. Explaining the “why” effectively helps drive behavioural change through extrinsic motivation. However, addressing the “why me” question is crucial for developing more compelling intrinsic motivation. This personal context helps employees understand not just cybersecurity’s general importance, but its specific relevance to their workplace roles, themselves, and their loved ones. Intrinsic motivation is essential for lasting behavioural change and cultivating a truly security-conscious organisational culture. When employees personally connect with security practices, they transform from passive rule-followers to engaged protectors of company assets.
Content obsolescence – The dynamic evolution of security threats challenges cybersecurity awareness efforts, as today’s effective training may prove inadequate against tomorrow’s threats. When content becomes outdated, employees remain vulnerable to new attack techniques. Organisations must embrace continuous learning by implementing dynamic training programs that integrate seamlessly into employee workflows, incorporating emerging threats. By maintaining current, relevant training materials, organisations can ensure employees remain prepared to recognise and respond to evolving cybersecurity threats, ultimately preserving a robust security posture.
Undue focus on regulatory compliance – While regulatory compliance matters, it shouldn’t be the primary metric for cybersecurity awareness and training. Instead, programs should be evaluated by quantifiable improvements: reduced phishing clicks, increased reporting rates, fewer intrusions and breaches, decreased damage, and lower overall cyber risk.
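Those quantifiable improvements are straightforward to compute once the raw numbers exist. The sketch below derives two of them – phishing click rate and reporting rate – from phishing-simulation results. The function name and the example figures are purely illustrative assumptions, not an official benchmark or standard.

```python
def awareness_metrics(sent, clicked, reported):
    """Compute phishing-simulation outcome rates (illustrative sketch)."""
    if sent == 0:
        raise ValueError("no simulation emails sent")
    return {
        "click_rate": clicked / sent,    # lower is better over time
        "report_rate": reported / sent,  # higher is better over time
    }


# Hypothetical quarter: 200 simulated phishing emails, 18 clicks, 74 reports.
m = awareness_metrics(sent=200, clicked=18, reported=74)
print(f"click rate: {m['click_rate']:.1%}")    # 9.0%
print(f"report rate: {m['report_rate']:.1%}")  # 37.0%
```

Tracked quarter over quarter, a falling click rate and a rising report rate are far more telling measures of a training program than a compliance checkbox.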
Overcoming security awareness and training barriers
Adopting a more positive approach to security awareness – viewing employees as assets who contribute to a cybersafe workplace – must be the goal. This fosters a culture in which employees feel more confident in their own actions when handling potential threats.
The cornerstone of engaging security training is twofold: convenience and relevance. When employees can easily access content that directly applies to their roles, they’re naturally more inclined to participate fully and retain critical information. This approach transforms security awareness from an obligatory task into a valuable, integrated part of the workday.
Some thoughts to help overcome the security awareness and training barriers:
Regular reinforcement and knowledge retention focus – To address retention challenges, implement training solutions featuring current, applicable, and engaging content. Incorporate evidence-based learning techniques, including interactive elements, straightforward messaging, and real-life scenarios, to enhance retention of information and best practices.
Critical to long-term knowledge retention is the adoption of microlearning approaches. These methodologies divide security education into brief, compelling modules delivered frequently throughout the year. This short-form content helps maintain employees’ focus and attention. By reinforcing key concepts shortly after initial exposure, microlearning creates multiple touchpoints that combat natural memory decay. In doing so, organisations transform cybersecurity awareness from an annual chore into an ongoing, sustainable practice that strengthens organisational security posture.
Gamification – Gamification can be a very useful tool in motivating learners to pay attention to and engage with a learning experience. Given how the human brain is stimulated by rewards in games, the knowledge gained from this type of learning experience is retained for longer.
Though even simple gamification, such as points or leaderboards, can have a positive impact on learners, the best way to integrate gamification elements into a learning experience is for them to be interwoven and inherent to the content. For instance, in a gamified cybersecurity scenario, players could assume the role of a White Hat hacker tasked with crafting convincing scam emails to fool unsuspecting staff. Players learn how cybercriminals operate and how to protect themselves by spending time in a hacker’s shoes. And the narrative built around the mechanics of the game makes the interactivities more relevant and compelling.
Role-specific training – Far too often, a broad-brush approach to cybersecurity training is used, making it less relevant for some staff. Targeted training designed for different workplace roles is more effective. For example, a company’s risk and compliance team needs cyber training that takes into account the demands of regulatory bodies, finance teams need to know about business email compromise, security teams must be trained on advances in threat detection, and end users must understand how to spot a phishing email or deepfake. Training tailored specifically for business leaders is equally important.
Quality training – The quality of the training experience can make all the difference. Security awareness training is a specialist discipline in its own right, drawing on adult learning trends, technology, and best practices. Specialist security trainers and instructional designers know how to get employees to engage with the program, based on an appreciation of employees’ intrinsic and extrinsic motivations, alongside their role-specific requirements.
Cybersecurity culture refers to the collective mindset and behaviours of an organisation’s employees toward protecting information assets. It involves integrating security practices into daily activities, fostering awareness and vigilance, and encouraging proactive reporting of incidents. It also reflects the unspoken beliefs towards security in the organisation. A strong cybersecurity culture is important to help reduce risks by making security a shared responsibility.