
Why Ransomware Readiness in the Finance Sector is Critical

Source: Finance Derivative

By Piers Wilson, Head of Product Management at Huntsman Security

Ransomware attacks have been making headlines recently. From AXA to CNA Financial, no part of the finance sector is impervious to the risks. For many organisations, initial worries focus on the logistics and cost of a ransom; however, the wider damage and costs increasingly relate to rectification, revenue loss and reputational damage. Attacks such as the Kaseya case have also shown the increasing risks that “trusted” service providers and third-party supply chain participants can bring – multiplier effects that can quickly impact one million endpoints, with a ransom set at US$70m.

The network effect in the financial services sector benefits all stakeholders – from institutions to consumers. The increase in shared data and services, however, compounds the risks of successful cyber attacks. And, as we have seen with the impact of ransomware on pipelines and even food processors, the impact on organisations, and individuals, of being locked out of systems is huge. If customers cannot access funds or transact with service providers across the supply chain, anxiety and costs can escalate and commercial reputations can quickly be trashed.

An easy way out?

Businesses might once have seen the payment of a ransom as a potential ‘quick fix’ to the problem of ransomware attacks. This option, however, is now likely to become a thing of the past, as bans on ransom payments are being contemplated in France and, in the US, by the SEC and OFAC. In Australia, there are calls for mandatory notification of ransom payments by ransomware victims.

Finance sector organisations also need to consider that even when ransoms are paid, decryption and the return to business as usual can be so slow that reinstating operations from their own internal backups and security safeguards would take no longer. As the scale of attacks and the disruption suffered by victims of supply chain ransomware attacks escalate, the message is increasingly that time is of the essence. If you can’t trust the decryption key from an attacker, you are best advised to invest your time and effort in reconstructing, reconfiguring and securing your IT systems and services from the ground up, so as to be confident in their integrity.

Despite the possibility that the payment of ransoms will become unlawful, cyber insurance will remain an effective tool for organisations to fund the process of getting back up and running quickly and reducing disruption. Insurers now demand that, before issuing a cyber policy, organisations show evidence of adequate cyber security controls. In fact, growing ransomware threats make it likely that insurance premiums will rise even further, so getting verifiable cyber risk management capabilities in place is likely to move even further up the list of board priorities.

A challenging environment

The financial sector also faces some more particular challenges. Many financial institutions hold vast amounts of personal data, whether on accounts, transactions, users or reports. Complicating this is open banking legislation, like PSD2 in the UK/EU and the CDR in Australia, which requires that customer-approved sharing of personal data be easy and accessible. These rights for consumers to have their personal information held and transmitted between financial sector participants will necessarily redistribute responsibilities for cyber security across the sector and, as a result, increase levels of cyber security risk during this period of adjustment to a changing environment.

The financial services sector is already – and indeed, always has been – an attractive target for criminals at all levels. The requirement that customers have greater control over access to their data adds the requirement for a whole new level of ransomware readiness. Organisations could face anything from disgruntled employees, to fraud, to criminal ransomware attacks seeking to enable the wholesale theft of personal data. The stakes couldn’t be higher; so what can the sector do to protect itself?

Preparing for ransomware attacks

Putting in place anti-virus software and network defences – alongside the rise of endpoint detection and response – can certainly help manage attacks. But these solutions rely on detecting malicious activity in the first place. What if your endpoint or network solution misses the attack without warning? Do you have visibility into what’s happening? Are there other controls in place that can mitigate the threat? Are they monitored and managed as part of an IT risk management program?

More attention must be given to preventing, or at least limiting, successful ransomware attacks before they do serious damage. Getting the basic cyber security controls in place and working to protect recognised threat vectors really pays dividends, as these are precisely the weaknesses that ransomware attackers are likely to exploit.

There are three areas to focus on. The first two are the prevention of any initial infection and containment or limitation of the spread if one does occur. These strategies need to be coupled to a third, recovery, which ensures systems and data can be restored and an incident can be successfully managed. The core principles of effective risk management apply – identify and triage the risks and manage them accordingly.

There are some key safeguards organisations can adopt to support each of these elements:

Prevention

  • Application control – ensuring only approved software can run on a computer system, securing systems by limiting what they can execute (a minimal audit sketch follows this list).
  • Application patching – applications must be regularly updated to prevent intruders using known vulnerabilities in software.
  • Macro security – checking that macro and document settings are correctly configured to prevent the activation of malicious code.
  • Harden user applications and browsers – use effective security policies to limit user access to active content and web code.
  • Firewalls/perimeter – and even physical on-site security – limit outbound user access and inbound remote connections.
  • Staff awareness – while not a technical control, building a “cyber culture” in which staff better understand cyber security, the threats involved and the mitigation strategies that can minimise cyber attacks is vital.
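
To make the application control point concrete, below is a minimal audit sketch in Python. It assumes the third-party psutil package and a hypothetical allowlist.txt of approved executable paths; real application control is enforced through operating system mechanisms, so a script like this serves only as a monitoring aid.

```python
# Minimal audit sketch: compare running processes against an application
# allowlist. Assumes the third-party psutil package and a hypothetical
# allowlist.txt with one approved executable path per line. Real
# application control is enforced by OS mechanisms; this only reports.
import psutil

def load_allowlist(path: str = "allowlist.txt") -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def audit_processes(allowlist: set[str]) -> list[tuple[int, str, str]]:
    violations = []
    for proc in psutil.process_iter(["pid", "name", "exe"]):
        exe = (proc.info["exe"] or "").lower()
        if exe and exe not in allowlist:
            violations.append((proc.info["pid"], proc.info["name"], exe))
    return violations

if __name__ == "__main__":
    for pid, name, exe in audit_processes(load_allowlist()):
        print(f"Unapproved process {name} (pid {pid}): {exe}")
```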

Containment

  • Restrict administrative privileges – grant admin privileges only to the staff who need them, and then solely for specified purposes and under controlled access.
  • Operating system patching – fully patched operating systems will significantly reduce the likelihood of malware or ransomware spreading across the network from system to system.
  • Multi-factor authentication – used to manage user access to highly sensitive accounts and systems, including remote users (a minimal TOTP sketch follows this list).
  • Endpoint protection – install anti-virus software and keep it updated.
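
As a rough illustration of the multi-factor authentication point, here is a minimal sketch of time-based one-time-password (TOTP) verification, the mechanism behind many authenticator apps. It assumes the third-party pyotp package, and the account and issuer names are illustrative; in production this logic lives in your identity provider, not in application code.

```python
# Minimal sketch of time-based one-time passwords (TOTP), the mechanism
# behind many authenticator apps. Assumes the third-party pyotp package;
# the account and issuer names are illustrative.
import pyotp

# Enrolment: generate a per-user secret once and store it server-side;
# the provisioning URI is shown to the user as a QR code.
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)
print(totp.provisioning_uri(name="jane@example.com", issuer_name="ExampleBank"))

# Login: verify the six-digit code the user types in. totp.now() stands
# in here for the code from the user's authenticator app.
code = totp.now()
print("Code accepted:", totp.verify(code))   # True within the time window
```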

Recovery

  • Regular backups – secure data and system backups off-site and test your recovery processes (see the verification sketch after this list).
  • Incident response – in planning for a worst-case scenario, make sure everyone is well versed in the incident management playbook.
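
To illustrate the backup guidance, a minimal sketch follows that creates a compressed archive and records a SHA-256 digest so a later restore can be verified. The paths are illustrative; a real regime would also move the archive off-site and rehearse full restores.

```python
# Minimal sketch: create a compressed backup and record a SHA-256 digest
# so a later restore can be checked for corruption or tampering. Paths
# are illustrative; real backups should also be replicated off-site.
import hashlib
import tarfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def back_up(source: str, archive: str) -> str:
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(source, arcname=Path(source).name)
    checksum = sha256_of(Path(archive))
    Path(archive + ".sha256").write_text(checksum)
    return checksum

if __name__ == "__main__":
    print("Backup digest:", back_up("customer-data", "customer-data.tar.gz"))
```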

Gaining assurance in controls

Businesses must make sure they are monitoring their security controls to ensure that they are working effectively. If a control is ineffective, IT teams need to know quickly so they can mitigate any shortcomings and reinstate an adequate cyber posture. A “cyber security culture” that makes these risks a board-level issue will improve overall corporate ransomware preparedness.

The board should receive reports that provide clear visibility of these controls, and use them as KPIs in the cyber security risk management process and as part of a continuous cyber security improvement program. Being able to monitor readiness and assess the risk of attack provides both an early warning defence and confirmation that cyber security risk management processes are in hand.
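
As a sketch of what such monitoring might look like in practice, the snippet below rolls individual control checks up into a simple pass/fail summary that a board report could draw on. The check functions are placeholders, assumptions standing in for real telemetry such as patch feeds, anti-virus consoles or backup job logs.

```python
# Minimal sketch: roll individual control checks up into a pass/fail KPI
# summary. Each check function is a placeholder standing in for real
# telemetry (patch management feeds, AV console APIs, backup job logs).
from datetime import datetime, timezone

def os_patching_current() -> bool:
    return True        # placeholder: query your patch management system

def av_signatures_fresh() -> bool:
    return True        # placeholder: query your endpoint protection console

def backups_recently_verified() -> bool:
    return False       # placeholder: inspect backup job and restore-test logs

CONTROLS = {
    "Operating system patching": os_patching_current,
    "Endpoint protection": av_signatures_fresh,
    "Regular backups": backups_recently_verified,
}

def kpi_report() -> None:
    results = {name: check() for name, check in CONTROLS.items()}
    effective = sum(results.values())
    print(f"{datetime.now(timezone.utc):%Y-%m-%d}: "
          f"{effective}/{len(results)} controls effective")
    for name, ok in results.items():
        print(f"  [{'OK' if ok else 'ACTION REQUIRED'}] {name}")

kpi_report()
```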

Summary

The financial services sector faces many challenges when it comes to putting in place comprehensive cyber security risk management practices. If a bank or insurer were hit by a major ransomware attack, the wider implications for the economy could be significant. Recent fuel shortages resulting from the Colonial Pipeline incident gave us a glimpse of the widespread public panic and concern that can result. It was reminiscent of the run on Northern Rock bank branches in the UK in 2007, at the start of the financial crisis. It doesn’t take much to imagine the level of public panic that would ensue if a massive ransomware attack locked consumers out of their funds.

Organisations in the sector must have comprehensive cyber defences and controls, backed up by regular monitoring to make sure they are working effectively, and ensure that if one control fails to identify or prevent an attack, other complementary controls are operational and able to limit its impact.

That way the risk of a successful attack can be minimised, and organisations can maintain effective IT governance to better prevent costly disruption to their systems, operations and reputations.


Ethical AI: Preparing Your Organisation for the Future of AI

Rosemary J Thomas, Senior Technical Researcher, AI Labs Version 1

Artificial intelligence is changing the world, generating countless new opportunities for organisations and individuals. Conversely, it also poses several known ethical and safety risks – such as bias, discrimination and privacy violations – alongside its potential to negatively impact society, well-being, and nature. It is therefore fundamental that this groundbreaking technology is approached with an ethical mindset, adapting practices to make sure it is used in a responsible, trustworthy, and beneficial way.

To achieve this, first we need to understand what an ethical AI mindset is, why it needs to be central, and how we can establish ethical principles and direct behavioural changes across an organisation. We must then develop a plan to steer ethical AI from within and be prepared to take liability for the outcomes of any AI system.

What is an ethical AI mindset?

An ethical AI mindset is one that acknowledges the technology’s influence on people, society, and the world, and understands its potential consequences. It is based on the perception that AI is a dominant force that can sculpt the future of humankind. An ethical AI mindset ensures AI is allied with human principles and goals, and that it is used to support the common good and the ethical development of all.

It is not only about preventing or moderating the adverse effects of AI, but also about exploiting its immense capability and prospects. This includes developing and employing AI systems that are ethical, safe, fair, transparent, responsible, and inclusive, and that respect human values, autonomy, and diversity. It also means ensuring that AI is open, reasonably priced, and useful for everyone – especially the most susceptible and marginalised clusters in our society.

Why you need an ethical AI mindset

Functioning with an ethical AI mindset is essential[1] – not only because it is the right thing to do, but also because it is expected, with research showing customers are far less likely to buy from unethical establishments. As AI evolves, the expectation for businesses to use it responsibly will continue to grow.

Adopting an ethical AI mindset can also help in adhering to current, and continuously developing, regulation and guidelines. Governing bodies around the world are establishing numerous frameworks and standards to make sure AI is used in an ethical and safe way and, by creating an ethical AI mindset, we can ensure AI systems meet these requirements, and prevent any prospective fines, penalties, or court cases.

Additionally, the right mindset will promote the development of AI systems that are more helpful, competent, and pioneering. By studying the ethical and social dimensions of AI, we can invent systems that are more aligned with the needs, choices, and principles of our customers and stakeholders, and can provide moral solutions and enhanced user experiences.

Ethical AI as the business differentiator

Fostering an ethical AI mindset is not a matter of singular choice or accountability; it is a united, organisational undertaking. To integrate an ethical culture and steer behavioural changes across the business, we need to take a universal and methodical approach.

It is important that the entire workforce, including executives and leadership, are educated on the need for AI ethics and its use as a business differentiator[2]. To achieve this, consider taking a mixed approach to increase awareness across the company, using mediums such as webinars, newsletters, podcasts, blogs, or social media. For example, your company website can be used to share significant examples, case studies, best practices, and lessons learned from around the globe where AI practices have effectively been implemented. In addition, guest sessions with researchers, consultants, or even collaborations with academic research institutions can help to communicate insights and guidance on AI ethics and showcase it as a business differentiator.

It is also essential to take responsibility for the consequences of any AI system that is developed for practical applications, regardless of where an organisation or product sits in the value chain. This will help build credibility and transparency with stakeholders, customers, and the public.

Evaluating ethics in AI

We cannot monitor or manage what we cannot review, which is why we must establish a method of evaluating ethics in AI. There are a number of tools and systems that can be used to steer ethical AI, supported by ethical AI frameworks, authority structures and the Ethics Canvas.

An ethical AI framework is a group of values and principles that acts as a handbook for your organisation’s use of AI. It can be adopted, adapted, or built to suit your organisation’s own goals and values, with stakeholders involved in its creation. Examples include the UK Government’s ethical AI framework[3] and the Information Commissioner’s Office’s AI and data protection risk toolkit[4], which covers ethical risks across the lifecycle of AI systems – from business requirements and design to deployment and monitoring.

An ethical AI authority structure is a group of roles, obligations and methods that make sure your ethical AI framework is followed and reviewed. You can establish an ethical AI authority structure that covers several aspects and degrees of your organisation and delegates clear obligations to each stakeholder.

The Ethics Canvas can be used in AI engagements to help build AI systems with ethics integrated into development. It helps teams identify potential ethical issues that could arise from the use of AI and develop guidelines to avoid them. It also promotes transparency by providing clear explanations of how the technology works and how decisions are made and can further increase stakeholder engagement to gather input and feedback on the ethical aspects of the AI project. This canvas helps to structure risk assessment and can serve as a communication tool to convey the organisation’s commitment to ethical AI practices.

Ethical AI implications

Any innovation process, whether it involves AI or not, can be marred by a fear of failure and the desire to succeed at the first attempt. But failures should be regarded as lessons and used to improve ethical experiences in AI.

To ensure AI is being used responsibly, we need to identify what ethics means in the context of our business operations. Once this has been established, we can personalise our message to the target stakeholders, staying within our own definition of ethics and including the use of AI within our organisation’s wider purpose, mission, and vision.

In doing so, we can draw more attention towards the need for responsible use policies and an ethical approach to AI, which will be increasingly important as the capabilities of AI evolve, and its prevalence within businesses continues to grow.


[1] https://www.mckinsey.com/featured-insights/in-the-balance/from-principles-to-practice-putting-ai-ethics-into-action

[2] https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1258721/full

[3] https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety

[4] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/


Driving Business Transformation Through AI Adoption – A Roadmap for 2024

Author: Edward Funnekotter, Chief Architect and AI Officer at Solace

From the development of new products and services to the establishment of competitive advantages, artificial intelligence (AI) can fundamentally reshape business operations across industries. However, each organisation is unique, and navigating the complexities of AI while applying the technology efficiently and effectively can be a challenge.

To unlock the transformational potential of AI in 2024 and integrate it into business operations in a seamless and productive way, organisations should seek to follow these five essential steps:

  • Prioritise Data Quality and Quantity

The usefulness of AI models is directly correlated with the quantity and quality of the data used to train them, necessitating effective integration solutions and strong data governance practices. Organisations should seek to implement tools that provide a wealth of clean, accessible and high-quality data that can power quality AI.

Equally, AI systems cannot be effective if an organisation has data silos. Silos impede AI’s ability to digest meaningful data and provide the insights needed to drive business transformation. Breaking them down needs to be a business priority, with investment in effective data management and the application of effective data integration solutions.
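
As a rough sketch of the kind of quality gate that could sit in front of AI training data, the snippet below runs basic completeness and duplication checks with pandas. The file name and thresholds are illustrative assumptions, not prescriptions.

```python
# Minimal sketch: basic data-quality checks to gate AI training data.
# Assumes pandas; the file name and thresholds are illustrative.
import pandas as pd

df = pd.read_csv("customers.csv")   # hypothetical training extract

report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_share_by_column": df.isna().mean().round(3).to_dict(),
}
print(report)

# A simple quality gate: refuse to train on data that fails the basics.
assert report["duplicate_rows"] == 0, "de-duplicate before training"
assert all(share < 0.05 for share in report["missing_share_by_column"].values()), \
    "too many missing values for reliable training"
```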

  • Develop your own unique AI platform

The development of AI applications can be a laborious process, limiting the value businesses gain from them in the immediate term. It can be expedited by platform engineering, which modernises enterprise software delivery to facilitate digital transformation, optimising the developer experience and accelerating product teams’ ability to deliver customer value. Platform engineering offers developers pre-configured tools, pre-built components and automated infrastructure management, freeing them up to tackle their main objective: building innovative AI solutions faster.

While the end goal is AI applications that can help streamline infrastructure, automate tasks, and provide pre-built components for developers, this is only possible if the ability to design and develop them exists in the first place. Gartner’s prediction that platform engineering will come of age in 2024 is therefore particularly promising.

  • Put business objectives at the heart of AI adoption – can AI deliver?

Any significant business change needs to be managed strategically, with a clear indication of the aims and benefits it will bring. While a degree of experimentation is always necessary to drive business growth, it shouldn’t come at the expense of operational efficiency.

Before onboarding AI technologies, look internally at the key challenges your business is facing and ask: “how can AI help to address this?” You may wish to enhance the customer experience, streamline internal processes or use AI systems to optimise internal decision-making. Be sure the application of AI is going to help, not hinder, you on this journey.

Also remember that AI remains in its infancy and cannot be relied upon as a silver bullet for all operational challenges. Aim to build a sufficient base knowledge of AI capabilities today, and ensure these are contextualised within your own business requirements. This ensures that AI investments aren’t made prematurely, incurring unnecessary cost.

  • Don’t be limited by legacy systems

Owing to the complex mix of legacy and/or siloed systems that organisations employ, they may be restricted in their ability to use real-time and AI-driven operations to drive business value. For example, IDC found that only 12% of organisations connect customer data across departments.

Amidst the ‘AI data rush’ there will be a greater need for event-driven integration; however, only the right enterprise architecture pattern will ensure new and legacy systems are able to work in tandem. Without this, organisations will be prevented from offering seamless, real-time digital experiences that link events across departments, locations, on-premises systems and IoT devices, whether in a cloud or multi-cloud environment.

  • Leverage real-time technology

Keeping up with the real-time demands of AI can pose a challenge for the legacy data architectures used by many organisations. Event mesh technology – an approach to distributed networks that enables real-time data sharing and processing – is a proven way of reducing these issues. By applying event-driven architecture (EDA), organisations can unlock the potential of real-time AI, with automated actions and informed decision-making driven by relevant insights.

By applying AI in this way, businesses can offer stronger, more personalised experiences – including the delivery of specialised offers, real-time recommendations and tailored support based on customer requirements. One example is predictive maintenance, in which AI analyses and anticipates future problems or business-critical failures before they affect operations, and immediately dedicates the correct resources to fix the issue. By implementing EDA as a ‘central nervous system’ for your data, not only is real-time AI possible, but adding new AI agents becomes significantly easier.
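
To make the EDA pattern concrete, here is a minimal in-process sketch: producers publish events by topic and a subscriber (a stand-in for an AI agent) reacts as they arrive. An event mesh generalises the same pattern across services, sites and clouds; the topic name and threshold here are illustrative assumptions.

```python
# Minimal in-process sketch of event-driven architecture (EDA): producers
# publish events by topic, subscribers react as events arrive. An event
# mesh applies the same pattern across services, sites and clouds; the
# topic and threshold below are illustrative.
from collections import defaultdict
from typing import Callable

class EventBus:
    def __init__(self) -> None:
        self._subscribers: defaultdict[str, list[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        for handler in self._subscribers[topic]:
            handler(event)

bus = EventBus()

def maintenance_agent(event: dict) -> None:
    # Stand-in for a predictive-maintenance model scoring each reading.
    if event["vibration"] > 0.8:
        print(f"Dispatch engineer to {event['machine']} before it fails")

bus.subscribe("sensor.readings", maintenance_agent)
bus.publish("sensor.readings", {"machine": "pump-7", "vibration": 0.93})
```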

Ultimately, AI adoption needs to be strategic, avoiding chasing trends and focusing instead on how and where the technology can deliver true business value. By following the steps above, organisations can ensure they are leveraging the full transformative benefit of AI and driving business efficiency and growth in a data-driven era.

AI can be a highly effective tool. However, its success depends on organisations applying it strategically, to meet clearly defined and specific business goals.


Securing The Future of Cybersecurity

Source: Finance Derivative

Dominik Samociuk, PhD, Head of Security at Future Processing

When more than 6 million records of ancestry and genetic data were breached from 23andMe’s secure database, companies were forced to confront and evaluate their own cybersecurity practices and data management. With approximately 2.39 million instances of cybercrime experienced across UK businesses last year, the time to act is now.

If even the most secure and unsuspecting businesses aren’t protected, then every business should consider itself a target and operate accordingly. As we roll into 2024, it is unlikely there will be a reduction in cases like these. Instead, expect an uptick in the methods and levels of sophistication employed by hackers to obtain sensitive data – a commodity whose value continues to climb.

In the next two years, the cost of cyber damage is predicted to grow by 15% yearly, reaching $10.5 trillion in 2025. We won’t be saying goodbye to ransomware in 2024, but rather saying hello to an evolved, automated, adaptable, and more intelligent form of it. But what else is expected to take the security industry by storm in the near future?

Offensive vs. Defensive Use of AI in Cybersecurity

Cybersecurity is a continuous cycle for companies: from attack to defence, an organisation’s security experts must constantly defend against malicious activity. In 2024, there will be a rise in the use of generative AI, with an alarming 70% of workers who use ChatGPT not making their employers aware – opening the door to significant security issues, especially for outsourced tasks like coding. And while its uses are groundbreaking, generative AI’s misuses, especially when it comes to cybersecurity, are cause for concern.

Cybersecurity breaches will come from more sophisticated sources this year. As artificial intelligence (AI) continues to surpass development expectations, systems that can analyse and replicate humans are now being employed. With platforms like LOVO AI and Deepgram making their way into mainstream use – often for hoax or ruse purposes – sinister uses of these platforms are being adopted by cybercriminals to trick unsuspecting victims into disclosing sensitive network information about their business or place of work.

Cybercriminals target the weakest part of any security operation – the people – by encouraging them to divulge personal and sensitive information that might be used to breach internal cybersecurity. Further, generative AI platforms like ChatGPT can be used to automate the production of malicious code introduced internally or externally to the network. On the other hand, AI is being used to strengthen cybersecurity in unlikely ways. Emulating a cinematic cyber-future, AI can be used to detect malware and abnormal system or user activity, alert human operators, and then equip staff with the tools and resources needed to respond.
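
As a toy illustration of the AI-assisted anomaly detection described above, the sketch below trains an Isolation Forest – one common unsupervised technique, not necessarily what any particular vendor uses – on simulated “normal” user sessions and flags outliers. It assumes scikit-learn, and the features and values are illustrative.

```python
# Toy sketch: flag abnormal user sessions with an Isolation Forest, a
# common unsupervised anomaly detection technique. Assumes scikit-learn;
# the two features (login hour, MB downloaded) and values are illustrative.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),    # typical login hour, around 10:00
    rng.normal(50, 10, 500),   # typical megabytes downloaded
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

candidates = np.array([
    [11, 55],    # ordinary working session
    [3, 900],    # 3 a.m. bulk download
])
for session, label in zip(candidates, model.predict(candidates)):
    print(session, "ANOMALOUS" if label == -1 else "normal")
```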

Ultimately, like any revolutionary platform, AI produces hazards and opportunities for misuse and exploitation. With alarming cases of abuse on the rise, cybersecurity experts must consider the effects these might have before moving forward with an adaptable strategy for the year.

Data Privacy, Passkeys, and Targeting Small Businesses

Cybercriminals using their expertise to target small businesses is expected to increase in 2024. By nature, small businesses are unlikely to be able to employ the resources needed to combat the consistent cybersecurity threats that larger organisations face on a daily basis. With areas of cybersecurity left unaccounted for, cybercriminals are likely to increasingly exploit vulnerabilities within small business networks.

They may also exploit the embarrassment felt by small business owners in situations like these. If their data is being held for ransom, a small business owner without the legal resources needed to fight (or tidy up) a data breach is more likely to give in to an attacker’s demands to save face, often at a cost of thousands of pounds. Regular custom, loyalty, trust, and reputation make or break a small business, and even the smallest data breach can, in one fell swoop, lay waste to all of these.

Unlikely to have dedicated cybersecurity teams in place, a small business will often employ less secure and inexpensive data management solutions – making it a prime target. Contrary to expectations, 2024 will not see the end of ransomware; in fact, these tools are likely to become more common against larger, well-insured companies amid the gold rush on data harvesting.

Additionally, changing passwords will become a thing of the past. With companies like Apple beta-testing passkeys in consumer devices, and even Google describing them as ‘the beginning of the end of the password’, businesses will no doubt begin to adopt this more secure technology, stored on local devices, for any systems that hold sensitive data. Passwordless forms of identification mitigate a common criminal technique: exploiting personal information for unauthorised access.
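
To show the principle that makes passkeys more secure, here is a conceptual sketch of challenge-response with a device-held key pair: the service stores only a public key, and the device signs a fresh challenge at login, so no reusable secret crosses the network. Real passkeys use the WebAuthn/FIDO2 standards; this toy uses the third-party cryptography package purely to illustrate the idea.

```python
# Conceptual sketch of the key principle behind passkeys: the service
# stores only a public key; the device holds the private key and signs a
# fresh challenge at login, so no reusable secret crosses the network.
# Real passkeys use the WebAuthn/FIDO2 standards; this toy uses the
# third-party cryptography package purely to illustrate the idea.
import os
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrolment: the key pair is created on the user's device.
device_key = Ed25519PrivateKey.generate()
stored_public_key = device_key.public_key()   # only this leaves the device

# Login: the service issues a one-time challenge; the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

try:
    stored_public_key.verify(signature, challenge)
    print("Login accepted: signature matches the stored public key")
except InvalidSignature:
    print("Login rejected")
```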

Generative AI’s Impact on Information Warfare and Elections

In 2024, more than sixty countries will hold elections, and as politics barrels towards all-out war in many of them, it is more important than ever for cybersecurity to support a tighter grip on fact-checked information and official government communications. It is likely that we will see a steep rise in generative AI-supported propaganda on social media.

In 2016, amidst the heat of a combative and unfriendly US presidential election, Republican candidate Donald Trump popularised the term ‘fake news’, which eight years later continues to plague realms of the internet in relation to ongoing global events. An estimated 25% of election-related tweets sampled during that period contained links to intentionally misleading or false news stories designed to boost a viewpoint’s popularity. Online trust comes hand in hand with security; without one, the other cannot exist.

While the use of AI in 2016 was extremely limited by today’s standards, of striking concern now is the access members of the public have to platforms where, at will, they can legitimise a controversial viewpoint or ‘fake news’ by generating video or audio clips of political figures, or quotes and news articles, with a simple request. The ability to generate convincing text and media can significantly influence public opinion and sway electoral processes, destabilising a country’s internal and external cybersecurity.

Of greatest concern is the unsuspecting public’s inability to identify news generated by AI. Cornell University found that people judged false news articles generated by AI to be credible over two-thirds of the time, and further studies found that humans were unable to identify articles written by ChatGPT at a level beyond random chance. As generative AI’s sophistication increases, it will become ever more difficult to identify what information is genuine and to safeguard online security. This is critical, as generative AI can now be used as ammunition in information warfare through the spread of hateful, controversial, and false propaganda during election periods.

In conclusion, the near future will, like 2023, see a great shift in focus toward internal security. A network is at its most vulnerable when the people who run it aren’t aligned in their strategies and values. Advanced technologies like AI and ransomware will continue to be a rising issue for the industry, destabilising networks not only externally but internally too, as employees remain unaware of the effects that using such platforms might have.
