

Why insurers must be on the lookout for ever-opportunistic cyber attackers

Source: Finance Derivative

By Paul Prudhomme, Head of Threat Intelligence Advisory at IntSights, a Rapid7 company

The insurance industry has long been a prime target for cyber attacks. Criminals go where the money is, and the sector represents one of the most direct ways to access the personal and financial data that can be turned into illicit profit.

More recently, insurers have faced even greater risk exposure due to their provision of cyber insurance coverage, particularly when it comes to ransomware. The sector has also seen increased attention from state-sponsored actors seeking personal data to fuel other campaigns.

Why is the insurance sector such a popular target for cyber crime?

Threat actors regard the insurance industry as a valuable source of personally identifiable information (PII) which can be used for a variety of crimes, including identity theft, other types of fraud, and further cyber attacks.

Alongside insurance documentation itself, firms will also have digital copies of items such as passports, driver’s licenses and bank statements that have been used to verify the policy holder’s identity and address. Birth dates are also particularly valuable to criminals, alongside National Insurance numbers, Social Security numbers, and their various international equivalents.

In one prominent example, U.S. insurer Ryan Specialty Group had its employee email accounts breached in April 2021. Customer names, Social Security numbers, driver’s license and passport details, and financial account details were believed to be exposed as a result.

The depth of information held by insurers on behalf of policyholders is also useful to state-sponsored threat actors, providing a large amount of data for human intelligence (HUMINT) operations or signals intelligence (SIGINT) operations.

Insurers that provide cyber insurance also face an elevated threat level. Attackers may seek to compromise their network to unearth policy details and security standards as a way of creating more effective targeted attacks.

The rising threat of ransomware

In addition to data theft, insurers are also targets for ransomware attacks. Ransomware has swiftly risen to become one of the primary cyber threats for businesses across all industries, as an infection can rapidly cripple an organisation by encrypting key files and systems. Criminals are also increasingly coupling ransom demands with data theft, often threatening to leak sensitive information unless additional payment demands are met.

However, insurers that provide cyber policies may again face increased risk from organised cyber criminal gangs and state-backed actors. In one prominent example, the Asian operations of global cyber insurer AXA were struck by Avaddon ransomware last year, very shortly after the company announced that it would stop reimbursing new French customers who chose to pay ransom demands.

The group responsible may have been seeking to make an example of AXA, as its previous policy of covering ransom payments made it more likely that victims would pay up.

Why most stolen data is destined for the dark web

Stolen data is a commodity item in the shadow economy maintained by cyber criminals. Datasets are readily bought and sold on hidden forums and marketplaces on the dark web, with individuals and groups often specialising in selling data rather than using it themselves.

In one example discovered by IntSights security researchers, a Chinese-speaking criminal going by “Rebecca” was selling access to records from Chinese auto insurance companies for $3 each. These records included PII such as names, addresses, and driver’s license numbers.

Threat actors will commonly purchase PII sets from different sources to help facilitate further data theft and fraud. The insurance sector is a favourite target here as automated quote tools can potentially be exploited into revealing more information about customers. Farmers Insurance Group, for example, revealed that in early 2021, attackers attempted to use previously stolen customer names, dates of birth, and street addresses to trick its automated car insurance tool into providing driver’s license numbers.

Criminal groups now often include the threat of data disclosure as part of ransomware attacks. Defiant organisations that refuse to pay up will be punished by having their data sold on the dark web, or sometimes dumped on publicly available open web platforms. The threat aims to pile additional pressure on the victim by creating a high-profile breach that will damage customer trust and attract the attention of compliance regulators.

How can insurance firms protect themselves and their customers?

All firms operating in the insurance sector should be aware that they represent a high-priority target for threat actors ranging from opportunistic criminals to highly organised gangs and even state-sponsored groups. Securing the customer data in their care should be a top priority for all insurance firms.

Insurers need to consider the context of their data and how best to protect it. B2C security measures will be significantly different from B2B equivalents, for example, and different subsectors such as auto and health insurance will also have their own security threats and priorities.

Threat intelligence is the most important asset for attempting to understand and mitigate these risks. Having access to a range of data from open and closed web sources will help insurers to build a picture of threats arrayed against them and prioritise their security strategies accordingly.

This includes insight into general trends, such as new attack tactics, malware variants, and software vulnerabilities, and can also reveal direct threats to the organisation. For example, threat intelligence might uncover discussions in a dark web forum about targeting a specific insurer because of their ransomware pay-out policy, or due to an exploit in their automated customer service system.

Effective threat intelligence can also alert insurers that they have been breached, by detecting criminals arranging the sale of stolen data. While the firm will still suffer reputational and financial damage, this early warning gives it a chance to get ahead of the crisis.
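What such monitoring looks like in practice varies by vendor, but at its core it means matching posts collected from open and closed sources against an organisation's watchlist. The sketch below is a minimal, hypothetical illustration in Python: the feed shape, domain and brand terms are all invented, and a real platform would add source collection, deduplication and analyst triage.

```python
import re

# Hypothetical watchlist: terms suggesting the organisation or its
# customers are being discussed or offered for sale on criminal forums.
WATCHLIST = [
    r"example-insurer\.com",              # corporate domain (placeholder)
    r"example insurer",                   # brand name (placeholder)
    r"policyholder (database|records)",
    r"driver'?s licen[cs]e numbers",
]
PATTERNS = [re.compile(p, re.IGNORECASE) for p in WATCHLIST]

def triage(posts: list[dict]) -> list[dict]:
    """Return the posts that mention any watchlist term.

    Each post is assumed to be a dict with 'source', 'url' and 'text'
    keys -- the feed format is an assumption of this sketch.
    """
    alerts = []
    for post in posts:
        hits = [p.pattern for p in PATTERNS if p.search(post["text"])]
        if hits:
            alerts.append({**post, "matched": hits})
    return alerts

# Example usage with a stubbed feed entry:
feed = [{"source": "dark-web-forum", "url": "hxxp://...",
         "text": "Selling example insurer policyholder records, 3 USD each"}]
for alert in triage(feed):
    print(f"[ALERT] {alert['source']}: matched {alert['matched']}")
```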

The cyber threat landscape has become increasingly hostile for the insurance sector in recent years. In order to have the best chance of protecting both themselves and their customers, insurance providers should look to implement threat intelligence to understand the context of their data and mitigate threats accordingly.


Driving Business Transformation Through AI Adoption – A Roadmap for 2024

Author: Edward Funnekotter, Chief Architect and AI Officer at Solace

From the development of new products and services to the establishment of competitive advantages, artificial intelligence (AI) can fundamentally reshape business operations across industries. However, every organisation is unique, so navigating the complexities of AI while applying the technology efficiently and effectively can be a challenge.

To unlock the transformational potential of AI in 2024 and integrate it into business operations in a seamless and productive way, organisations should seek to follow these five essential steps:

  • Prioritise Data Quality and Quantity

The usefulness of AI models is directly correlated with the quantity and quality of the data used to train them, which makes effective integration solutions and strong data governance practices essential. Organisations should implement tools that provide a wealth of clean, accessible, high-quality data to power their AI.

Equally, AI systems cannot be effective if an organisation has data silos. Silos impede AI's ability to digest meaningful data and provide the insights needed to drive business transformation. Breaking down data silos needs to be a business priority, with investment in effective data management and effective data integration solutions. A simple automated quality gate of the kind sketched below is one place to start.
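As a concrete illustration, the following minimal sketch uses pandas to run basic quality gates (completeness, uniqueness) over a customer table before it feeds a model. The column names and checks are placeholders; real gates would come from your data governance policy.

```python
import pandas as pd

def quality_report(df: pd.DataFrame, key: str) -> dict:
    """Basic data-quality gates to run before data reaches an AI model."""
    return {
        "rows": len(df),
        "null_fraction": df.isna().mean().to_dict(),        # completeness per column
        "duplicate_keys": int(df[key].duplicated().sum()),  # uniqueness of the key
    }

# Example with a toy customer table (note the duplicate ID and missing date):
df = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "last_updated": ["2024-01-03", "2024-01-05", None, "2024-01-07"],
})
report = quality_report(df, key="customer_id")
assert report["duplicate_keys"] == 1
print(report)
```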

  • Develop your own unique AI platform

The development of AI applications can be a laborious process, which limits the value businesses gain from them in the near term. Platform engineering can expedite this: it modernises enterprise software delivery to facilitate digital transformation, optimising the developer experience and accelerating product teams' ability to deliver customer value. Platform engineering gives developers pre-configured tools, pre-built components and automated infrastructure management, freeing them to tackle their main objective: building innovative AI solutions faster.

While the end goal is AI applications that streamline infrastructure, automate tasks and provide pre-built components for developers, that is only achievable if the ability to design and develop them exists in the first place. Gartner's prediction that platform engineering will come of age in 2024 is therefore particularly promising.

  • Put business objectives at the heart of AI adoption – can AI deliver?

Any significant business change needs to be managed strategically, with a clear indication of the aims and benefits it will bring. While a degree of experimentation is always necessary to drive business growth, it shouldn't come at the expense of operational efficiency.

Before onboarding AI technologies, look internally at the key challenges your business is facing and ask: "How can AI help to address this?" You may wish to enhance the customer experience, streamline internal processes or use AI systems to optimise internal decision-making. Be sure the application of AI is going to help, not hinder, you on this journey.

Also remember that AI remains in its infancy and cannot be relied upon as a silver bullet for all operational challenges. Aim to build a sufficient base knowledge of AI capabilities today, and ensure it is contextualised within your own business requirements. This prevents AI investments from being made prematurely and incurring unnecessary cost.

  1. Don’t be limited by legacy systems

The complex mix of legacy and siloed systems that many organisations run can restrict their ability to use real-time, AI-driven operations to drive business value. For example, IDC found that only 12% of organisations connect customer data across departments.

Amidst the 'AI data rush' there will be a greater need for event-driven integration; however, only the right enterprise architecture pattern will ensure new and legacy systems can work in tandem. Without it, organisations will be unable to offer seamless, real-time digital experiences that link events across departments, locations, on-premises systems, IoT devices and cloud or multi-cloud environments.

  • Leverage real-time technology

Keeping up with the real-time demands of AI can pose a challenge for the legacy data architectures many organisations use. Event mesh technology – an approach to distributed networking that enables real-time data sharing and processing – is a proven way of reducing these issues. By applying event-driven architecture (EDA), organisations can unlock the potential of real-time AI, with automated actions and informed decision-making driven by relevant insights.

By applying AI in this way, businesses can offer stronger, more personalised experiences – including the delivery of specialised offers, real-time recommendations and tailored support based on customer requirements. An example is predictive maintenance, in which AI analyses operational data to anticipate problems or business-critical failures before they affect operations and dedicates the correct resources to fix the issue immediately. By implementing EDA as a 'central nervous system' for your data, not only does real-time AI become possible, but adding new AI agents becomes significantly easier.
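To make the pattern concrete, here is a minimal, self-contained sketch of the publish/subscribe model that underpins an event mesh. A real deployment would use a broker (for example Solace PubSub+ or Kafka) spanning on-premises and cloud environments rather than an in-process dictionary, and the topic names, threshold and toy "model" are invented for illustration.

```python
from collections import defaultdict
from typing import Callable

# In-process stand-in for an event broker; a real event mesh would link
# brokers across on-premises, cloud and edge environments.
_subscribers: dict[str, list[Callable[[dict], None]]] = defaultdict(list)

def subscribe(topic: str, handler: Callable[[dict], None]) -> None:
    _subscribers[topic].append(handler)

def publish(topic: str, event: dict) -> None:
    for handler in _subscribers[topic]:
        handler(event)

def maintenance_model(event: dict) -> None:
    """Toy predictive-maintenance consumer: flags readings over a threshold."""
    if event["vibration_mm_s"] > 7.1:  # threshold is a placeholder
        publish("maintenance.ticket",
                {"machine": event["machine"], "action": "inspect bearing"})

subscribe("sensor.vibration", maintenance_model)
subscribe("maintenance.ticket", lambda e: print(f"Ticket raised: {e}"))

# A sensor reading published to the mesh reaches the AI consumer in real
# time, which in turn publishes a ticket event for downstream systems.
publish("sensor.vibration", {"machine": "press-04", "vibration_mm_s": 9.3})
```

Because producers and consumers share only topics, a new AI agent can be added by subscribing it to existing events, without changing the systems that publish them.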

Ultimately, AI adoption needs to be strategic, avoiding chasing trends and focusing instead on how and where the technology can deliver true business value. Following the steps above, organisations can ensure they are leveraging the full transformative benefit of AI and driving business efficiency and growth in a data driven era.

AI can be a highly effective tool. However, its success depends on organisations applying it strategically to meet clearly defined and specific business goals.


Securing The Future of Cybersecurity

Source: Finance Derivative

Dominik Samociuk, PhD, Head of Security at Future Processing

When more than 6 million records of ancestry and genetic data were breached from 23andMe's database, companies were forced to confront and evaluate their own cybersecurity practices and data management. With approximately 2.39 million instances of cybercrime experienced across UK businesses last year, the time to act is now.

If even the most secure and unsuspecting businesses aren't protected, then every business should consider itself a target and operate accordingly. As we roll into 2024, it is unlikely there will be a reduction in cases like these. Instead, expect an uptick in the sophistication of the methods hackers employ to obtain sensitive data – a commodity whose value continues to climb.

In the next two years, the cost of cyber damage is predicted to grow by 15% yearly, reaching $10.5 trillion in 2025. We won't be saying goodbye to ransomware in 2024, but rather saying hello to an evolved, automated, adaptable and more intelligent form of it. But what else is expected to take the security industry by storm in the near future?

Offensive vs. Defensive Use of AI in Cybersecurity

Cybersecurity is a continuous cycle for companies: from attack to defence, an organisation's security experts must constantly guard against malicious attacks. In 2024, the use of generative AI will rise, with an alarming 70% of workers who use ChatGPT not making their employers aware – opening the door to significant security issues, especially for outsourced tasks like coding. And while its uses are groundbreaking, generative AI's misuses, especially in cybersecurity, are cause for concern.

Cybersecurity breaches will come from more sophisticated sources this year. As artificial intelligence (AI) continues to surpass development expectations, systems that can analyse and replicate humans are now being deployed. With platforms like LOVO AI and Deepgram making their way into mainstream use – often for hoax or ruse purposes – cybercriminals are putting them to sinister use, tricking unsuspecting victims into disclosing sensitive network information about their business or place of work.

Cybercriminals target the weakest part of any security operation – the people – by encouraging them to divulge personal and sensitive information that can be used to breach internal systems. Further, generative AI platforms like ChatGPT can be used to automate the production of malicious code introduced internally or externally to the network. On the other hand, AI is being used to strengthen cybersecurity in unlikely ways. Emulating a cinematic cyber-future, AI can detect malware and abnormal system or user activity, alert human operators, and then equip staff with the tools and resources needed to respond.
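One common way such AI-assisted detection is implemented is with an unsupervised anomaly detector such as an isolation forest. The sketch below, using scikit-learn, trains on synthetic "normal" session features and flags an out-of-hours bulk download; the features, values and contamination rate are all illustrative.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Illustrative features per user session: [login hour, MB downloaded].
# Normal activity clusters around office hours and modest transfers.
normal_sessions = np.column_stack([
    rng.normal(10, 2, 500),   # logins around 10:00
    rng.normal(20, 5, 500),   # roughly 20 MB downloaded
])
model = IsolationForest(contamination=0.01, random_state=0).fit(normal_sessions)

# A 3 a.m. session pulling 900 MB should score as anomalous (-1) and be
# surfaced to a human operator; a typical session scores as normal (1).
new_sessions = np.array([[9.5, 18.0], [3.0, 900.0]])
print(model.predict(new_sessions))  # expected output: [ 1 -1]
```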

Like any revolutionary technology, AI produces hazards and opportunities for misuse and exploitation. With alarming cases of abuse on the rise, cybersecurity experts must consider the effects these might have before moving forward with an adaptable strategy for the year.

Data Privacy, Passkeys, and Targeting Small Businesses

Cybercriminals are expected to increasingly turn their expertise on small businesses in 2024. By nature, small businesses are unlikely to operate at a level that allows them to employ the resources needed to combat the consistent cybersecurity threats larger organisations face on a daily basis. With areas of cybersecurity unaccounted for, cybercriminals are therefore likely to increasingly exploit vulnerabilities within small business networks.

They may also exploit the embarrassment felt by small business owners in these situations. If their data is being held for ransom, a small business owner without the legal resources needed to fight (or tidy up) a data breach is more likely to give in to an attacker's demands to save face, often at a cost of thousands of pounds. Regular custom, loyalty, trust and reputation make or break a small business, and even the smallest data breach can lay waste to all of these in one fell swoop.

Unlikely to have dedicated cybersecurity teams in place, small businesses will often employ less secure, inexpensive data management solutions – making them prime targets. Contrary to expectations, 2024 will not see the end of ransomware; in fact, such tools are likely to become more common against larger, well-insured companies too, owing to the gold rush on data harvesting.

Additionally, changing passwords will become a thing of the past. With companies like Apple beta-testing passkeys in consumer devices and even Google describing them as ‘the beginning of the end of the password’, businesses will no doubt begin to adopt this more secure technology, stored on local devices, for any systems that hold sensitive data. Using passwordless forms of identification mitigates issues associated with cyber criminals’ common method of exploiting personal information for unauthorised access.
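Passkeys are built on the WebAuthn/FIDO2 standard, but the underlying idea is plain public-key cryptography: the server stores only a public key, while the private key never leaves the user's device. The sketch below illustrates that principle with the Python `cryptography` library; it is a simplified illustration of the concept, not an implementation of the WebAuthn protocol itself.

```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

# Registration: the device generates a key pair and the server stores
# only the public key -- there is no shared secret to phish or leak.
device_private_key = ec.generate_private_key(ec.SECP256R1())
server_stored_public_key = device_private_key.public_key()

# Authentication: the server issues a random challenge; the device signs
# it locally (after a biometric or PIN check, omitted here); the server
# verifies the signature against the stored public key.
challenge = os.urandom(32)
signature = device_private_key.sign(challenge, ec.ECDSA(hashes.SHA256()))

# verify() raises InvalidSignature if the response was forged.
server_stored_public_key.verify(signature, challenge, ec.ECDSA(hashes.SHA256()))
print("Challenge signed correctly: user authenticated without a password")
```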

Generative AI’s Impact on Information Warfare and Elections

In 2024, more than sixty countries will hold elections, and with politics barrelling towards all-out war in many of them, it is more important than ever to safeguard cybersecurity, ensuring a tighter grip on fact-checked information and official government communications. We are likely to see a steep rise in generative-AI-supported propaganda on social media.

In 2016, amidst a combative and unfriendly US presidential election, Republican candidate Donald Trump popularised the term 'fake news', which eight years later continues to plague parts of the internet in relation to ongoing global events. An estimated 25% of election-related tweets sampled during that period contained links to intentionally misleading or false news stories designed to boost a viewpoint's popularity. Online trust goes hand-in-hand with security; without one, the other cannot exist.

While the use of AI in 2016 was extremely limited by today's standards, what is now of striking concern is the access members of the public have to platforms where, at will, they can legitimise a controversial viewpoint or 'fake news' by generating video or audio clips of political figures, or quotes and news articles, with a simple request. The ability to generate convincing text and media can significantly influence public opinion and sway electoral processes, destabilising a country's internal and external security.

Of greatest concern is the unsuspecting public's inability to identify news generated by AI. Cornell University found that people judged false news articles generated by AI to be credible over two-thirds of the time, and further studies found that humans could not identify articles written by ChatGPT at a rate better than random chance. As generative AI grows more sophisticated, it will become ever more difficult to identify which information is genuine and to safeguard online security. This is critical because generative AI can now be used as ammunition in information warfare, spreading hateful, controversial and false propaganda during election periods.

In conclusion, the near future will, like 2023, see a great shift in focus toward internal security. A network is at its most vulnerable when the people who run it aren't aligned in their strategies and values. Advanced technologies like AI and ransomware will continue to be a rising issue for the industry, destabilising networks not only externally but internally too, as employees remain unaware of the effects that using such platforms might have.


Developing a personalised roadmap for implementing best practices in AI governance

Colin Redbond, SVP Product Strategy, SS&C Blue Prism

Whether it's customer chatbots or digital workers improving workflow, the daily use of Artificial Intelligence (AI) and Intelligent Automation (IA) is under scrutiny.

With automation rising, nations are grappling with the ethical, legal, and societal implications and crafting AI governance laws, requiring business leaders to also prepare for these far-reaching changes.

The EU's proposed AI Act – considered the world's first comprehensive law safeguarding the rights of users – is expected to regulate the ever-evolving needs of AI application developers in the EU and beyond.

Transparency and authenticity significantly influence brand perception, particularly among Gen Z – 32% of the global population. With high expectations, they only support brands aligned with their values.

Banks, auditors and insurers, and their supply chains – already adept at meeting legislation like Europe's GDPR and the U.S. Sarbanes-Oxley Act – will need to take a similar approach to AI. Governance will influence everything from governments, robot manufacturing and back-office apps to healthcare teams using AI for data extraction.

The cost of non-compliance could be substantial, with the EU suggesting fines of €30 million or six percent of global annual turnover, whichever is higher, so identifying AI integration points, workflows and risks is vital.

SS&C Blue Prism, through industry collaboration, offers guidance on governance roadmaps, ensuring businesses are well-prepared to meet evolving requirements while leveraging AI effectively.

Need for immediate action
The legislation also scrutinises automations, ensuring they remain compliant as organisations innovate across automated tasks, BPM data analysis and business-driven automations. IA, with its auditable digital trail, becomes an ideal vehicle: it provides transparent insight into actions and decisions and safeguards record-keeping and documentation – crucial across the AI lifecycle.
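As a simple illustration of what such a digital trail can look like, the sketch below wraps an automated decision in a decorator that appends inputs, output and timing to an append-only JSON-lines log. The file path, field names and triage rule are placeholders, not any particular product's format.

```python
import functools
import json
import time

AUDIT_LOG = "decisions.jsonl"  # placeholder path for the append-only trail

def audited(step_name: str):
    """Record the inputs, output and timing of an automated step."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            started = time.time()
            result = fn(*args, **kwargs)
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps({
                    "step": step_name,
                    "inputs": {"args": args, "kwargs": kwargs},
                    "output": result,
                    "started": started,
                    "duration_s": round(time.time() - started, 4),
                }, default=str) + "\n")
            return result
        return wrapper
    return decorator

@audited("claim_triage")
def triage_claim(claim_id: str, amount: float) -> str:
    # Placeholder decision logic for the example.
    return "auto-approve" if amount < 1000 else "human-review"

print(triage_claim("C-1042", amount=250.0))  # the decision is also logged
```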

Establishing and maintaining AI governance also fosters ethical and transparent practices from executives to employees, ensuring compliance, security, and alignment with organisational values, including:

  • Top-down: Executive sponsorship ensures governance, data quality, security, and management, with accountability. An audit committee oversees data control, supported by a chief data officer.
  • Bottom-up: Individual teams take responsibility for the data security, modelling, and tasks they manage, ensuring standardisation and scalability.
  • Modelling: Effective governance continuously monitors and updates performance to align with organisational goals, prioritising security in granting access.
  • Transparency: Tracking AI performance ensures transparency and aids in risk management, involving stakeholders from across the business.

Frameworks for AI governance
Though standards are evolving, disregarding governance risks data leakage, fraud, and privacy law breaches, so compliance and standardisation must be prioritised.

Governments, companies, and academia are collaborating to establish responsible guidelines and frameworks. There are several real-world examples of AI governance that – while they differ in approach and scope – address the implications of artificial intelligence. A few notable ones follow:

The EU’s GDPR – not exclusively focused on AI – includes data protection and privacy provisions related to AI systems. Additionally, the Partnership on AI and Montreal Declaration for Responsible AI – developed at the International Joint Conference on Artificial Intelligence – focus on research, best practices, and open dialogue in AI development.

Tech firms like Google, Microsoft, IBM, and Amazon have created AI ethics guidelines, emphasising social good, harm avoidance, and fairness, while some countries have developed national AI strategies including governance.

Canada's "Pan-Canadian AI Strategy" prioritises responsible AI development for societal benefit, focusing on ethics, transparency, and accountability.

Establishing governance in your organisation involves processes, policies, and practices for AI's responsible development, deployment, and use.

Reach governance greatness in 14 steps
Governments and companies using AI must incorporate risk and bias checks into mandatory system audits. Alongside data security and forecasting, organisations can take the following strategic approaches to establishing AI governance:

  • Development guidelines: Establish a regulatory framework and best practices for AI model development, including data sources, training, and evaluation techniques. Craft guidelines based on predictions, risks, and use cases.
  • Data management: Ensure that the data used to train and fine-tune AI models is accurate and compliant with privacy and regulatory requirements.
  • Bias mitigation: Incorporate ways to identify and address bias in AI models to ensure fair and equitable outcomes across different demographic groups (see the sketch after this list).
  • Transparency: Require AI models to provide explanations for their decisions, especially in highly regulated sectors such as healthcare, finance and legal systems.
  • Model validation and testing: Conduct thorough validation and testing of AI models to ensure they perform as intended and meet quality benchmarks.
  • Monitoring: Continuously monitor AI model performance metrics, updating to meet changing needs and safety regulations. Due to generative AI’s novelty, maintain human oversight to validate quality and performance.
  • Version control: Keep track of the different versions of your AI models, along with their associated training data, configurations, and performance metrics so you can reproduce or scale them as needed.
  • Risk management: Implement security practices to protect AI models from cybersecurity attacks, data breaches and other security risks.
  • Documentation: Maintain documentation of the entire AI lifecycle, including data sources, testing and training, hyperparameters, and evaluation metrics.
  • Training and Awareness: Provide training to employees about AI ethics, responsible AI practices, and the potential societal impacts of AI technologies. Raise awareness about the importance of AI governance across the organisation.
  • Governance board: Establish a governance board or team overseeing AI model development, deployment and compliance with established guidelines that fit your goals. Crucially, involve all levels of the workforce — from leadership to employees working with AI — to ensure comprehensive and inclusive input.
  • Regular auditing: Conduct audits to assess AI model performance, algorithm regulation compliance and ethical adherence.
  • User feedback: Provide mechanisms for users and stakeholders to provide feedback on AI model behaviour and establish accountability measures in case of model errors or negative impacts.
  • Continuous improvement: Incorporate lessons learned from deploying AI models into the governance process to continuously improve the development and deployment practices.
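As an example of what the bias-mitigation step above could look like in code, the following sketch computes a demographic parity gap – the difference in favourable-outcome rates between two groups – over a batch of model decisions. The data, group labels and tolerance threshold are invented for illustration; real audits would use the fairness metrics mandated by your own guidelines.

```python
import numpy as np

def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Absolute difference in favourable-outcome rates between two groups.

    predictions: 1 = favourable outcome (e.g. approval), 0 = unfavourable.
    groups: group label per individual (here 'A' or 'B').
    """
    rate_a = predictions[groups == "A"].mean()
    rate_b = predictions[groups == "B"].mean()
    return abs(float(rate_a - rate_b))

# Toy audit: 100 decisions per group, with group B approved less often.
preds = np.concatenate([np.repeat(1, 70), np.repeat(0, 30),   # group A: 70% approved
                        np.repeat(1, 55), np.repeat(0, 45)])  # group B: 55% approved
groups = np.array(["A"] * 100 + ["B"] * 100)

gap = demographic_parity_gap(preds, groups)
print(f"Parity gap: {gap:.2f}")  # 0.15
assert gap < 0.20, "gap exceeds the (illustrative) tolerance"
```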

AI governance demands continuous commitment from leadership, alignment with organisational values, and adaptability to technological and societal changes. A well-planned governance strategy is essential for organisations using automation, ensuring compliance.

Establishing safety regulations and governance policies is vital to maintaining the security, accuracy, and compliance of your data. These steps can help ensure your organisation develops and deploys AI responsibly and ethically.
