

Why baselining security is key to improving cyber hygiene

Phil Robinson, Principal Consultant at Prism Infosec

Poor cyber hygiene remains a major cause of security breaches. The National Cyber Security Centre (NCSC) Annual Review 2023 revealed that the highest proportion of incidents it dealt with this year resulted from the exploitation of unpatched common vulnerabilities and exposures (CVEs) affecting public-facing applications, incidents that could have been prevented through better cyber hygiene.

But what is cyber hygiene? There’s no strict definition, although the general consensus is that it is a set of simple, routine measures adopted to secure sensitive data and minimise risk from cyber threats. As most cyber threats are relatively unsophisticated, adopting these measures can prove highly effective. In the case of the CVEs mentioned above, effective patch management (an integral part of good cyber hygiene) would have seen critical updates prioritised and applied, potentially reducing the risk of compromise.
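As an illustration of what “prioritised” means in practice, a patch queue can be ranked by CVE severity and exposure. The sketch below uses entirely hypothetical CVE entries and a simple weighting that favours public-facing systems; it is not a prescribed method, just one minimal way to express the idea:

```python
# Minimal sketch of patch prioritisation: rank known vulnerabilities by
# CVSS severity, weighting public-facing services more heavily because
# they are reachable by any attacker on the internet.
# All CVE identifiers and scores here are hypothetical.

cves = [
    {"id": "CVE-2023-0001", "cvss": 9.8, "public_facing": True},
    {"id": "CVE-2023-0002", "cvss": 5.4, "public_facing": False},
    {"id": "CVE-2023-0003", "cvss": 7.5, "public_facing": True},
]

def priority(cve):
    # Add a fixed bonus for internet exposure on top of the CVSS score.
    return cve["cvss"] + (2.0 if cve["public_facing"] else 0.0)

# Highest-priority patches first.
patch_order = sorted(cves, key=priority, reverse=True)
for cve in patch_order:
    print(cve["id"])
```

A real programme would also fold in exploitability data (such as known-exploited-vulnerability lists) and asset criticality, but even this crude ranking pushes the public-facing critical flaw to the top of the queue.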

The most common measures adopted, according to the government’s Cyber Security Breaches Survey 2023, are keeping malware protection updated (i.e. anti-virus), backing up to the cloud, password management, restricting administrative access rights, and using network firewalls, with two thirds of businesses having these in place. Staff training should also be included here to mitigate the insider threat.

Is cyber hygiene getting worse?

However, the report notes that there has been a consistent decline in some areas of cyber hygiene across the last three waves of the survey. The use of password policies fell from 79% in 2021 to 70% in 2023, deployment of network firewalls from 78% to 66% (although this in practice could be due to an increased prevalence of cloud computing and deployment of Zero Trust Network Architecture), restricting administrative rights from 75% to 67%, and policies to apply software security updates within 14 days fell from an already low 43% to 31% (this was even more marked among the retail and wholesale sector where the rate fell from 41% to 29%). In addition, only 18% of businesses had instructed staff in the form of security awareness training over the course of the year.

The shift has occurred in the micro and SME sectors, although among medium businesses the number placing security controls on their devices dropped sharply (from 91% to 79%), as did agreed processes for handling phishing emails (from 86% to 78%). Add to this the economic pressures that have seen these businesses cut back resources, and it is clear the downward spiral may well be set to continue, leaving smaller businesses particularly vulnerable to attack. So, what can they do to improve security practices and reduce the likelihood of compromise?

One of the easiest ways to improve cyber hygiene is to implement an approach based on compliance with an existing baseline cyber security standard. A number of standards and guidance documents can be used, such as Cyber Essentials (CE and CE+), ISO 27001 (and, more widely, the ISO 27000 series) and the NIST Cybersecurity Framework (CSF).

Awareness of these standards is still relatively low, with only 14% saying they had heard of CE, 9% adhering to ISO 27001 and 3% working with the NIST standard, but uptake is increasing. The NCSC report found 30% of micro and SME businesses became compliant with CE for the first time this year, with 4% of micro organisations signing up to CE and 17% to CE+.

The Cyber Security Longitudinal Survey Wave 2, which only covers medium, large and very large companies, reports a higher uptake, with 25% adhering to CE, 11% to CE+ and 17% to ISO 27001. It did find that organisations were more likely to adhere to one of the standards if they had experienced a cyber incident in the last twelve months and this is worrying as it suggests even those companies with access to more resources are not acting until after they’ve been breached.

Why standards are the perfect way to improve cyber hygiene

The tide is turning, however: 35% of businesses were motivated to become CE compliant in order to improve security generally, compared with 22% pursuing compliance to bid on government contracts and 15% for commercial contracts. Several initiatives have also sought to spread the word, and in January 2023 the NCSC launched its Funded CE Programme, offering financial assistance to those seeking accreditation.

From a cyber hygiene perspective, CE provides a comprehensive basis with five technical controls covering boundary firewalls and internet gateways, secure configurations, user access controls, malware protection and patch management. Today, however, only a fifth of businesses comply with all five, according to the Breaches Survey. Of those that do fully comply, 66% had experienced an incident according to the Longitudinal Survey, meaning they only went ‘all in’ after the event, at which point they realised the value of the controls in enabling them to identify and manage incidents.

In contrast to CE, which is driven by the UK Government, ISO 27001 is an international standard and demonstrates an organisational commitment to managing information security. Last year the standard was consolidated from 14 areas down to four: Organisational, People, Physical and Technological. The list of controls was cut from 114 to 93, with 11 new controls added, 57 merged and some removed, and five new attributes introduced to align with digital security. These changes make it much more relevant to SMEs.

ISO 27001 can take some time to achieve but is valid for three years, while CE and CE+ are renewed annually. CE is a self-assessment, while CE+, an extension of CE, requires third-party involvement, with an assessor carrying out a technical audit and vulnerability scans.

What is clear is that poor cyber hygiene can leave a business open to attack, but that putting in place a minimum level of security can significantly reduce the chance of compromise. These baseline standards all provide a route for organisations short on time and resources to improve their cyber hygiene. In fact, the NCSC states that 80% fewer cyber insurance claims are made with CE in place, revealing just how effective these small changes can be in mitigating attacks. So rather than viewing compliance as an outlay, organisations should view these standards as a vital investment in protecting their processes and assets.



Driving business success in today’s data-driven world through data governance

Source: Finance Derivative

Andrew Abraham, Global Managing Director, Data Quality, Experian

It’s a well-known fact that we are living through a period of digital transformation, where new technology is revolutionising how we live, learn, and work. However, what this has also led to is a significant increase in data. This data holds immense value, yet many businesses across all sectors struggle to manage it effectively. They often face challenges such as fragmented data silos or lack the expertise and resources to leverage their datasets to the fullest.

As a result, data governance has become an essential topic for executives and industry leaders; in a data-driven world, its importance cannot be overstated. Combine that with governments and regulatory bodies rightly stepping up oversight of the digital world to protect citizens’ private and personal data, and businesses are also having to comply with several statutes more accurately and more frequently.

We recently conducted some research to gauge businesses’ attitudes toward data governance in today’s economy. The findings are not surprising: 83% of those surveyed acknowledged that data governance should no longer be an afterthought and could give them a strategic advantage. This is especially true for gaining a competitive edge, improving service delivery, and ensuring robust compliance and security measures.

However, the research also showed that businesses face inherent obstacles, including difficulties in integration and scalability and poor data quality, when it comes to managing data effectively and responsibly throughout its lifecycle.

So, what are the three fundamental steps to ensure effective data governance?

Regularly reviewing Data Governance approaches and policies

Understanding your whole data estate, having clarity about who owns the data, and implementing rules to govern its use means being able to assess whether you can operate efficiently and identify where to drive operational improvements. To do that effectively, you need the right data governance framework. Implementing a robust data governance framework will allow businesses to ensure their data is fit for purpose, improves accuracy, and mitigates the detrimental impact of data silos.

The research also found that data governance approaches are typically reviewed annually (46%), with another 47% reviewing them more frequently. Whilst the specific timeframe differs for each business, policies should be reviewed more often than annually. Interestingly, 6% of companies surveyed in our research have them under continual review.

Assembling the right team

A strong team is crucial for effective cross-departmental data governance.  

The research identified that almost three-quarters of organisations, particularly in the healthcare industry, are managing data governance in-house. Nearly half of the businesses surveyed had already established dedicated data governance teams to oversee daily operations and mitigate potential security risks.

This strategic investment reflects a proactive approach to enhancing data practices in order to achieve a competitive edge and improve financial performance, and underlines the pivotal role of dedicated teams in upholding data integrity and compliance standards.

Choose data governance investments wisely

With AI changing how businesses are run and increasingly seen as a critical differentiator, nearly three-quarters of our research respondents said data governance is the cornerstone of better AI. Why? Effective data governance is essential for optimising AI capabilities: it improves data quality, automates access control, and strengthens metadata management, data security, and integration.

In addition, almost every business surveyed said it will invest in its data governance approaches in the next two years. This includes investing in high-quality technologies and tools and improving data literacy and skills internally.  

Regarding automation, the research showed that under half currently use automated tools or technologies for data governance; 48% are exploring options, and 15% said they have no plans.

This shows us a clear appetite for data governance investment, particularly in automated tools and new technologies. These investments also reflect a proactive stance in adapting to technological changes and ensuring robust data management practices that support innovation and sustainable growth.

Looking ahead

Ultimately, the research showed that 86% of businesses recognised the growing importance of data governance over the next five years. This indicates that effective data governance will only grow in importance as organisations navigate digital transformation and regulatory demands.

This means businesses must address challenges like integrating governance into operations, improving data quality, ensuring scalability, and keeping pace with evolving technology to mitigate risks such as compliance failures, security breaches, and data integrity issues.

Embracing automation will also streamline data governance processes, allowing organisations to enhance compliance, strengthen security measures, and boost operational efficiency. By investing strategically in these areas, businesses can gain a competitive advantage, thrive in a data-driven landscape, and effectively manage emerging risks.



Aligning AI expectations with AI reality

By Nishant Kumar Behl, Director of Emerging Technologies at OneAdvanced

AI is transforming the way we work now and will continue to make great strides into the future. In many of its forms, it demonstrates exceptional accuracy and a high rate of correct responses. Some people worry that AI is too powerful, with the potential to cause havoc on our socio-political and economic systems. There is a converse narrative, too, that highlights some of the surprising and often comical mistakes that AI can produce, perhaps with the intention of undermining people’s faith in this emerging technology.

This tendency to scrutinise the occasional AI mishap, despite its frequent correct responses, overshadows the technology’s overall reliability, creating an unfairly high expectation of perfection. With expectations set so high, it is no surprise that almost 80% of AI projects fail within a year. Considering all of the hype around AI, and particularly GenAI, over the past few years, it is understandable that users feel short-changed when their extravagant expectations are not met.

We shouldn’t forget that a lot of the most useful software we all rely on in our daily working lives contains bugs. They are an inevitable and completely normal byproduct of developing and writing code. Take a look at the internet, awash with comments, forums, and advice pages to help users deal with bugs in commonly used Apple and Microsoft word processing and spreadsheet apps.

If we can accept blips in our workhorse applications, why are we holding AI to such a high standard? Fear plays a part here. Some may fear AI can do our jobs to a much higher standard than we can, sidelining us. No technology is smarter than humans. As technology gets smarter, it pushes humans to become smarter. When we collaborate with AI, the inputs of humans and artificial intelligence work together, and that’s when magic happens.

AI frees up more human time and lets us be creative, focusing on more fulfilling tasks while the technology does the heavy lifting. But AI is built by humans and will continue to need people asking the right questions and making connections based on our unique human sensibility and perception if it is to become more accurate, useful, and better serve our purpose.

The fear of failing to master AI implementation can be overwhelming for organisations, and in some cases caution is justified. There is a tendency now to expect all technology solutions to have integrated AI functionality for its own sake, which is misguided. Before deciding on any technology, users must first identify and understand the problem they are trying to solve and establish whether AI is indeed the best solution. Don’t be blinded by science, adopting bells and whistles that aren’t going to deliver the best results.

Uncertainty and doubt will continue to revolve around the subject of AI, but people should be reassured that there are many reliable, ethical technology providers developing safe, responsible, compliant AI-powered products. These organisations recognise their responsibility to develop products that offer long-term value rather than generating temporary buzz. By directly engaging with customers to understand their needs and problems, a customer-focused approach helps identify whether AI can effectively address the issues at hand before proceeding down the AI route.

In any organisation, the leader’s job is to develop strategy, ask the right questions, provide direction, and often devise action plans. When it comes to AI, we will all need to adopt that leadership mindset in the future, ensuring we are developing the right strategy, asking insightful questions, and devising an effective action plan that enables the engineers to execute appropriate AI solutions for our needs.

Organisations should not be afraid to experiment with AI solutions and tools, remembering that every successful innovation involves some failure and frustration. Light-bulb moments rarely happen overnight, and we must all adjust our expectations rather than demanding that AI offer a perfect solution. There will be bugs and problems, but the journey of improvement will deliver long-term, sustainable value from AI, from which everyone can benefit.


Nishant Kumar Behl is Director of Emerging Technologies at OneAdvanced, a leading provider of sector-focussed SaaS software, headquartered in the UK.



Machine Learning Interpretability for Enhanced Cyber-Threat Attribution

Source: Finance Derivative

By: Dr. Farshad Badie, Dean of the Faculty of Computer Science and Informatics, Berlin School of Business and Innovation

This editorial explores the crucial role of machine learning (ML) in cyber-threat attribution (CTA) and emphasises the importance of interpretable models for effective attribution.

The Challenge of Cyber-Threat Attribution

Identifying the source of cyberattacks is a complex task due to the tactics employed by threat actors, including:

  • Routing attacks through proxies: Attackers hide their identities by using intermediary servers.
  • Planting false flags: Misleading information is used to divert investigators towards the wrong culprit.
  • Adapting tactics: Threat actors constantly modify their methods to evade detection.

These challenges necessitate accurate and actionable attribution for:

  • Enhanced cybersecurity defences: Understanding attacker strategies enables proactive defence mechanisms.
  • Effective incident response: Swift attribution facilitates containment, damage minimisation, and speedy recovery.
  • Establishing accountability: Identifying attackers deters malicious activities and upholds international norms.

Machine Learning to the Rescue

Traditional machine learning models have laid the foundation, but the evolving cyber threat landscape demands more sophisticated approaches. Deep learning and artificial neural networks hold promise for uncovering hidden patterns and anomalies. However, a key consideration is interpretability.

The Power of Interpretability

Effective attribution requires models that not only deliver precise results but also make them understandable to cybersecurity experts. Interpretability ensures:

  • Transparency: Attribution decisions are not shrouded in complexity but are clear and actionable.
  • Actionable intelligence: Experts can not only detect threats but also understand the “why” behind them.
  • Improved defences: Insights gained from interpretable models inform future defence strategies.

Finding the Right Balance

The ideal model balances accuracy and interpretability. A highly accurate but opaque model hinders understanding, while a readily interpretable but less accurate model provides limited value. Selecting the appropriate model depends on the specific needs of each attribution case.

Interpretability Techniques

Several techniques enhance the interpretability of ML models for cyber-threat attribution:

  • Feature Importance Analysis: Identifies the input data aspects most influential in the model’s decisions, allowing experts to prioritise investigations.
  • Local Interpretability: Explains the model’s predictions for individual instances, revealing why a specific attribution was made.
  • Rule-based Models: Provide clear guidelines for determining the source of cyber threats, promoting transparency and easy understanding.
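As a toy illustration of the rule-based approach above, attribution can be expressed as explicit, human-readable rules over observed indicators, so the rule that fired is itself the explanation. All indicator names and threat groups below are invented for illustration; real attribution relies on far richer evidence:

```python
# Toy rule-based attribution: each rule is an explicit condition over
# observed indicators, so an analyst can see exactly why a given
# attribution was made. Groups and indicators are invented.

RULES = [
    ("Group-A", lambda ev: "proxy-chain" in ev["infrastructure"]
                           and ev["malware_family"] == "X-Loader"),
    ("Group-B", lambda ev: ev["malware_family"] == "Y-RAT"),
]

def attribute(event):
    """Return (group, matched_rule_index), or ('unknown', None) if no rule fires."""
    for i, (group, rule) in enumerate(RULES):
        if rule(event):
            return group, i
    return "unknown", None

event = {"infrastructure": ["proxy-chain", "vps"], "malware_family": "X-Loader"}
group, rule_idx = attribute(event)
print(group, rule_idx)  # the index of the rule that fired is the explanation
```

The transparency comes from the representation itself: unlike a deep model’s weights, the matched rule can be read, challenged and refined by a human analyst.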

Challenges and the Path Forward

The lack of transparency in complex ML models hinders their practical application. Explainable AI, a field dedicated to making models more transparent, holds the key to fostering trust and collaboration between human analysts and machine learning systems. Researchers are continuously refining interpretability techniques, with the ultimate goal of balancing model power with decision-making transparency.


Copyright © 2021 Futures Parity.