
Why dynamic authorisation is the key to unlocking true zero trust security

Source: Finance Derivative

by Gal Helemski, co-founder and CTO, PlainID

In little more than a decade, ‘zero trust’ has gone from an industry buzzword to the cornerstone of every cybersecurity programme worth its salt. The concept is simple but effective – trust makes you vulnerable, so no user or device should be trusted by default – and in today’s fast-paced, highly data-driven business world, it makes a lot of sense. But like so many things in life, the true effectiveness of zero trust lies in its implementation, and it is here that there remains room for improvement.

Trust must be earned, not given

Security programmes based on zero trust are designed to remove some of the key assumptions that make alternative approaches comparatively weak. Perhaps the best and most common example of such an assumption is that if someone logs into a network with certified user credentials, they are indeed who they claim to be (and will therefore operate responsibly at all times). As many organisations find out to their detriment, user credentials are all too easy to steal or lose, meaning the longer a set of credentials goes unchallenged, the higher the chance it has become compromised by someone with malicious intent.

Security should match the way we work today

With the rise of remote working over the last few years making modern workplaces more fragmented than ever before, zero trust has become increasingly important. This is because the ‘walled garden’ approach that traditional perimeter security programmes rely on is no longer applicable to most organisations, particularly those with large, highly dispersed workforces.

Instead, zero trust architecture centres on one key decision – whether to grant, deny or revoke access to a resource, each and every time a user requests it. While there are a variety of ways to implement this, the U.S. National Institute of Standards and Technology (NIST) has set out a useful framework that emphasises that zero trust should never be an exclusive agent of the network alone. Instead, for zero trust to be fully implemented, it must apply three levels of access control:

  1. Access to the network
  2. Access to applications
  3. Access to intra-application assets.

Without this kind of approach, true zero trust protection simply can’t be achieved. Why? Because of the dynamic nature of risk. Today’s digital enterprises are driven by intricate environments containing hundreds of applications, numerous different systems, hybrid legacy and “cloudified,” microservices-driven infrastructures. Such environments support hundreds — or even thousands — of continually evolving roles, which require the constant creation of new access scenarios.

Zero trust technology is still maturing

The good news for security professionals is that there’s an ever-growing range of powerful technologies now available that address some of the basic tenets of zero trust, particularly around advanced authentication and network access control.

However, these technologies still do not address each of the three critical levels of zero trust access control. In fact, the current focus of available zero trust offerings is primarily on the network and does not include adequate reference to, nor support for, zero trust at the application level, or within applications themselves.

For instance, the solutions that are most heavily touted as supporting zero trust include gateway integration and segregation, secure access service edge (SASE), and secure SD-WAN. The problem is, these are all focused on network-centric zero trust when what’s really needed is a solution that addresses each of the three access control levels in turn.

Dynamic authorisation holds the key

For many, the solution is dynamic authorisation – an advanced approach that grants fine-grained access to resources, including data assets, application resources and any other asset, based on the specific real-time context of the session.

Dynamic authorisation completes zero trust by powering two of the main processes that are vital to its full and complete realisation: runtime authorisation enforcement and high levels of granularity. When a user attempts to access a network, application or assets within an application, this initiates the evaluation and approval process that focuses on a range of key attributes, including:

  • User level attributes – such as current certification level, role and responsibilities, and whether they can access confidential and personally identifiable information (PII)
  • Asset attributes – such as data classification, location assignments and any relevant metadata
  • The location that a user is authenticating from – including whether from an internal or an external system
  • The number of authentication factors being used – i.e. with single, two factor or multifactor authentication
  • Additional external attributes – such as the risk level of the system and more

The policy engine evaluates each of these and all other relevant attributes before making a real-time decision on whether to grant access. Furthermore, each time access is attempted, a new decision is made. This process is designed to be extremely granular, evaluating the attributes as they stand at that specific point in time, as well as the real-time context and environment, rather than relying on attributes predefined by the application.
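The evaluation loop described above can be sketched as a small attribute-based policy check. This is a minimal illustration only – the attribute names, thresholds and `decide` function are hypothetical, not drawn from any particular product or from PlainID's API:

```python
from dataclasses import dataclass

# Hypothetical attribute bundle gathered at the moment of the access attempt.
@dataclass
class AccessRequest:
    user_role: str
    certification_level: int   # illustrative: 0 = none, higher = more certified
    mfa_factors: int           # 1 = password only, 2+ = multifactor
    network: str               # "internal" or "external"
    asset_classification: str  # e.g. "public", "confidential", "pii"

def decide(req: AccessRequest) -> bool:
    """Re-evaluated on every access attempt -- nothing is cached or pre-granted."""
    if req.asset_classification == "pii":
        # PII requires multifactor authentication and a certified role,
        # regardless of where the user is connecting from.
        return req.mfa_factors >= 2 and req.certification_level >= 2
    if req.network == "external":
        # External sessions need at least two authentication factors.
        return req.mfa_factors >= 2
    # Internal sessions: public assets are open, others need some certification.
    return req.asset_classification == "public" or req.certification_level >= 1

# External, single-factor request is denied even for a public asset.
print(decide(AccessRequest("analyst", 1, 1, "external", "public")))  # False
```

The point of the sketch is that the decision is a pure function of the current attributes: change any input (an extra authentication factor, a reclassified asset) and the next request is re-evaluated from scratch.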

The business landscape is rapidly evolving, which means cybersecurity must as well. Many organisations have already recognised the importance of a zero trust approach for keeping sensitive data safe in increasingly fragmented working environments. However, the way it is implemented is critical to overall effectiveness. By using dynamic authorisation to address each of the three levels of zero trust access control (access to the network, applications and intra-application assets), business leaders can be confident that users accessing sensitive data are not only who they claim to be, but that they also have the right to do so.


Conflicting with compliance: How the finance sector is struggling to implement GenAI

By James Sherlow, Systems Engineering Director, EMEA, for Cequence Security

Generative AI (GenAI) has multiple applications in the finance sector, from product development to customer relations to marketing and sales. In fact, McKinsey estimates that GenAI has the potential to improve operating profits in the finance sector by 9-15%, and in the banking sector productivity gains could be between 3-5% of annual revenues. It suggests AI tools could be used to boost customer liaison, with AI integrated through APIs to give real-time recommendations either autonomously or via CSRs; to inform decision-making and expedite day-to-day tasks for employees; and to decrease risk by monitoring for fraud or elevated instances of risk.

However, McKinsey also warns of inhibitors to adoption in the sector. These include the level of regulation applicable to different processes – fairly low for customer relations but high for credit risk scoring, for example – and the data used, some of which is in the public domain but some of which comprises highly sensitive personally identifiable information (PII). If these issues can be overcome, the analyst estimates GenAI could more than double the application of expertise to decision-making, planning and creative tasks, from 25% without the technology to 56% with it.

Hamstrung by regulations

Clearly the business use cases are there but, unlike other sectors, finance is currently being hamstrung by regulations that have yet to catch up with the AI revolution. Unlike the EU, which approved the AI Act in March, the UK has no plans to regulate the technology; instead, it intends to promote guidelines. The UK financial authorities – the Bank of England, the PRA and the FCA – have been canvassing the market on what these should look like since October 2022, publishing the results (FS2/23 – AI and Machine Learning) a year later, which showed a strong demand for harmonisation with the likes of the AI Act as well as NIST’s AI Risk Management Framework.

Right now, this means financial providers find themselves in regulatory limbo. If we look at cyber security, for instance, firms are being presented with GenAI-enabled solutions that can assist them with incident detection and response, but they’re not able to utilise that functionality because it contravenes compliance requirements. Decision-making processes are a key example, as these must be made by a human, tracked and audited and, while the decision-making capabilities of GenAI may be on a par, accountability remains a grey area. Consequently, many firms are erring on the side of caution and choosing to deactivate AI functionality within their security solutions.

In fact, a recent EY report found one in five financial services leaders did not think their organisation was well-positioned to take advantage of the potential benefits. Much will depend on how easily the technology can be integrated into existing frameworks, although the Banking on AI: Financial Services Harnesses Generative AI for Security and Service report cautions this may take three to five years. That’s a long time in the world of GenAI, which has already come a long way since it burst on to the market 18 months ago.

Malicious AI

The danger is that while the sector drags its heels, threat actors will show no such qualms and will be quick to capitalise on the technology to launch attacks. FS2/23 makes the point that GenAI could see an increase in money laundering and fraud through the use of deep fakes, for instance, and sophisticated phishing campaigns. We’re still in the learning phase, but as the months tick by the expectation is that we will see high-volume, self-learning attacks by the end of the year. These will be on an unprecedented scale because GenAI will lower the technological barrier to entry, enabling new threat actors to enter the fray.

Simply blocking attacks will no longer be a sufficient form of defence, because GenAI will quickly regroup or pivot the attack automatically without the need for additional resources. If we look at how APIs – which are intrinsic to customer services and open banking, for instance – are currently protected, the emphasis has been on detection and blocking, but going forward we can expect deceptive responses to play a far greater role. These frustrate and exhaust the resources of the attacker, making attacks cost-prohibitive to sustain.
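As a rough sketch of the deception idea, the handler below returns a plausible but fake payload, with a success status, when a request trips a detection heuristic, rather than rejecting it outright. Everything here is hypothetical – the detector, field names and thresholds are invented for illustration, and a real deployment would sit behind an API gateway with far more sophisticated behavioural detection:

```python
import json
import random

def is_malicious(request: dict) -> bool:
    # Crude volumetric heuristic standing in for a behavioural detection model.
    return request.get("rate_per_minute", 0) > 100

def real_lookup(request: dict) -> str:
    # Placeholder for the genuine backend call.
    return json.dumps({"account_id": request.get("account_id"), "balance": 1234.56})

def handle(request: dict) -> dict:
    if not is_malicious(request):
        return {"status": 200, "body": real_lookup(request)}
    # Deceptive response: fabricated data behind a success code, so the
    # attacker keeps spending resources instead of pivoting after a block.
    fake_account = {"account_id": random.randint(10**9, 10**10 - 1), "balance": 0.0}
    return {"status": 200, "body": json.dumps(fake_account)}

# Both callers see HTTP 200; only the content differs.
print(handle({"account_id": 42, "rate_per_minute": 500})["status"])  # 200
```

The design choice is the interesting part: a hard block gives the attacker an immediate, free signal to adapt, whereas indistinguishable fake responses force them to burn time and compute validating worthless data.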

So how should the sector look to embrace AI given the current state of regulatory flux? As with any digital transformation project, there needs to be oversight of how AI will be used within the business, with a working group tasked to develop an AI framework. In addition to NIST, there are a number of security standards that can help here, such as ISO 22989, ISO 23053, ISO 23984 and ISO 42001, as well as the oversight framework set out in DORA (the Digital Operational Resilience Act) for third-party providers. The framework should encompass the tools the firm has with AI functionality, their possible application in terms of use cases, and the risks associated with these, as well as how it will mitigate any areas of high risk.

Taking a proactive approach makes far more sense than suspending the use of AI which effectively places firms at the mercy of adversaries who will be quick to take advantage of the technology. These are tumultuous times and we can certainly expect AI to rewrite the rulebook when it comes to attack and defence. But firms must get to grips with how they can integrate the technology rather than electing to switch it off and continue as usual.


Recognising the value of protecting intellectual property early builds strong foundation for innovators

Innovation Manager at InnoScot Health, Fiona Schaefer analyses an essential facet of developing ideas into innovations

Helping the NHS to innovate remains a key priority during this period of recovery and reform. Even within the current cash-strapped climate, there is the opportunity to maximise the first-hand experience of the healthcare workforce and its knowledge of where new ideas are needed most.

Entrepreneurial-minded, creative staff from any discipline or activity are often best placed to recognise areas for improvement – the reason why a significant number of solutions come from, and are best developed with, health and social care staff.

NHS Scotland is a powerful driver of innovation, but to truly harness the opportunities which new ideas offer for development and commercialisation, the knowledge and intellectual property (IP) underpinning them needs to be protected. That vital know-how and other intangible assets – holding appropriate contracts for example – are key from an early stage.

Medical devices can take years to develop and gain regulatory approval, so from the outset of an idea’s development – and before revenue is generated – filing for IP protection and having confidentiality agreements in place are ways to start creating valuable assets. This is especially important when applying for patent protection because that option is only available when ideas have not been discussed or presented to external parties prior to application.

Without taking that critical initial step to protect IP, anyone – without your permission – could copy the idea, so anything of worth should be protected as soon as possible, making for a clear competitive advantage and ownership in the same sense as possessing physical property.

The common theme is that to be successful – and ultimately support the commercialisation of ideas that will improve patient care and outcomes – the idea must be novel, better, quicker, or more efficient than existing options. Furthermore, to turn it into a sound proposition worth investing in, it must also be technically and financially feasible. It isn’t enough to just be new – the best innovations offer tangible benefits to patient outcomes and staff working practices.

Of course, even more so in the current climate of financial constraints, the key question of ‘Who will pay for your new product or service?’ needs to be considered up front as well.

Whilst development of a strong IP portfolio requires investment and dedicated expertise, when done well and at the appropriate time it is resource well spent, offering a level of security whilst developing an asset which can be built upon and traded. There are various ways commercialisation can progress and, whilst not all efforts will be successful, intellectual property is an asset which can be licensed or sold to others, offering a range of opportunities to secure a good return.

In my experience, however, many organisations including the NHS are still missing the opportunity to recognise and protect their knowledge assets and intellectual property early in the innovation pathway. This is partly due to lack of understanding – sometimes one aspect is carefully protected, whilst another is entirely neglected. In other cases, the desire to accelerate to the next stage of product development means such important foundational steps are not given the attention required for long-term success.

Good IP management goes beyond formally protecting the knowledge assets associated with a project, e.g. by patenting or design registration, however. When considered with other intangible assets such as access to datasets, clinical trial results, standard operating procedures, quality management systems, and regulatory approvals, it is the combination which will be key to success.

Early securing of IP protection or recognition of IP rights in a collaboration agreement, demonstrates foresight and business acumen. Later on, it can significantly boost negotiating power with a licensing partner or build investor confidence.

Conversely, omissions in IP protection or suitable contracts can be damaging, potentially derailing years of product development and exposing organisations to legal challenges and other risks. Failing to protect a promising idea can also mean commercial opportunities are missed, thus leading to your IP being undervalued.

Ideas are evaluated by formal NHS Scotland partner InnoScot Health in the same way whether they are big or small, a product, service, or new, innovative approach to a care pathway.

We encourage and enable all 160,000 NHS Scotland staff, regardless of role or location, to come forward with their ideas, giving them the advice and support they need to maximise their potential benefits.

Protecting the IP rights of the health service is one of the cornerstones of InnoScot Health’s service offering. In fact, to date we have protected over 255 NHS Scotland innovations. Recently these have included design registration and trademarks for the SARUS® hood and trademarks for SCRAM®, building and protecting a recognised range of bags with innovative, intuitive layouts. Spin outs such as Aurum Biosciences meanwhile have patents underpinning their novel therapeutics and diagnostics.

We assist in managing this IP to ensure a return on investment for the health service. Any revenue generated from commercialising ideas and innovations from healthcare professionals is shared with the innovators and the health board through our agreements with them and the revenue sharing scheme detailed in health board IP and innovation policies.

Fundamentally, we believe that it is vital to harness the value of expertise and creativity of staff with a well-considered approach to protecting IP and knowledge input to projects from the start.


Time is running out: NHS and their digital evolution journey

By Nej Gakenyi, CEO and Founder of GRM Digital

Many businesses have embarked on their digital evolution journey, transforming their technology offerings to upgrade their digital services in an effective and user-friendly way. Whilst this might be very successful for smaller and newer businesses, what does it mean for large corporations with long-standing legacy infrastructure? Recently, the UK government pledged £6bn of new funding for the NHS – investment that, if executed properly, could revolutionise the UK public healthcare sector.

The NHS has always been a leader in terms of technology for medical purposes, but where it has fallen down is in the streamlining of patient data, information and needs, which can lead to a breakdown in trust and in faith that the healthcare system is a robust one. Therefore, the primary objective of additional funding must be to implement advanced data and digital technologies to improve the digital health of the NHS and the overall health of the UK population, as well as revitalise both management efficiency and working practices.

Providing digital care

Digitalisation falls into two categories when it comes to the NHS – digitising traditionally ‘physical’ services, such as offering remote appointments and keeping electronic records, and a greater reliance on more innovative approaches driven by advances in technology. It is common knowledge that electronic services differ across GP practices around the country, and having a drastically good or bad experience depend solely on a geographical lottery contradicts the very purpose of offering overarching healthcare provision to society at large.

By streamlining services and investing in proper infrastructure, a level playing field can be created, which is vital when it comes to patients accessing both the care they need and their own personal history of appointments, GP interactions, diagnoses and medications. Through this approach, the NHS can focus on creating and delivering world-leading care, and potentially see waiting lists decrease thanks to the effective diagnosis and management enabled by slick and efficient technology.

This is especially important when looking at personalised health support and developing a system that enables patients to receive care wherever they are and helps them monitor and manage long-term health conditions independently. This, alongside ensuring that technology and data collection support improvements in both individual and population-level patient care, can only serve to streamline NHS efforts and create positive outcomes for both patients and the workforce.

Revolutionising patient experiences

A robust level of trust is critical to guaranteeing the success of any business or provision. If technology fails, so does the faith the customer or consumer has in the technology being designed to improve outcomes for them. An individual will always have some semblance of responsibility and ownership over their life, well-being and health. Still, all of these key pillars can only stand strong when there is infrastructure in place to help drive positive results. Whilst there may be risks of excluding some groups of individuals with a digital-first approach, technology solutions can empower people to take control of their healthcare, enabling the patient and NHS to work together.

Tandem efforts between humans and technology

Technology must work in tandem with a workforce for it to be effective. This means the NHS workforce must be digitally savvy and have patient-centred care at the front and centre of all operations. Alongside any digital transformation the NHS adopts to improve patient outcomes, comes the need to assess current and future capability and capacity challenges, and build a workforce with the right skills to help shape an NHS that is fit for purpose.

This is just the beginning. More investment and funding being allocated to the NHS is the starting point, but for NHS decision-makers to ensure real benefits for patients, more still needs to be done. Effective digital evolution holds the key. Once the NHS has fully harnessed the power of new and evolving technologies to change patient experiences throughout the UK, with consistent communication and care, this will set the UK apart and mark the NHS as a driving example of accessible, digital healthcare.
