

The fundamentals of data sovereignty

Paul Thomas, Head of Engineering, CareScribe

In a technological world increasingly reliant on “The Cloud”, data can have a nasty habit of being sent in many directions with little control or understanding of where it has been or where it ends up. It’s important that we, as consumers, understand how our data is being stored and used.

This is even more relevant to those who rely on Assistive Technology (AT) – a term used for assistive, adaptive, and rehabilitative devices for people with disabilities. This can include everything from captioning and speech-to-text software to wheelchairs and other mobility aids. For these people, using technology may not be optional but rather necessary to live a life without barriers. It’s therefore paramount that they are empowered to make decisions about what technology they use, based on a proper understanding of how it works and where their data ends up.

What is data sovereignty?

Data sovereignty means that data is subject to the laws and governance structures of the nation in which it is processed. Different nations have different laws surrounding the use and storage of data. Those in the UK and EU will likely be familiar with the General Data Protection Regulation (GDPR), and even after Brexit, thanks to the 2021 Adequacy Decision, data is able to flow freely between the UK and EU.

Why should you care?

Perhaps you work with confidential information such as a customer’s personal details, business information or other data which, if leaked, could result in loss of privacy or intellectual property. It’s therefore important to understand how this data will be stored and the laws and governance around its use. This is where Data Sovereignty comes in, as knowing where it is stored means you can understand how it can be used.

Your company, place of work or study may also have rules in place around where data can be stored and processed for these very reasons, so it’s important to check that your data is not being transferred somewhere it shouldn’t be, in breach of those policies.

Using The Cloud

Just about all online software, including Assistive Technology software, may store or process data in the Cloud. The thing to bear in mind is that Cloud use often entails international data transfers, which can create compliance issues for users, as data stored in the Cloud may fall under the jurisdiction of more than one country’s laws.

It’s worth knowing whether or not the software you’re choosing to use involves these international data transfers and which nation’s laws the data is subject to. This will hopefully ensure you feel empowered with the knowledge of where your data is being kept and what rules your Assistive Technology supplier is abiding by.
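
By way of illustration (this is not something the article prescribes), most cloud providers let you choose, and verify, the region where data is stored. The short sketch below uses Python and Amazon S3 with an invented bucket name, pinning storage to a UK region and then confirming where the data will reside – and therefore which country’s laws apply.

```python
import boto3  # AWS SDK for Python

# Create an S3 client pinned to a UK region (the bucket name is invented for this example).
s3 = boto3.client("s3", region_name="eu-west-2")

s3.create_bucket(
    Bucket="example-assistive-tech-data",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-2"},
)

# Confirm where the data will physically be stored, and therefore which jurisdiction it sits in.
location = s3.get_bucket_location(Bucket="example-assistive-tech-data")
print(location["LocationConstraint"])  # expected: eu-west-2
```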

What to look for

We believe that any providers of Assistive Technology you are using should provide transparent information regarding data sovereignty. Here at CareScribe, we store and process your data within the EU and so abide by EU (and UK) data laws. We never leverage your data elsewhere because we make tech for those who need it most, with the aim of levelling the playing field. It’s at the core of who we are.

The most important thing to remember is that this information isn’t meant to scare people, but rather to empower Assistive Technology users with an understanding of how their tech works and which laws it abides by.



Time is running out: NHS and their digital evolution journey

By Nej Gakenyi, CEO and Founder of GRM Digital

Many businesses have embarked on their digital evolution journey, transforming their technology offerings to upgrade their digital services in an effective and user-friendly way. Whilst this might be very successful for smaller and newer businesses, what does it mean for large corporations with long-standing legacy infrastructure? The UK government recently pledged £6bn of new funding for the NHS, and this investment, if executed properly, could revolutionise the UK public healthcare sector.

The NHS has always been a leader in technology for medical purposes, but where it has fallen down is in the streamlining of patient data, information and needs, which can erode trust and faith in the robustness of the healthcare system. The primary objective of additional funding must therefore be to implement advanced data and digital technologies, improving the digital health of the NHS and the overall health of the UK population, as well as revitalising both management efficiency and working practices.

Providing digital care

Digitalisation falls into two categories when it comes to the NHS – digitising traditionally ‘physical’ services, such as offering remote appointments and keeping records electronically rather than on paper, and a greater reliance on more innovative approaches driven by advances in technology. It is common knowledge that electronic services differ between GP practices across the country, and having a drastically good or bad experience that depends solely on a geographical lottery contradicts the very purpose of offering an overarching healthcare provision to society at large.

By streamlining services and investing in proper infrastructure, a level playing field can be created, which is vital when it comes to patients accessing both the care they need and their own personal history of appointments, GP interactions, diagnoses and medications. Through this approach, the NHS can focus on creating world-leading care and its provision, and potentially see waiting lists decrease thanks to the effective diagnosis and management enabled by slick, efficient technology.

This is especially important when looking at personalised health support and developing a system that enables patients to receive care wherever they are and helps them monitor and manage long-term health conditions independently. This, alongside ensuring that technology and data collection support improvements in both individual and population-level patient care, can only serve to streamline NHS efforts and create positive outcomes for both patients and the workforce.

Revolutionising patient experiences

A robust level of trust is critical to guaranteeing the success of any business or provision. If technology fails, so does the faith the customer or consumer has in the technology designed to improve outcomes for them. An individual will always have some semblance of responsibility and ownership over their life, well-being and health. Still, all of these key pillars can only stand strong when there is infrastructure in place to help drive positive results. Whilst a digital-first approach may risk excluding some groups of individuals, technology solutions can empower people to take control of their healthcare, enabling the patient and the NHS to work together.

Tandem efforts between humans and technology

Technology must work in tandem with the workforce for it to be effective. This means the NHS workforce must be digitally savvy and keep patient-centred care front and centre of all operations. Alongside any digital transformation the NHS adopts to improve patient outcomes comes the need to assess current and future capability and capacity challenges, and to build a workforce with the right skills to help shape an NHS that is fit for purpose.

This is just the beginning. The new investment and funding being allocated to the NHS is a starting point, but for NHS decision-makers to ensure real benefits for patients, more still needs to be done. Effective digital evolution holds the key. Once the NHS has fully harnessed the power of new and evolving technologies to change patient experiences throughout the UK, with consistent communication and care, this will set the UK apart and mark the NHS as a driving example of accessible, digital healthcare.



Ethical AI: Preparing Your Organisation for the Future of AI

Rosemary J Thomas, Senior Technical Researcher, AI Labs, Version 1

Artificial intelligence is changing the world, generating countless new opportunities for organisations and individuals. However, it also poses several known ethical and safety risks, such as bias, discrimination and privacy violations, alongside its potential to negatively impact society, well-being, and nature. It is therefore fundamental that this groundbreaking technology is approached with an ethical mindset, adapting practices to make sure it is used in a responsible, trustworthy, and beneficial way.

To achieve this, first we need to understand what an ethical AI mindset is, why it needs to be central, and how we can establish ethical principles and direct behavioural changes across an organisation. We must then develop a plan to steer ethical AI from within and be prepared to take liability for the outcomes of any AI system.

What is an ethical AI mindset?

An ethical AI mindset is one that acknowledges the technology’s influence on people, society, and the world, and understands its potential consequences. It is based on the perception that AI is a dominant force that can sculpt the future of humankind. An ethical AI mindset ensures AI is allied with human principles and goals, and that it is used to support the common good and the ethical development of all.

It is not only about preventing or moderating the adverse effects of AI, but also about exploiting its immense capability and prospects. This includes developing and employing AI systems that are ethical, safe, fair, transparent, responsible, and inclusive, and that respect human values, autonomy, and diversity. It also means ensuring that AI is open, reasonably priced, and useful for everyone – especially the most susceptible and marginalised clusters in our society.

Why you need an ethical AI mindset

Functioning with an ethical AI mindset is essential[1], not only because it is the right thing to do, but also because it is expected, with research showing customers are far less likely to buy from unethical establishments. As AI evolves, the expectation for businesses to use it responsibly will continue to grow.

Adopting an ethical AI mindset can also help in adhering to current, and continuously developing, regulation and guidelines. Governing bodies around the world are establishing numerous frameworks and standards to make sure AI is used in an ethical and safe way and, by creating an ethical AI mindset, we can ensure AI systems meet these requirements, and prevent any prospective fines, penalties, or court cases.

Additionally, the right mindset will promote the development of AI systems that are more helpful, competent, and pioneering. By studying the ethical and social dimensions of AI, we can invent systems that are more aligned with the needs, choices, and principles of our customers and stakeholders, and can provide moral solutions and enhanced user experiences.

Ethical AI as the business differentiator

Fostering an ethical AI mindset is not a matter of singular choice or accountability; it is a united, organisational undertaking. To integrate an ethical culture and steer behavioural changes across the business, we need to take a universal and methodical approach.

It is important that the entire workforce, including executives and leadership, are educated on the need for AI ethics and its use as a business differentiator[2]. To achieve this, consider taking a mixed approach to increase awareness across the company, using mediums such as webinars, newsletters, podcasts, blogs, or social media. For example, your company website can be used to share significant examples, case studies, best practices, and lessons learned from around the globe where AI practices have effectively been implemented. In addition, guest sessions with researchers, consultants, or even collaborations with academic research institutions can help to communicate insights and guidance on AI ethics and showcase it as a business differentiator.

It is also essential to take responsibility for the consequences of any AI system that is developed for practical applications, regardless of where an organisation or product sits in the value chain. This will help build credibility and transparency with stakeholders, customers, and the public.

Evaluating ethics in AI

We cannot monitor or manage what we cannot review, which is why we must establish a method of evaluating ethics in AI. There are a number of tools and systems that can be used to steer ethical AI, supported by ethical AI frameworks, authority structures and the Ethics Canvas.

An ethical AI framework is a group of values and principles that acts as a handbook for your organisation’s use of AI. This can be adopted, adapted, or built to suit your organisation’s own goals and values, with stakeholders involved in its creation. Examples include the UK Government’s Ethical AI Framework[3] and the Information Commissioner’s Office’s AI and data protection risk toolkit[4], which covers ethical risks across the lifecycle stages of an AI system – from business requirements and design through to deployment and monitoring.

An ethical AI authority structure is a group of roles, obligations and methods that make sure your ethical AI framework is followed and reviewed. You can establish an ethical AI authority structure that covers several aspects and degrees of your organisation and delegates clear obligations to each stakeholder.

The Ethics Canvas can be used in AI engagements to help build AI systems with ethics integrated into development. It helps teams identify potential ethical issues that could arise from the use of AI and develop guidelines to avoid them. It also promotes transparency by providing clear explanations of how the technology works and how decisions are made, and it can further increase stakeholder engagement by gathering input and feedback on the ethical aspects of the AI project. The canvas helps to structure risk assessment and can serve as a communication tool to convey the organisation’s commitment to ethical AI practices.

Ethical AI implications

Any innovation process, whether it involves AI or not, can be marred by a fear of failure and the desire to succeed at the first attempt. But failures should be regarded as lessons and used to improve ethical experiences in AI.

To ensure AI is being used responsibly, we need to identify what ethics means in the context of our business operations. Once this has been established, we can personalise our message to the target stakeholders, staying within our own definition of ethics and including the use of AI within our organisation’s wider purpose, mission, and vision.

In doing so, we can draw more attention towards the need for responsible use policies and an ethical approach to AI, which will be increasingly important as the capabilities of AI evolve, and its prevalence within businesses continues to grow.


[1] https://www.mckinsey.com/featured-insights/in-the-balance/from-principles-to-practice-putting-ai-ethics-into-action

[2] https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2023.1258721/full

[3] https://www.gov.uk/guidance/understanding-artificial-intelligence-ethics-and-safety

[4] https://ico.org.uk/for-organisations/uk-gdpr-guidance-and-resources/artificial-intelligence/guidance-on-ai-and-data-protection/ai-and-data-protection-risk-toolkit/



Driving Business Transformation Through AI Adoption – A Roadmap for 2024

Author: Edward Funnekotter, Chief Architect and AI Officer at Solace

From the development of new products and services to the establishment of competitive advantages, artificial intelligence (AI) can fundamentally reshape business operations across industries. However, each organisation is unique, and as such, navigating the complexities of AI while applying the technology in an efficient and effective way can be a challenge.

To unlock the transformational potential of AI in 2024 and integrate it into business operations in a seamless and productive way, organisations should seek to follow these five essential steps:

  • Prioritise Data Quality and Quantity

The usefulness of AI models is directly correlated with the quantity and quality of the data used to train them, necessitating effective integration solutions and strong data governance practices. Organisations should seek to implement tools that provide a wealth of clean, accessible and high-quality data that can power quality AI.

Equally, AI systems cannot be effective if an organisation has data silos. Silos impede AI’s ability to digest meaningful data and then provide the insights needed to drive business transformation. Breaking down data silos needs to be a business priority, with investment in effective data management and the application of effective data integration solutions.
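
As a purely illustrative sketch (not part of the original article), the Python below shows the kind of automated data-quality gate that such governance practices might include; the dataset and thresholds are invented for the example.

```python
import pandas as pd


def quality_report(df: pd.DataFrame) -> dict:
    """Compute simple data-quality metrics that can gate an AI training run."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "overall_missing_ratio": float(df.isna().mean().mean()),
    }


# Hypothetical customer records pulled together from previously siloed systems.
customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "region": ["UK", "EU", "EU", None],
    "lifetime_value": [120.0, 80.5, 80.5, None],
})

report = quality_report(customers)
print(report)

# Fail fast if the data is too sparse or too duplicated to train a useful model on.
if report["overall_missing_ratio"] > 0.05 or report["duplicate_rows"] > 0:
    raise ValueError(f"Data-quality gate failed: {report}")
```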

  • Develop your own unique AI platform

The development of AI applications can be a laborious process, impacting the value that businesses gain from them in the immediate term. This can be expedited by platform engineering, which modernises enterprise software delivery to facilitate digital transformation, optimising the developer experience and accelerating product teams’ ability to deliver customer value. Platform engineering offers developers pre-configured tools, pre-built components and automated infrastructure management, freeing them up to tackle their main objective: building innovative AI solutions faster.

While the development of AI applications that can help streamline infrastructure, automate tasks, and provide pre-built components for developers is the end goal, it’s only possible if the ability to design and develop is there in the first place. Gartner’s prediction that Platform Engineering will come of age in 2024 is a particularly promising update.

  • Put business objectives at the heart of AI adoption – can AI deliver?

Any significant business change needs to be managed strategically, with a clear indication of the aims and benefits it will bring. While a degree of experimentation is always necessary to drive business growth, this shouldn’t come at the expense of operational efficiency.

Before onboarding AI technologies, look internally at the key challenges your business is facing and ask: “how can AI help to address this?” You may wish to enhance the customer experience, streamline internal processes or use AI systems to optimise internal decision-making. Be sure the application of AI is going to help, not hinder, you on this journey.

Also remember that AI remains in its infancy and cannot be relied upon as a silver bullet for all operational challenges. Aim to build a sufficient base knowledge of AI capabilities today, and ensure these are contextualised within your own business requirements. This ensures that AI investments aren’t made prematurely, incurring unnecessary cost.

  • Don’t be limited by legacy systems

Owing to the complex mix of legacy and/or siloed systems that organisations employ, they may be restricted in their ability to use real-time and AI-driven operations to drive business value. For example, IDC found that only 12% of organisations connect customer data across departments.

Amidst the ‘AI data rush’ there will be a greater need for event-driven integration; however, only an enterprise architecture pattern will ensure new and legacy systems are able to work in tandem. Without this, organisations will be prevented from offering seamless, real-time digital experiences that link events across departments, locations, on-premises systems and IoT devices, whether in a cloud or even a multi-cloud environment.

  • Leverage real-time technology

Keeping up with the real-time demands of AI can pose a challenge for the legacy data architectures used by many organisations. Event mesh technology – an approach to distributed networking that enables real-time data sharing and processing – is a proven way of reducing these issues. By applying event-driven architecture (EDA), organisations can unlock the potential of real-time AI, with automated actions and informed decision-making driven by relevant insights.

By applying AI in this way, businesses can offer stronger, more personalised experiences – including the delivery of specialised offers, real-time recommendations and tailored support based on customer requirements. An example of this is predictive maintenance, in which AI analyses and anticipates future problems or business-critical failures before they affect operations, and immediately dedicates the correct resources to fix the issue. By implementing EDA as a ‘central nervous system’ for your data, not only is real-time AI possible, but adding new AI agents becomes significantly easier.
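
As a hedged illustration of the pattern (not Solace’s API or any specific event mesh product), the Python sketch below wires a hypothetical predictive-maintenance agent to a minimal in-process publish/subscribe bus; the topic name and vibration threshold are invented for the example.

```python
from collections import defaultdict
from typing import Callable, Dict, List


class EventBus:
    """Tiny in-process stand-in for an event mesh: topics map to subscriber callbacks."""

    def __init__(self) -> None:
        self._subscribers: Dict[str, List[Callable[[dict], None]]] = defaultdict(list)

    def subscribe(self, topic: str, handler: Callable[[dict], None]) -> None:
        self._subscribers[topic].append(handler)

    def publish(self, topic: str, event: dict) -> None:
        # Every subscriber to the topic sees the event as soon as it is published.
        for handler in self._subscribers[topic]:
            handler(event)


bus = EventBus()


# Hypothetical predictive-maintenance agent reacting to sensor events in real time.
def maintenance_agent(event: dict) -> None:
    if event["vibration_mm_s"] > 7.0:  # invented threshold for illustration
        print(f"Schedule maintenance for {event['machine_id']} before it fails")


bus.subscribe("factory/sensor-readings", maintenance_agent)

# A producer publishes readings as they happen; subscribed agents act on them immediately.
bus.publish("factory/sensor-readings", {"machine_id": "pump-7", "vibration_mm_s": 9.2})
```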

Ultimately, AI adoption needs to be strategic, avoiding chasing trends and focusing instead on how and where the technology can deliver true business value. By following the steps above, organisations can ensure they are leveraging the full transformative benefit of AI and driving business efficiency and growth in a data-driven era.

AI can be a highly effective tool. However, its success depends on organisations applying it strategically, to meet clearly defined and specific business goals.
