Technology

How to transform public services and eliminate IT debt

Paul Liptrot, Partner – UK Government & Public Sector at Kyndryl

Any organisation embarking on a digital transformation project will know that there are significant challenges to overcome. While the benefits at the end are impossible to ignore, the barriers to success are complex and wide-ranging. Add legacy public sector applications into the mix, and you open up a whole new layer of complexity.

Whether it’s cost-cutting pressures or competing interests holding back the process, product development and delivery can be significantly harder than they are in the private sector. Not only that, but across Whitehall it is broadly recognised that there’s a significant amount of technical debt to deal with too, meaning many Departments are typically starting from a place far behind that of their commercial counterparts.

Technical debt has accumulated in the public sector over several years (even decades), and for many reasons. Predominantly, ongoing budget constraints and loss of resources have made it easy to put off long-term, labour-intensive maintenance and modernisation projects in favour of feel-good ‘quick wins’ that can be delivered, and their impact felt, quickly.

Furthermore, the legacy, proprietary platforms on which many public sector systems were developed and still operate have undergone significant customisation and bespoke development over many years. Not only do these platforms support mission-critical processes, making it hard to get downtime approved for improvements, but they are notoriously tricky (read: time-consuming) to unpick and migrate to newer, more modern platforms and infrastructures.

Too often, both public and private sector organisations have had a superficial opinion of what it takes to address tech debt, focusing on dealing with the often out-of-support components rather than the underlying problems. As a result, tech debt is only getting worse, reducing workforce productivity, inhibiting innovation and keeping public services stagnant through a lack of agility.

For public sector organisations looking to address the problem, the first step must be to assess and understand which services are going to be needed to deliver current policy and customer outcome objectives. With this clarity, organisations can then design their Target Operating Model (TOM) not only to support these services but also to put in place processes and technologies agile enough to adopt and adapt for the future. Understanding the TOM will then allow CXOs and their teams to make informed decisions on which approaches to take, which technologies to retain, which to replace, and where to adopt public and private cloud services within an overall hybrid environment.

With a TOM in place, priorities can be established, required investments can be understood and organisation teams and squad structures can be established within a governed program to remove tech debt. Informed decisions can be taken per business service or application with options typically including:

Retire and start new

Evaluate whether it is even worth modernising the underlying technologies. Would it be easier or more beneficial to instead migrate the business processes and data to something else? This could be a modern SaaS platform or a lighter-weight, micro-service-based replacement application. Where suitable SaaS products exist, moving legacy services to these transfers the risks and costs of developing and maintaining underlying application software and infrastructure to the SaaS provider.

Re-platform

Where SaaS isn’t an option, building products on PaaS can reduce the risk of tech debt by moving responsibility for maintaining, patching and updating the underlying infrastructure, operating systems and middleware to the cloud provider. Because PaaS migrations involve refactoring legacy applications to fit the platform offered by a service provider, they can be time-consuming. However, once complete, they take maintenance work away from programmers, freeing them up to deploy new applications faster.

“Lift and shift”

One of the easier and least expensive ways to migrate an existing workload to the cloud is an IaaS migration, sometimes called a “lift and shift”: the workload is moved in its current form, with minimal changes, and run on cloud infrastructure instead. This is particularly useful where the technical debt resides in hosting locations and physical hardware, but it typically isn’t the answer where the debt relates to applications or software. In those cases, a lift and shift can still be useful as part of a longer-term program, buying more time to modernise the applications once they are in the public cloud and freed from the immediate risks associated with ageing hardware or building closures.

Modernise in-situ

You could choose to slowly migrate a legacy system by replacing specific components over a period of time. There are many variants of this approach, but one of the most common is the “Strangler Fig” pattern, which involves creating a new landing zone in parallel and gradually redirecting traffic from the existing application components to the new ones as replacement functionality and services are implemented. Eventually, the legacy system is retired completely.
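As a rough sketch of how a strangler-style facade routes traffic (the service URLs and migrated-route list below are purely hypothetical; in practice this logic usually lives in a reverse proxy or API gateway):

```typescript
// Strangler-fig routing facade (illustrative sketch only).
// Routes that have been rebuilt go to the new service; everything else
// continues to hit the legacy application until it is fully replaced.
const MIGRATED_PREFIXES = ["/payments", "/profile"]; // hypothetical migrated routes

function routeRequest(path: string): string {
  const isMigrated = MIGRATED_PREFIXES.some((prefix) => path.startsWith(prefix));
  return isMigrated
    ? `https://new-service.internal${path}` // modern replacement service
    : `https://legacy-app.internal${path}`; // remaining legacy monolith
}
```

As more functionality is rebuilt, prefixes move onto the migrated list until nothing routes to the legacy system and it can be switched off.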

Whichever methods are adopted, once tech debt has been eliminated, that’s not the end of the road. Maintaining the TOM is an ongoing monitoring exercise to ensure that debt doesn’t begin to accumulate again. Here are six top tips to avoid it recurring:

  1. Run teams that maintain and constantly update products rather than big periodic projects. The ongoing level of investment may seem high but the TCO will be lower, with the product itself remaining evergreen, supporting innovation and avoiding build-up of tech debt.
  2. Maintain tech roadmaps and review regularly, with tech refresh built into investment cases up front.
  3. Implement modern DevSecOps and Infrastructure as Code methodologies to ensure that services can be updated and re-deployed easily.
  4. Consider options like Low-code/No-code and RPA for building applications and workflows where these are suitable, avoiding the need to write and maintain code.
  5. Design for micro-services over monoliths. This allows parts of applications to be updated and modernised in isolation from the rest of the service. Principles of abstraction help here, as well as designing for components/services to be joined up using APIs and loose-coupling.
  6. Feeling overwhelmed? Engage a partner you trust that can help advise, build and, if required, operate your technology transformation.
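The loose coupling described in tip 5 can be sketched in a few lines of TypeScript; all of the names below are hypothetical, chosen only to illustrate the idea:

```typescript
// The caller depends only on this interface, not on any concrete service,
// so the implementation behind it can be modernised without touching callers.
interface PaymentService {
  charge(accountId: string, pence: number): string; // returns a receipt id
}

// Adapter wrapping a call into the old monolith (stubbed for illustration).
class LegacyPaymentAdapter implements PaymentService {
  charge(accountId: string, pence: number): string {
    return `legacy-receipt-${accountId}-${pence}`;
  }
}

// New micro-service client exposing the same interface (also stubbed).
class PaymentMicroservice implements PaymentService {
  charge(accountId: string, pence: number): string {
    return `ms-receipt-${accountId}-${pence}`;
  }
}

// Business logic is written once, against the abstraction.
function checkout(payments: PaymentService): string {
  return payments.charge("acct-1", 500);
}
```

Because `checkout` never names a concrete implementation, the legacy adapter can be swapped for the micro-service without any change to the calling code.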


Business

Building a Greener Web: Six Ways to Put Your Website on an Emissions Diet

By Roberta Haseleu, Practice Lead Green Technology at Reply, Fiorenza Oppici, Live Reply, and Lars Trebing, Vanilla Reply

Most people are unaware of, or underestimate, the impact of the IT sector on the environment. According to the BBC: “If we were to rather crudely divide the 1.7 billion tonnes of greenhouse gas emissions estimated to be produced in the manufacture and running of digital technologies between all internet users around the world, it would mean each of us is responsible for 414kg of carbon dioxide a year.” That’s equivalent to 4.7bn people charging their smartphone 50,000 times.

Every web page produces a carbon footprint that varies depending on its design and development. This deserves closer consideration, because an energy-efficient website also loads faster, which means better performance and a better user experience.

Following are six practical steps developers can take to reduce the environmental impact of their websites.

  1. Implement modularisation

With traditional websites that don’t rely on single-page apps, each page and view of the site is saved in an individual HTML file. The code only runs, and the data is only downloaded, for the page the user is visiting, avoiding unnecessary requests. This reduces the volume of transmitted data and saves energy.

However, this principle is no longer the standard in modern web design, which is dominated by single-page apps that dynamically display all content to the user at runtime. This approach is easier and faster to code, and more user-friendly, but without precautions it creates unnecessary overheads. In the worst case, accessing the homepage of a website may trigger the transmission of the entire code of the application, including parts that may never be needed.

Modularisation can help. By dividing the code of a website into different modules, i.e. coherent code sections, only the relevant code is referenced. Using modules offers distinct benefits: they keep the scope of the app clean and prevent ‘scope creeps’; they are loaded automatically after the page has been parsed but before the Document Object Model (DOM) is rendered; and, most importantly for green design, they facilitate ‘lazy loading’.

  2. Adopt lazy loading

The term lazy loading describes a strategy of only loading resources at the moment they are needed. This way, a large image at the bottom of the page will not be loaded unless the user scrolls down to that section.

If a website consists only of a routing module and an app module containing all the views, the site will be very heavy and slow at first load. Smart modularisation, breaking the site down into smaller parts, combined with lazy loading helps to load only the relevant content when the user is viewing that part of the page.

However, this should not be taken to extremes either: in some instances, loading every resource only at the last moment while scrolling can wipe out the performance gains and result in higher server and network loads. It’s important to find the right balance, based on a good understanding of how the app will be used in real life (e.g. whether users generally move on to the next page after a quick first glance, or scroll all the way down before moving on).
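The core of the lazy-loading idea can be shown with a small, self-contained sketch; in a real web app the factory below would be a dynamic `import()` or a network fetch rather than an in-memory stand-in:

```typescript
// lazy() defers an expensive load until first use and caches the result,
// so the cost is paid only if (and when) the resource is actually needed.
function lazy<T>(factory: () => T): () => T {
  let value!: T;
  let loaded = false;
  return () => {
    if (!loaded) {
      value = factory(); // runs only on the first access
      loaded = true;
    }
    return value;
  };
}

let loadCount = 0; // tracks how often the heavy resource is actually built
const getHeavyResource = lazy(() => {
  loadCount += 1; // in a real app: await import("./heavy-module")
  return new Array(1_000).fill(0); // stand-in for a large asset
});
```

Until `getHeavyResource()` is called, nothing is built or transferred, which is exactly the saving lazy loading aims for.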

  3. Monitor build size

Slimming website builds is possible not only at runtime but also at a static level. Typically, a web app consists of a collection of TypeScript files. To build the site and compile the code from TypeScript to JavaScript, a web pre-processor is used.

Pre-processors can be configured to prevent a build from completing if its files are bigger than a configurable threshold. Limits can be set for the main boot script as well as for individual chunks of CSS, so that none exceeds a specific byte size after compilation. Any build surpassing those thresholds fails with a warning.
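As an illustration, assuming a webpack-based build (other bundlers offer similar budget features), size thresholds can be declared in the configuration so that oversized builds fail outright:

```typescript
// Hypothetical webpack.config.ts enforcing a size budget (webpack assumed
// to be installed; the byte limits are illustrative, not recommendations).
import type { Configuration } from "webpack";

const config: Configuration = {
  performance: {
    hints: "error",             // fail the build instead of just warning
    maxEntrypointSize: 250_000, // byte limit for the main boot script
    maxAssetSize: 100_000,      // byte limit for any single emitted asset
  },
};

export default config;
```
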

If a build is suspiciously big, a web designer can inspect it and identify which module contributes the most, as well as all its interdependencies. This information allows the programmer to optimise the parts of the websites in question.

  4. Eliminate unused code

One potential reason for excessive build sizes can be dozens of configuration files and code meant for scenarios that are never needed. Despite never being executed, this code still takes up bandwidth, thereby consuming extra energy.

Unused parts can be found in one’s own source code, but also (and often to a greater extent) in external libraries used as dependencies. Luckily, a technique called ‘tree shaking’ can be used to analyse the code and mark which parts are not referenced by other portions of the code.

Modern pre-processors perform ‘tree shaking’ to identify unused code but also to exclude it automatically from the build. This allows them to package only those parts of the code that are needed at runtime – but only if the code is modularised.

  5. Choose external libraries wisely

One common approach to speed up the development process is by using external libraries. They provide ready-to-use utilities written and tested by other people. However, some of these libraries can be unexpectedly heavy and weigh your code down.

One popular example is Moment.js, a very versatile legacy library for handling international date formats and time zones. Unfortunately, it is also quite large. Above all, it is neither well suited to the typical TypeScript toolchain nor modular, so even the best pre-processors cannot reduce the weight it adds to the code by means of ‘tree shaking’.
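For simple formatting needs, one lighter-weight option is the built-in `Intl` API, which ships with the JavaScript runtime and adds nothing to the bundle. A sketch (complex time-zone arithmetic may still justify a modular date library):

```typescript
// Format a date for UK readers using the runtime's own Intl API
// instead of pulling in a heavyweight date library.
function formatDateUK(date: Date): string {
  return new Intl.DateTimeFormat("en-GB", {
    day: "2-digit",
    month: "short",
    year: "numeric",
    timeZone: "UTC", // pinned so the output is deterministic everywhere
  }).format(date);
}
```
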

  6. Optimise content

Designs can also be optimised by avoiding excessive use of images and video material. Massive use of animation gimmicks such as parallax scrolling also has a negative effect. Depending on the implementation, such animations can massively increase the CPU and GPU load on the client. To test this, consider running the website on a 5 to 10-year-old computer. If scrolling is not smooth and/or the fans jump to maximum speed, this is a very good indication of optimisation potential.

The amount of energy that a website consumes — and thus its carbon footprint — depends, among other factors, on the amount of data that needs to be transmitted to display the requested content to users. By leveraging the six techniques outlined above, web designers can ‘slim’ their websites and contribute to a more sustainable web, whilst boosting performance and user experience in the process.


Business

The Role of Software Development in Shaping the FinTech Industry in 2023 and Beyond

Source: Finance Derivative

Paul Blowers, Commercial Director at Future Processing

As another year passes, now is the time for company leaders to look back at the last 12 months and consider what’s in store for their FinTech businesses in 2023. One of the biggest impacts of last year was undoubtedly the cost of living crisis and increasing interest rates, leading to UK FinTech investment dropping to $9.6 billion in the first half of 2022 – down from $27.8 billion in the same period in 2021. Whilst these challenges remain at the forefront of the industry, there are plenty of innovative developments and technologies evolving in the FinTech space right now that will continue the pace of change. It’s vital for organisations to keep abreast of these trends, to ensure they can remain competitive and continue providing customers with the highest quality products and services.

Innovations in FinTech

In recent years, we have seen larger banks begin to invest more heavily in BaaS (banking as a service). BaaS is a start-to-finish process that digital banks and third parties use to connect their own business infrastructure to a bank’s system via APIs. This allows digital banks or third parties to offer full banking services directly through their non-bank business offerings. Typically, BaaS is associated with smaller banks because of the favourable interchange rates available to banks with under $10 billion in assets. With a bigger focus on commercial BaaS efforts, we can expect to see more vertical partnerships with SaaS providers who already have existing relationships with businesses.

An alternative to providing BaaS is to pursue an embedded FinTech strategy. Embedded FinTech refers to the integration of FinTech products and services into financial institutions’ websites, mobile apps, and business processes. This has been growing at pace since the COVID pandemic and is expected to continue on its upward trajectory, accelerated by eCommerce, financial digitalisation and rising consumer expectations. As a result, we can expect more platforms to diversify their service offerings as they deepen their relationships with small business customers.

Another topic that has been circulating in the FinTech sector is the rise of AI and chatbots. 2023 is set to be the year this technology fully takes off and integrates with mainstream banks and FinTechs. Chatbots can be defined as rule-based systems that perform routine tasks and answer general FAQs. The primary goal of these AI-driven chatbots is to provide human-like support for customers: communicating with them, introducing services, answering their questions and receiving any complaints.
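A rule-based chatbot of the kind described can be sketched in a few lines; the rules and responses below are invented purely for illustration:

```typescript
// Minimal rule-based FAQ bot: match the message against known patterns
// and fall back to a human hand-off when nothing matches.
const RULES: Array<[RegExp, string]> = [
  [/opening hours|when.*open/i, "Our branches are open 9am to 5pm, Monday to Friday."],
  [/balance/i, "You can check your balance any time in the mobile app."],
];

function reply(message: string): string {
  for (const [pattern, answer] of RULES) {
    if (pattern.test(message)) return answer;
  }
  return "I'll connect you to a human agent."; // fallback for unmatched queries
}
```

Production chatbots layer natural-language understanding on top of such rules, but the routine-FAQ core works in essentially this way.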

Software Development for FinTechs

As banks continue to invest in new technologies and leverage the benefits of adopting BaaS, embedded finance and AI, the focus on software development services also increases. Software is at the heart of every FinTech business, and customers, both existing and potential, expect a high-quality implementation of every product or service. One of the biggest expectations is around user experience, as FinTech leaders aim to provide straightforward, transparent and concise solutions to their customers’ business problems. Additionally, the importance of security cannot be overstated, with the FinTech industry under constant risk of cyberattacks and breaches. With exceptional software development, FinTech solutions can comply with strict security and data encryption standards whilst offering a polished and streamlined user experience for customers.

The finance industry also comes up against a constant stream of industry regulations, meaning a compliance strategy must be a priority for FinTechs when considering their software development approach. This means checking and implementing updates to frameworks and software architectures regularly, to ensure app responsiveness, security and performance remain at the forefront. Ultimately, great software increases a FinTech’s opportunity to leverage emerging technologies and keep control over the quality of its service. Timely identification of key trends makes it possible to maximise the digitalisation of finance and drive long-term value for FinTech businesses and their customers.

The Future of FinTech

Whilst technological developments have been major drivers of FinTech innovation, now is the time to further digitise financial services and the banking sector to build a more inclusive and efficient industry that promotes economic growth. FinTechs are stepping up to lead, navigate and disrupt the industry during this time of uncertainty, and software development will play a vital role in shaping the future landscape. With the help of software development, FinTechs will build capabilities and applications that can be easily integrated into the environments where customers are already engaged, meeting their changing needs, new business goals and regulatory demands.


Business

Will cyberattacks be uninsurable in 2023? Three steps that financial organisations can follow now

Source: Finance Derivative

By James Blake, Field CISO of EMEA, Cohesity

The growing number of cyberattacks and the damage they cause have led to increasing demand for cyber insurance. Swiss Re expects total premiums paid to more than double, from $10 billion in 2020 to $23 billion by 2025. But this is being questioned by both insurance companies and customers: is insurance effective, is it feasible, what does it cover and what does it enable? The CEO of Zurich Insurance, Mario Greco, said in a recent interview with the Financial Times that cyberattacks will soon become “uninsurable”. Indeed, insurance and prevention alike have proved ineffective both in stopping cyberattacks such as ransomware and in enabling organisations to recover afterwards. Instead, organisations must shift their focus onto recovery. What can companies do to meet this challenge? James Blake, Field CISO of EMEA at data management and security provider Cohesity, has three recommendations.

More than 400 million US dollars: that’s how much damage the data leak at Capital One caused in 2019. And the number of such attacks, with catastrophic consequences for the companies affected, has continued to increase since then. According to Check Point, in the third quarter of 2022 alone, global attacks increased by 28% compared to the same quarter of the previous year.

Where cyber risk used to be limited to areas such as data breaches and third-party liability, ransomware attacks have shifted the damage to core business and accountability. Cyber insurers had to react to the increased risk and have adjusted their offers, as an analysis by Swiss Re shows. According to PwC, from the insurer perspective, the fast-increasing frequency of ransomware attacks (and the growing associated impacts and ransom demands) and business interruption claims have made cyber a less profitable area of insurance in recent times. The situation has stabilised over the past year as customers have had to pay higher premiums and meet stricter terms and conditions. Swiss Re expects total premiums paid to more than double, from $10 billion to $23 billion, by 2025.

More expensive and more difficult to qualify

This is bad news for the financial industry, as insurers are becoming stricter and asking for higher premiums. Cohesity’s legal experts looked at the leading ransomware insurance policies on the market at the end of 2022 and found that ultimately, such guarantees are little more than thinly veiled limitations of liability that benefit the providers – not the customers.

However, there are some measures that companies can use to protect themselves effectively in this new market situation:

  1. The 3-2-1 strategy remains current: keep an isolated copy of the data

In some cases, organisations are required to quarantine an offsite copy of their production records as part of a 3-2-1 strategy to qualify for cyber insurance.

To do this, they can use a SaaS service which keeps an encrypted copy of the production data in the cloud, isolated by a virtual air gap. The data stored there is monitored with multi-layered security functions and machine learning, and anomalies are reported immediately.
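The 3-2-1 rule itself (at least three copies of the data, on two different media, with one copy offsite) is easy to check mechanically. A small sketch, with invented field names, purely to illustrate the rule:

```typescript
// Check a set of backup copies against the 3-2-1 rule:
// at least 3 copies, on at least 2 media types, with at least 1 offsite.
interface BackupCopy {
  medium: "disk" | "tape" | "cloud"; // illustrative media types
  offsite: boolean;
}

function satisfies321(copies: BackupCopy[]): boolean {
  const mediaTypes = new Set(copies.map((c) => c.medium));
  const hasOffsite = copies.some((c) => c.offsite);
  return copies.length >= 3 && mediaTypes.size >= 2 && hasOffsite;
}
```
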

  2. Tear down silos and merge data with zero-trust in mind

In general, financial organisations should consolidate all their distributed data on a scalable data management platform and ensure they can back up their data across all their infrastructure and assets. Furthermore, the data must be protected in a zero-trust model, where it is encrypted both in transit and at rest, and access is strictly regulated with rules and multi-factor authentication. In addition, all data stored on the platform can be managed according to compliance requirements and, thanks to immutable storage, is better protected against ransomware.

  3. Improve collaboration between IT and SecOps teams for cyber resiliency

In addition to these technical measures, financial organisations should optimise the collaboration between their IT and security teams and adopt a data-centric focus on cyber resilience. For too long, many security teams have focused primarily on preventing cyberattacks while IT teams have focused on protecting data including backup and recovery.

A comprehensive data security strategy must unite these two worlds and IT and SecOps teams must work together before the attack takes place. Both teams should be guided by the NIST framework. This holistic approach defines five core disciplines: Identify, Protect, Detect, Respond and Recover.

If a financial company can demonstrate such a mature data security strategy, this will not only have a positive effect on insurance cover, but will generally reduce the risk of incidents and possible consequential damage through failure or data loss.


Copyright © 2021 Futures Parity.