How to transform public services and eliminate IT debt
Paul Liptrot, Partner – UK Government & Public Sector at Kyndryl
Any organisation embarking on a digital transformation project will know that there are significant challenges to overcome. While the benefits at the end are impossible to ignore, the barriers to success are complex and wide-ranging. Add legacy public sector applications to the process, and you open up a whole new layer of complexity.
Whether it’s cost-cutting pressures or competing interests holding back the process, product development and delivery can be significantly harder than they are in the private sector. Not just that, but across Whitehall it is broadly recognised that there’s a significant amount of technical debt to deal with too, meaning many Departments are typically starting from a place far behind that of their commercial counterparts.
Technical debt has accumulated in the public sector over several years (even decades), and for many reasons. Predominantly, ongoing budget constraints and loss of resources have made it easy to put off long-term, labour-intensive maintenance and modernisation projects in favour of feel-good ‘quick wins’ that can be delivered quickly and whose impact is felt fast.
Furthermore, the legacy proprietary platforms on which many public sector systems were developed and still operate have undergone significant customisation and bespoke development over many years. Not only do these platforms support mission-critical processes, making it hard to get downtime for improvements approved, but they are also notoriously tricky (read: time-consuming) to unpick and migrate to newer, more modern platforms and infrastructures.
Too often, both public and private sector organisations have taken a superficial view of what it takes to address tech debt, focusing on the often out-of-support components rather than the underlying problems. As a result, tech debt is only getting worse, reducing workforce productivity, inhibiting innovation and keeping public services stagnant through a lack of agility.
For public sector organisations looking to address the problem, the first step must be to assess and understand which services will be needed to deliver current policy and customer outcome objectives. With this clarity, organisations can then design their Target Operating Model (TOM) not only to support these services but also to put in place processes and technologies agile enough to adopt and adapt for the future. Understanding the TOM will then allow CXOs and their teams to make informed decisions on which approaches to take: which technologies to retain, which to replace, and which to move to public and private cloud services in an overall hybrid environment.
With a TOM in place, priorities can be established, required investments understood, and team and squad structures set up within a governed programme to remove tech debt. Informed decisions can then be taken per business service or application, with options typically including:
Retire and start new
Evaluate whether it is even worth modernising the underlying technologies. Would it be easier or more beneficial to instead migrate the business processes and data to something else? This could be a modern SaaS platform or a lighter-weight, micro-service-based replacement application. Where suitable SaaS products exist, moving legacy services to these transfers the risks and costs of developing and maintaining underlying application software and infrastructure to the SaaS provider.
Where SaaS isn’t an option, building products on PaaS can reduce the risk of tech debt by moving responsibility for maintaining, patching and updating the underlying infrastructure, operating systems and middleware to the cloud provider. As PaaS migrations involve refactoring legacy applications to fit the platform offered by a service provider, they can be time-consuming. Once complete, however, they take maintenance work away from programmers, freeing them up to deploy new applications faster.
“Lift and shift”
One of the easier and least expensive ways to migrate an existing workload to the cloud is an IaaS migration, sometimes called a “lift and shift”. That’s because you move the workload in its current form, with minimal changes, and run it on cloud infrastructure instead. This is particularly useful where the technical debt resides in hosting locations and physical hardware, but it typically isn’t the answer where the debt relates to applications or software. In those cases, a lift and shift can still be useful as part of a longer-term programme, buying more time to modernise the applications once they are in the public cloud and freed from the immediate risks associated with ageing hardware or building closures.
Incremental migration
You could choose to migrate a legacy system gradually by replacing specific components over time. There are many variants of this approach, but one of the most common is the “Strangler Fig” pattern, which involves creating a parallel new landing zone and slowly redirecting from existing application components to new ones as replacement functionality and services are implemented. Eventually, the legacy system is retired completely.
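The Strangler Fig pattern can be sketched in a few lines: a routing facade sits in front of the legacy system, and each capability is redirected to a new service once it has been reimplemented. The handler and capability names below are purely illustrative.

```python
# Minimal Strangler Fig sketch: a facade routes each capability either to
# its new implementation (if migrated) or falls back to the legacy system.

def legacy_handler(request):
    return f"legacy system handled {request}"

def new_payments_handler(request):
    return f"new payments service handled {request}"

class StranglerFacade:
    """Routes migrated capabilities to new handlers; everything else
    continues to hit the legacy system until it is fully 'strangled'."""

    def __init__(self, legacy):
        self.legacy = legacy
        self.migrated = {}  # capability name -> replacement handler

    def migrate(self, capability, handler):
        self.migrated[capability] = handler

    def handle(self, capability, request):
        handler = self.migrated.get(capability, self.legacy)
        return handler(request)

facade = StranglerFacade(legacy_handler)
print(facade.handle("payments", "invoice-42"))    # still legacy

facade.migrate("payments", new_payments_handler)  # redirect one capability
print(facade.handle("payments", "invoice-42"))    # now the new service
print(facade.handle("reporting", "q3-summary"))   # untouched capabilities stay legacy
```

In a real deployment the facade would be an API gateway or reverse proxy rather than an in-process dispatcher, but the principle is the same: traffic shifts capability by capability, with no big-bang cutover.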
Whichever methods are adopted, eliminating tech debt is not the end of the road. Maintaining the TOM is an ongoing monitoring exercise to ensure that debt doesn’t begin to accumulate again. Here are six top tips to avoid it recurring:
- Run teams that maintain and constantly update products rather than big periodic projects. The ongoing level of investment may seem high, but the total cost of ownership (TCO) will be lower, with the product itself remaining evergreen, supporting innovation and avoiding a build-up of tech debt.
- Maintain tech roadmaps and review regularly, with tech refresh built into investment cases up front.
- Implement modern DevSecOps and Infrastructure as Code methodologies to ensure that services can be updated and re-deployed easily.
- Consider options like Low-code/No-code and RPA for building applications and workflows where these are suitable, avoiding the need to write and maintain code.
- Design for micro-services over monoliths. This allows parts of applications to be updated and modernised in isolation from the rest of the service. Principles of abstraction help here, as well as designing for components/services to be joined up using APIs and loose-coupling.
- Feeling overwhelmed? Engage a partner you trust that can help advise, build and, if required, operate your technology transformation.
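The micro-services tip above rests on abstraction and loose coupling: if callers depend only on an interface, the implementation behind it can be modernised in isolation. A minimal sketch, with entirely hypothetical class and method names:

```python
# Loose coupling via an abstract interface: callers never change when the
# implementation behind the interface is replaced or modernised.

from abc import ABC, abstractmethod

class CitizenLookup(ABC):
    @abstractmethod
    def find(self, citizen_id: str) -> dict: ...

class LegacyLookup(CitizenLookup):
    def find(self, citizen_id):
        return {"id": citizen_id, "source": "legacy-db"}

class ModernLookup(CitizenLookup):
    def find(self, citizen_id):
        return {"id": citizen_id, "source": "new-api"}

def render_record(lookup: CitizenLookup, citizen_id: str) -> str:
    # This caller is written against the interface, not an implementation,
    # so swapping LegacyLookup for ModernLookup requires no change here.
    record = lookup.find(citizen_id)
    return f"{record['id']} via {record['source']}"

print(render_record(LegacyLookup(), "c1"))
print(render_record(ModernLookup(), "c1"))
```

In a distributed setting the interface becomes an API contract between services, but the design principle is identical: components joined by stable contracts can be replaced one at a time without tech debt spreading.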
Enhancing cybersecurity in investment firms as new regulations come into force
Source: Finance Derivative
Christian Scott, COO/CISO at Gotham Security, an Abacus Group Company
The alternative investment industry is a prime target for cyber breaches. February’s ransomware attack on global financial software firm ION Group was a warning to the wider sector. Russia-linked LockBit Ransomware-as-a-Service (RaaS) affiliate hackers disrupted trading activities in international markets, with firms forced to fall back on expensive, inefficient, and potentially non-compliant manual reporting methods. Not only do attacks like these put critical business operations under threat, but firms also risk falling foul of regulations if they lack a sufficient incident response plan.
To ensure that firms protect client assets and keep pace with evolving challenges, the Securities and Exchange Commission (SEC) has proposed new cybersecurity requirements for registered advisors and funds. Codifying previous guidance into non-negotiable rules, these requirements will cover every aspect of the security lifecycle and the specific processes a firm implements, encompassing written policies and procedures, transparent governance records, and the timely disclosure of all material cybersecurity incidents to regulators and investors. Failure to comply with the rules could carry significant financial, legal, and national security implications.
The proposed SEC rules are expected to come into force in the coming months, following a notice and comment period. However, businesses should not drag their feet in making the necessary adjustments – the SEC has also introduced an extensive lookback period preceding the implementation of the rules, meaning that organisations should already be proving they are meeting these heightened demands.
For investment firms, regulatory developments such as these will help boost cyber resilience and client confidence in the safety of investments. However, with a clear expectation that firms should be well aligned to the requirements already, many will need to proactively step up their security oversight and strengthen their technologies, policies, end-user education, and incident response procedures. So, how can organisations prepare for enforcement and maintain compliance in a shifting regulatory landscape?
In today’s complex, fast-changing, and interconnected business environment, the alternative investment sector must continually take account of its evolving risk profile. Additionally, as organisations shift towards more distributed and flexible ways of working, traditional protection perimeters are dissolving, leaving firms more vulnerable to cyber-attack.
As such, the new SEC rules provide firms with additional instruction around very specific prescriptive requirements. Organisations need to implement and maintain robust written policies and procedures that closely align with ground-level security issues and industry best practices, such as the NIST Cybersecurity Framework. Firms must also be ready to gather and present evidence that proves they are following these watertight policies and procedures on a day-to-day basis. With much less room for ambiguity or assumption, the SEC will scrutinise security policies for detail on how a firm is dealing with cyber risks. Documentation must therefore include comprehensive coverage for business continuity planning and incident response.
As cyber risk management comes increasingly under the spotlight, firms need to ensure it is fully incorporated as a ‘business as usual’ process. This involves the continual tracking and categorisation of evolving vulnerabilities – not just from a technology perspective, but also from an administrative and physical standpoint. Regular risk assessments must include real-time threat and vulnerability management to detect, mitigate, and remediate cybersecurity risks.
Another crucial aspect of the new rules is the need to report any ‘material’ cybersecurity incidents to investors and regulators within a 48-hour timeframe – a small window for busy investment firms. Meeting this tight deadline will require firms to quickly pull data from many different sources, as the SEC will demand to know what happened, how the incident was addressed, and its specific impacts. Teams will need to be assembled well in advance, working together seamlessly to record, process, summarise, and report key information in a squeezed timeframe.
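To make the squeeze concrete, the arithmetic of the proposed window is simple: the clock starts when an incident is determined to be material. A small illustration (the timestamps are examples, not real incidents):

```python
# Illustrating the proposed 48-hour disclosure window: given the time an
# incident is determined to be 'material', compute the reporting deadline
# and the hours remaining at any point in time.

from datetime import datetime, timedelta

REPORTING_WINDOW = timedelta(hours=48)

def disclosure_deadline(determined_at: datetime) -> datetime:
    return determined_at + REPORTING_WINDOW

def hours_remaining(determined_at: datetime, now: datetime) -> float:
    return (disclosure_deadline(determined_at) - now).total_seconds() / 3600

# Incident judged material at 09:00 on 1 June; one day later, half the
# window is already gone.
determined = datetime(2023, 6, 1, 9, 0)
now = datetime(2023, 6, 2, 9, 0)
print(disclosure_deadline(determined))   # 2023-06-03 09:00:00
print(hours_remaining(determined, now))  # 24.0
```

With data to pull from many sources and a report to assemble, a day can disappear quickly, which is why response teams and reporting pipelines need to exist before the incident, not after.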
Funds and advisors will also need to provide prospective and current investors with updated disclosures on previously disclosed cybersecurity incidents over the past two fiscal years. With security leaders increasingly being held to account over lack of disclosure, failure to report incidents at board level could even be considered an act of fraud.
Organisations must now take proactive steps to prepare and respond effectively to these upcoming regulatory changes. Cybersecurity policies, incident response, and continuity plans need to be written up and closely aligned with business objectives. These policies and procedures should be backed up with robust evidence that shows organisations are actually following the documentation – firms need to prove it, not just say it. Carefully thought-out policies will also provide the foundation for organisations to evolve their posture as cyber threats escalate and regulatory demands change.
Robust cybersecurity risk assessments and continuous vulnerability management must also be in place. The first stage of mitigating a cyber risk is understanding the threat – and this requires in-depth real-time insights on how the attack surface is changing. Internal and external systems should be regularly scanned, and firms must integrate third-party and vendor risk assessments to identify any potential supply chain weaknesses.
Network and cloud penetration testing is another key tenet of compliance. By imitating how an attacker would exploit a vantage point, organisations can check for any weak spots in their strategy before malicious actors attempt to gain an advantage. Due to the rise of ransomware, phishing, and other sophisticated cyber threats, social engineering testing should be conducted alongside conventional penetration testing to cover every attack vector.
It must also be remembered that security and compliance are the responsibility of every person in the organisation. End-user education is a necessity as regulations evolve, as are multi-layered training exercises. This means bringing in immersive simulations, tabletop exercises and real-world examples of security incidents to show employees the potential risks and the role they play in protecting the company.
To successfully navigate the SEC cybersecurity rules – and prepare for future regulatory changes – alternative investment firms must ensure that security is woven into every part of the business. They can do this by establishing robust written policies and adhering to them, conducting regular penetration testing and vulnerability scanning, and ensuring the ongoing education and training of employees.
Gearing up for growth amid economic pressure: 10 top tips for maintaining control of IT costs
Source: Finance Derivative
By Dirk Martin, CEO and Founder of Serviceware
Three years on from the pandemic, economic pressure is continuing to mount. With the ongoing threat of a global recession looming, inflation rising, and supply chain disruption continuing to take its toll, cutting costs and optimizing budgets remain top priorities for the C-suite. Amid such turbulence, the Chief Financial Officer (CFO) and Chief Information Officer (CIO) stand firmly at the business’s helm, not only to steady the ship but to steer it into safer, more profitable waters. These vital roles have been pulled into the spotlight in recent years, with new hurdles and challenges constantly thrown their way. This spring, for example, experts expect British businesses to face an energy-cost cliff edge as the winter support package set out by the government is replaced.
Whilst purse strings are being drawn ever tighter to overcome these obstacles, there is no denying that the digitalization and innovation spurred on by the pandemic are still gaining momentum. In fact, according to Gartner, four out of five CEOs are increasing digital technology investments to counter current economic pressures. Investing in a digital future driven by technologies such as cloud, artificial intelligence (AI), blockchain and the Internet of Things (IoT) comes at a cost, however, and to afford it, funds must be released through effective optimization of existing assets.
With that in mind, and with the deluge of cost and vendor data descending on businesses that adopt these technologies, never has it been more important for CIOs and CFOs to have a complete, detailed and transparent view of all IT costs. With that view, business leaders can not only identify the right investment areas but also increase the performance of existing systems and technology to tackle spiralling running costs.
Follow the below 10 steps to gain a comprehensive, detailed and transparent overview of all IT costs to boost business performance and enable your IT to reach the next level.
1: Develop an extensive IT service and product catalogue
The development of an IT service and product catalogue is the most effective way to kick-start your cost-optimization journey. This catalogue should act as a precise overview of all individual IT services and what they entail to directly link IT service costs to IT service performance and value. By offering a clear set of standards as to what services are available and comprised of, consumers can gain an understanding of the costs and values of the IT services they deploy.
2: Monitor IT costs closely
By mastering the value chain – a concept that visualises the flow of IT costs from their most basic singular units through to realised business units and capabilities – businesses can keep track of where IT costs stem from. With the help of service catalogues, benchmarks, and IT Financial Management (ITFM) – often referred to as Technology Business Management (TBM) – solutions built around a cost model focused on digital value, comprehensive access to this data can be guaranteed, creating a ‘cost-to-service flow’ that identifies and tracks IT costs.
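The cost-to-service flow can be reduced to a simple roll-up: unit costs (hardware, licences, labour) are multiplied through the services that consume them, so every catalogue service carries a traceable cost. A minimal sketch, with illustrative service names and figures:

```python
# Minimal 'cost-to-service flow': roll basic unit costs up through the
# consumption recorded for each business service in the catalogue.

unit_costs = {"vm": 120.0, "storage_tb": 25.0, "dba_hour": 80.0}

# Monthly consumption of each unit by each business service.
service_consumption = {
    "payroll":  {"vm": 4, "storage_tb": 2, "dba_hour": 10},
    "intranet": {"vm": 2, "storage_tb": 1},
}

def service_cost(service: str) -> float:
    """Total monthly cost of one catalogue service."""
    return sum(qty * unit_costs[unit]
               for unit, qty in service_consumption[service].items())

for name in service_consumption:
    print(f"{name}: £{service_cost(name):.2f}/month")
```

ITFM/TBM tools automate exactly this mapping at scale, but even a toy model makes the point: once consumption is recorded per service, every pound of IT spend has a visible origin.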
3: Determine IT budget management
Knowledge of IT cost allocation is a vital factor when making informed spending decisions and adjustments to existing budgets. There are, however, different approaches that can be taken: centralized, decentralized and iterative. A centralized approach means that the budget is determined in advance and distributed to operating cost centres and projects in a top-down process, allowing for easy, tight budget allocation. A decentralized approach reverses this process: operating costs are precisely calculated before budgets and projects are determined. Both approaches come with their own risks – a centralized approach can overlook projects that offer growth opportunities, while a decentralized approach can generate budget demands that exceed available resources.
The iterative approach tries to unify both methods. Although often the most effective, it also requires the most resources, so the chosen approach depends very much on the available resources and the enterprise’s structural organization.
4: Define ‘run’ vs ‘grow’ costs
Before IT budget can be allocated, costs should be split into two distinct categories: running costs (i.e. operating costs) and costs for growing the business (i.e. products or services used to transform or grow the business). Once these categories have been defined, decisions should be made on how the budget should be split between them. A 70% run/30% grow split is fairly typical across most enterprises, but there is no one-size-fits-all approach, and this decision should be centred around the business’s overall strategy and end goals.
5: Ensure investments result in a profit
By carrying out the aforementioned steps, complete transparency can be achieved over which products and services are offered, where IT costs stem from, and where budgets are allocated. From here, organizations can review how much of the IT budget is being used and where costs lead to profits and losses. If the profit margin is positive, the controlling processes can be further optimized. If the profit margin is negative, appropriate and timely corrective measures can be initiated.
6: Stay on top of regulation
For a company that operates internationally (e.g. one that markets IT products and services abroad), it is extremely important to stay on top of country-specific compliance and adhere to varying international tax rules. Doing so correctly requires accurate transfer pricing documentation, which in turn requires three factors:
- Transparent analysis and calculation of IT services based on the value chain
- Evaluation of the services used and the associated billing processes
- Access to the management of service contracts between providers and consumers as the legal basis for IT services.
7: Stay competitive
Closely linked to the profit mentioned in step five is the question of how to price IT services in order to stay competitive whilst avoiding losses. This begins with benchmark data which can be researched or determined using existing ITFM solutions that can automatically extract them from different – interconnected – databases. From there, a unit cost calculation can be used to define exactly and effectively what individual IT services – and their preliminary products – cost. This allows organizations to easily compare internal unit cost calculations with the benchmarks and competitor prices, before making pricing decisions.
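The unit cost calculation described above is straightforward arithmetic once the cost-to-service flow is in place. A small sketch with hypothetical figures:

```python
# Unit cost calculation and benchmark comparison: divide a service's total
# cost by the units delivered, then express the gap to a benchmark price.

def unit_cost(total_cost: float, units_delivered: int) -> float:
    return total_cost / units_delivered

def compare_to_benchmark(our_unit_cost: float, benchmark: float) -> str:
    delta = (our_unit_cost - benchmark) / benchmark * 100
    if delta > 0:
        return f"{delta:.1f}% above benchmark"
    return f"{abs(delta):.1f}% below benchmark"

# e.g. a managed desktop service costing £54,000/month across 450 desktops,
# against a researched benchmark of £100 per desktop.
ours = unit_cost(54000, 450)            # 120.0 per desktop
print(compare_to_benchmark(ours, 100))  # 20.0% above benchmark
```

A result like this doesn’t dictate a price cut by itself, but it tells the organization exactly where its service sits relative to the market before any pricing decision is made.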
8: Identify and manage key cost drivers
Another aspect of IT cost control that is streamlined via the comprehensive assessment of the cost-to-service flow is the identification and management of main IT cost drivers. A properly modelled value chain makes it clear which IT services or associated preliminary products and cost centres incur the greatest costs and why. This analysis allows for concise adjustment to expenditure and helps to avoid misunderstandings about cost drivers. Using this as a basis, strategies can be developed to reduce IT costs effectively and determine a better use of expensive resources.
9: Showback/Chargeback IT costs
By controlling IT costs using the value chain, efficient usage-based billing and invoicing of IT services and products can be achieved. If IT costs are visualized transparently, they can easily be assigned to IT customers, therefore increasing the clarity of the billing process, and providing opportunities to analyze the value of IT in more detail. When informing managers and users about their consumption there are two options: either through the ‘showback’ process – highlighting the costs generated and how they are incurred – or through the ‘chargeback’ process, in which costs incurred are sent directly to customers and subcontractors.
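Mechanically, showback and chargeback share the same aggregation step: usage records are priced and summed per consuming cost centre, and the totals are then either reported back or invoiced. A minimal sketch, with illustrative cost centres and rates:

```python
# Showback/chargeback sketch: aggregate priced usage per cost centre.
# The same totals drive a showback report or a chargeback invoice.

usage = [  # (cost_centre, service, units consumed)
    ("HR",      "virtual_desktop", 40),
    ("Finance", "virtual_desktop", 25),
    ("Finance", "reporting_api",   300),
]
rates = {"virtual_desktop": 30.0, "reporting_api": 0.05}  # price per unit

def bill_by_cost_centre(usage, rates):
    totals = {}
    for centre, service, units in usage:
        totals[centre] = totals.get(centre, 0.0) + units * rates[service]
    return totals

for centre, total in bill_by_cost_centre(usage, rates).items():
    print(f"{centre}: £{total:.2f}")
```

Whether the output becomes a showback dashboard or a chargeback invoice is a policy choice; the transparency of the underlying allocation is what makes either credible.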
10: Analyse supply vs. demand
By following the processes above, transparency regarding IT cost control is further extended and discussions around the value of IT services are made possible across the organization. A more holistic analysis of IT service consumption allows conclusions to be drawn promptly to enable the optimization of supply and demand for IT services in various business areas. This, in turn, will enable a more comprehensive value analysis and optimization of IT service utilization.
Following these 10 cost management steps, a secure, transparent, and sustainable IT cost control environment can be developed, resulting in fully optimized budgets and, in turn, significant cost savings. Cost-cutting aside, automating the financial management process in such an environment can boost productivity substantially, freeing up time to focus on valuable work and leading to overall business growth.
The business and economic landscape is full of uncertainty right now, but business leaders can regain control via cost management, not only to weather current storms but to set themselves up for success beyond today’s turbulence.
Mortgage digitalization: How mortgage lenders are automating the lending process
Source: Finance Derivative
By Fernando Zandona, Chief Product and Technology Officer at Mambu
The mortgage market has a long history, but its future is digital. As tech capabilities grow and consumer expectations evolve, mortgage providers are increasingly turning to digital solutions to attract and retain customers and streamline the lending process. According to research from the 2022 Celent Origination Study, over half of banks and 75% of building societies expect to make significant changes to their mortgage origination systems within 24 months. So, how is the mortgage industry transforming and what must lenders do to future-proof their business?
The acceleration of digitalisation in mortgage lending
There are several factors that have accelerated the digitalisation of mortgage lending. One is changes to consumer behaviour: customers have come to expect smooth digital experiences across all areas of their life (accelerated by the pandemic). As such, they seek similar ease, speed and efficiency when it comes to home buying.
Then there’s the arrival of fintechs. Newer fintechs are beginning to enter the mortgage sector – often through acquisitions, such as Starling Bank’s acquisition of Fleet Mortgage or Zoopla acquiring YourKeys. They are also bringing with them innovative digital solutions, which raise the bar for the whole industry. At the same time, regulatory changes are helping accelerate and facilitate digitalisation, such as the Bank of England’s decision to withdraw its affordability test recommendation and cut some of the red tape around mortgage lending, and HM Land Registry’s acceptance of electronic signatures. The combination of these forces has played a significant role in accelerating the lending process and making it more efficient.
Today’s financial institutions are offering a wide range of digital options, through online and mobile platforms, to their mortgage customers. Services include easier ways for customers to access and manage their mortgages, schedule a session with a mortgage advisor, find personalised recommendations, and access improved security measures to protect sensitive customer information.
Then there’s the embrace of open banking, which has enabled seamless integration of customer data into the lending process. This innovation is helping to reduce the number of steps needed to collect data, resulting in faster processing times, less rekeying of information and lower origination costs. Offering faster, cheaper loan decisions is a crucial advantage in an increasingly crowded mortgage market, and automated processes reduce teams’ manual work and eliminate costly human errors.
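To see how open banking data removes rekeying, consider an affordability check fed directly from consented transaction data. The sketch below stands in for a real open banking API call; the function, fields and figures are entirely hypothetical.

```python
# Hedged sketch: transaction data pulled (with consent) from an open
# banking provider feeds an affordability summary with no manual rekeying.

def fetch_transactions(account_id: str) -> list[dict]:
    # Stand-in for a consented open banking API call; in practice this
    # would hit a provider's transactions endpoint over HTTPS.
    return [
        {"amount": 2800.0, "category": "salary"},
        {"amount": -950.0, "category": "rent"},
        {"amount": -430.0, "category": "living"},
    ]

def affordability_summary(account_id: str) -> dict:
    txns = fetch_transactions(account_id)
    income = sum(t["amount"] for t in txns if t["amount"] > 0)
    outgoings = -sum(t["amount"] for t in txns if t["amount"] < 0)
    return {
        "monthly_income": income,
        "monthly_outgoings": outgoings,
        "disposable": income - outgoings,
    }

print(affordability_summary("acc-123"))
```

Every figure here would otherwise be typed in from bank statements; piping it straight from the source is where the faster processing times and lower origination costs come from.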
Digitalising in the right way
The success of these new products and processes relies on the way mortgage lenders introduce and configure them. Agility is key – lenders need to prioritise configurability and scalability when building new products and choosing technology partners, as they must be able to quickly launch new features or make adjustments, in line with evolving customer expectations, emerging trends and changing industry regulations. The use of software-as-a-service (SaaS) platforms and application programming interface (API) integrations helps with this, allowing for faster feature launches and less internal friction.
APIs are just part of future-proofing the mortgage market. According to Forbes, 55% of senior executives in the US mortgage industry think that AI will make their firm, and the industry overall, more competitive. AI and machine learning can assist lenders in analysing data more quickly, leading to more efficient decision-making and forecasting, although as with all AI applications, providers must be vigilant about encoded bias that can radically increase discrimination.
The mortgage landscape is transforming through digitalisation, and this is bound to continue. Lenders who want to keep pace with this change – and reap the benefits of faster, smoother processes as well as satisfied, loyal customers – will be future-proofing their processes through lending automation and putting customer ease at the centre of their offering.