Source: Finance Derivative
Martin Lukac, Associate Professor from Nazarbayev University School of Engineering and Digital Sciences
A quantum computer is a device that performs quantum computations, harnessing the power of atomic and subatomic particles to perform high speed parallel computing.
Conceptually introduced by Richard Feynman in the 1980s as a method for solving instances of the many-body problem, it was not until recently that quantum computing became widely known to the public. The many-body problem is a general name for a large category of physical problems represented by systems of microscopic interacting particles.
When compared to other technology candidates designed to tackle the heat dissipation and the Moore’s limit of the current transistor-based computers, such as DNA computing, 3D transistor or carbon nano-tube, quantum computing has several advantages not available to these “more classical” technologies.
These advantages can be described by four basic postulates defining the principles and possibilities of quantum computing.
The first postulate regards information representation. Classical information in digital computers is represented by logical binary digits (bits). A logical bit can take the value of 1 or 0 depending on whether the voltage in the wire of a logic circuit is High or Low; think of the classic binary coding of 1s and 0s. In contrast, a quantum bit (qubit) is represented by a quantum state described by a wave equation: |ψ⟩ = α|0⟩ + β|1⟩, with α and β being complex numbers subject to |α|² + |β|² = 1. The quantum state specified by this wave equation is a point on the surface of something called a Bloch sphere.
The second postulate expands on the idea of quantum states: when multiple qubits are used together, the space of their states expands exponentially. This means that a set of n qubits can represent, in superposition, all 2^n combinations of the basis states at once.
The third postulate specifies that qubits and their states are manipulated using a set of unitary matrix operators: square matrices whose inverse is their own conjugate transpose. These matrix operators rotate the qubit state about the axes of the Bloch sphere.
The final postulate indicates that quantum information exists only as long as it is not observed. This means that the quantum state of a qubit can contain both of the basis states |0⟩ and |1⟩ at the same time, but when one reads (measures) the quantum state, the result will be either |0⟩ or |1⟩.
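These postulates can be sketched in a few lines of code. The following TypeScript snippet is an illustrative toy rather than a real quantum runtime: it represents a qubit as a pair of amplitudes (real numbers suffice here, though amplitudes are complex in general), applies the Hadamard gate to create a superposition, and computes the measurement probabilities described by the final postulate.

```typescript
// Illustrative toy, not a real quantum runtime: a qubit as a pair of
// amplitudes [alpha, beta] with |alpha|^2 + |beta|^2 = 1. Real numbers
// suffice for this example; amplitudes are complex in general.
type Qubit = [number, number];

// Hadamard gate: a unitary operator that rotates |0> into an equal superposition.
function hadamard([a, b]: Qubit): Qubit {
  const h = 1 / Math.SQRT2;
  return [h * (a + b), h * (a - b)];
}

// Final postulate: reading the state yields 0 with probability |alpha|^2
// and 1 with probability |beta|^2.
function probabilities([a, b]: Qubit): [number, number] {
  return [a * a, b * b];
}

const zero: Qubit = [1, 0];           // the basis state |0>
const plus = hadamard(zero);          // equal superposition of |0> and |1>
const [p0, p1] = probabilities(plus); // 0.5 and 0.5: either outcome is equally likely
```

Applying the Hadamard gate twice returns the qubit to |0⟩, illustrating that the operator is unitary (here, even self-inverse).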
The advantages of quantum computing were first demonstrated by David Deutsch’s algorithm, followed by the Deutsch-Jozsa algorithm, which showed that a quantum computer can answer in a single computational step the question of whether a given function is balanced or constant. A balanced function outputs 0 for exactly half of its inputs and 1 for the other half; a constant function outputs the same value (0 or 1) for all inputs. For a classical computer to answer this question, it would in the worst case need to examine one more than half of the possible inputs: if all of those outputs are the same, the function is constant; otherwise, it is balanced.
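The classical counterpart of this check can be sketched as follows (a hypothetical TypeScript illustration): evaluate the function on one more than half of its inputs and count the queries, which is exactly the work the Deutsch-Jozsa algorithm avoids.

```typescript
// Hypothetical classical counterpart of the Deutsch-Jozsa check: under the
// promise that f is either constant or balanced, examining one more than half
// of the 2^n inputs settles the question. A quantum computer needs one step.
function isConstant(f: (x: number) => 0 | 1, n: number): { constant: boolean; queries: number } {
  const limit = 2 ** (n - 1) + 1; // worst-case classical query count
  const first = f(0);
  for (let x = 1; x < limit; x++) {
    if (f(x) !== first) return { constant: false, queries: x + 1 }; // a differing output: balanced
  }
  return { constant: true, queries: limit }; // all sampled outputs agree: constant
}

// A balanced 3-bit function (parity of the bits) and a constant one.
const parity = (x: number): 0 | 1 => (((x & 1) ^ ((x >> 1) & 1) ^ ((x >> 2) & 1)) as 0 | 1);
const alwaysOne = (_x: number): 0 | 1 => 1;
```

For a constant 3-bit function, this classical procedure needs 2² + 1 = 5 queries, whereas the quantum algorithm needs one.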
Further developments in the nineties, and leading up to today, popularized quantum computing further. Peter Shor’s algorithm for exponentially accelerating integer factorization, Lov Grover’s quadratically accelerated search in an unordered database, and, more recently, the demonstration of quantum supremacy have made quantum computing very attractive to the wider research and investor communities.
While Shor’s algorithm is one of the main motivations behind large governmental funding for quantum computing (think exponentially accelerated decryption of current encryption standards), the wider applicability of Grover’s algorithm spurred a large number of search-acceleration optimizations. Finally, the demonstration of quantum supremacy showed that it is indeed possible to construct quantum computers that perform certain tasks much faster than any classical computer.
There are two main reasons behind the difficulty for quantum computers to break into the mainstream. Firstly, quantum computing computes in the quantum space. This implies that classical inputs have to be prepared (made quantum before processing) and quantum outputs of the computation have to be measured to be made classical and, therefore, available for further processing. This severely limits the amount of information that can be extracted from the quantum states which, in turn, limits the possible acceleration of computing using quantum computers.
Secondly, quantum states require an almost perfect vacuum and near-absolute-zero temperatures, and they are difficult to keep in the desired quantum state due to decoherence: qubits that interact with the environment lose information. These issues are being gradually solved by progress in material science and by improving the control protocols of quantum operations.
Near-future applications are already visible in the form of quantum security, quantum communication, quantum cryptography and large-scale quantum computation. Quantum computing has the potential to solve many of today’s big data problems by accelerating processing and storing data in an even denser space. Quantum supercomputers will be the first to appear, within the next 10 years.
Building a Greener Web: Six Ways to Put Your Website on an Emissions Diet
By Roberta Haseleu, Practice Lead Green Technology at Reply, Fiorenza Oppici, Live Reply, and Lars Trebing, Vanilla Reply
Most people are unaware of, or underestimate, the impact of the IT sector on the environment. According to the BBC: “If we were to rather crudely divide the 1.7 billion tonnes of greenhouse gas emissions estimated to be produced in the manufacture and running of digital technologies between all internet users around the world, it would mean each of us is responsible for 414kg of carbon dioxide a year.” That’s equivalent to 4.7bn people charging their smartphone 50,000 times.
Every web page produces a carbon footprint that varies depending on its design and development. This deserves closer consideration, as building an energy-efficient website also increases loading speed, which leads to better performance and user experience.
Following are six practical steps developers can take to reduce the environmental impact of their websites.
- Implement modularisation
With traditional websites that don’t rely on single-page apps, each page and view of the site is saved in an individual HTML file. The code only runs, and the data is only downloaded, for the page that the user is visiting, avoiding unnecessary requests. This reduces the transmitted data volume and saves energy.
However, this principle is no longer the standard in modern web design, which is dominated by single-page apps that dynamically render all content at runtime. This approach is easier and faster to code and more user-friendly but, without precautions, it creates unnecessary overhead. In the worst case, accessing the homepage of a website may trigger the transmission of the entire code of the application, including parts that may never be needed.
Modularisation can help. By dividing the code of a website into different modules, i.e. coherent code sections, only the relevant code is referenced. Using modules offers distinct benefits: they keep the scope of the app clean and prevent ‘scope creeps’; they are loaded automatically after the page has been parsed but before the Document Object Model (DOM) is rendered; and, most importantly for green design, they facilitate ‘lazy loading’.
- Adopt lazy loading
The term lazy loading describes a strategy of only loading resources at the moment they are needed. This way, a large image at the bottom of the page will not be loaded unless the user scrolls down to that section.
If a website only consists of a routing module and an app module which contain all views, the site will become very heavy and slow at first load. Smart modularisation, breaking down the site into smaller parts, in combination with lazy loading can help to load only the relevant content when the user is viewing that part of the page.
However, this should not be taken to extremes either: in some cases, loading each resource only at the last moment while scrolling can cancel out the performance gains and result in higher server and network loads. It’s important to find the right balance based on a good understanding of how the app will be used in real life (e.g. whether users will generally move on to the next page after a quick first glance, or scroll all the way down before moving on).
- Monitor build size
Pre-processors and build tools can be configured to prevent a build from completing if its output files are bigger than a configurable threshold. Limits can be set both for the main boot script and for individual CSS chunks, requiring each to be no bigger than a specific byte size after compilation. Any build surpassing those thresholds fails with a warning.
If a build is suspiciously big, a web designer can inspect it and identify which module contributes the most, as well as all its interdependencies. This information allows the programmer to optimise the parts of the website in question.
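In webpack, for example, such size budgets can be expressed through the built-in performance option (a sketch; the byte thresholds below are illustrative, not recommendations, and other bundlers offer similar settings):

```typescript
// webpack.config.ts — sketch of webpack's built-in size budgets.
// The byte thresholds below are illustrative examples only.
export default {
  performance: {
    hints: "error",             // fail the build instead of merely warning ("warning" also works)
    maxEntrypointSize: 250_000, // bytes allowed for the initial boot script
    maxAssetSize: 100_000,      // bytes allowed for any single emitted asset
  },
};
```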
- Eliminate unused code
One potential reason for excessive build sizes can be dozens of configuration files and code meant for scenarios that are never needed. Despite never being executed, this code still takes up bandwidth, thereby consuming extra energy.
Unused parts can be found in one’s own source code but also (and often to a greater extent) in external libraries used as dependencies. Luckily, a technique called ‘tree shaking’ can be used to analyse the code and mark which parts are not referenced by other portions of the code.
Modern pre-processors perform ‘tree shaking’ not only to identify unused code but also to exclude it automatically from the build. This allows them to package only those parts of the code that are needed at runtime – but only if the code is modularised.
- Choose external libraries wisely
One common approach to speed up the development process is by using external libraries. They provide ready-to-use utilities written and tested by other people. However, some of these libraries can be unexpectedly heavy and weigh your code down.
One popular example is Moment.js, a very versatile legacy library for handling international date formats and time zones. Unfortunately, it is also quite big. Moreover, it is neither particularly compatible with the typical TypeScript toolchain nor modular, so even the best pre-processors cannot reduce the weight it adds to the code by means of ‘tree shaking’.
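For many date-formatting needs, the built-in Intl API avoids library weight altogether (a sketch; the locale, styles and time zone below are arbitrary examples):

```typescript
// The built-in Intl API covers many date-formatting and time-zone needs
// with zero added bundle weight; locale, styles and zone here are examples.
const formatter = new Intl.DateTimeFormat("en-GB", {
  dateStyle: "long",
  timeStyle: "short",
  timeZone: "Asia/Tokyo",
});

// Noon UTC on 15 January 2023, rendered as local Tokyo time.
const label = formatter.format(new Date(Date.UTC(2023, 0, 15, 12, 0)));
```

Where Intl does not suffice, smaller and tree-shakable alternatives to Moment.js exist; the Moment.js project itself recommends considering them for new work.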
- Optimise content
Designs can also be optimised by avoiding excessive use of images and video material. Massive use of animation gimmicks such as parallax scrolling also has a negative effect. Depending on the implementation, such animations can massively increase the CPU and GPU load on the client. To test this, consider running the website on a 5 to 10-year-old computer. If scrolling is not smooth and/or the fans jump to maximum speed, this is a very good indication of optimisation potential.
The amount of energy that a website consumes — and thus its carbon footprint — depends, among other factors, on the amount of data that needs to be transmitted to display the requested content to users. By leveraging the six outlined techniques above, web designers can ‘slim’ their websites and contribute to the creation of a more sustainable web whilst boosting performance and user experience in the process.
The Role of Software Development in Shaping the FinTech Industry in 2023 and Beyond
Source: Finance Derivative
Paul Blowers, Commercial Director at Future Processing
As another year passes, now is the time for company leaders to look back at the last 12 months and consider what’s in store for their FinTech businesses in 2023. One of the biggest impacts of last year was undoubtedly the cost of living crisis and increasing interest rates, leading to UK FinTech investment dropping to $9.6 billion in the first half of 2022 – down from $27.8 billion in the same period in 2021. Whilst these challenges remain at the forefront of the industry, there are plenty of innovative developments and technologies evolving in the FinTech space right now that will continue the pace of change. It’s vital for organisations to keep abreast of these trends, to ensure they can remain competitive and continue providing customers with the highest quality products and services.
Innovations in FinTech
In recent years, we have seen larger banks begin to invest more heavily in BaaS (banking as a service). BaaS is a start-to-finish process that digital banks and third parties use to connect their own business infrastructure to a bank’s system via APIs. This allows digital banks or third parties to offer full banking services directly through their non-bank business offerings. Typically, BaaS is associated with smaller banks because of the favourable interchange rates available to banks with under $10 billion in assets. With a bigger focus on commercial BaaS efforts, we can expect to see more vertical partnerships with SaaS providers who already have existing relationships with businesses.
An alternative to providing BaaS is to pursue an embedded FinTech strategy. Embedded FinTech refers to the integration of FinTech products and services into financial institutions’ websites, mobile apps, and business processes. This has been growing at pace since the COVID pandemic and is expected to continue on its upward trajectory, accelerating eCommerce, financial digitalisation and consumer expectations. As a result, we can expect that more platforms will be diversifying their service offerings as they deepen their relationships with small business customers.
Another topic that has been circulating in the FinTech sector is the rise of AI and chatbots. 2023 is set to be the year that this technology fully takes off and integrates with mainstream banks and FinTechs. Chatbots can be defined as rule-based systems which can perform routine tasks such as answering general FAQs. The primary goal of these AI-driven chatbots is to provide human-like support for customers: communicating with them, introducing services, answering their questions and receiving complaints.
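A rule-based chatbot of this kind can be sketched in a few lines (illustrative TypeScript; the rules and canned answers are hypothetical):

```typescript
// Minimal sketch of a rule-based FAQ chatbot: keyword rules mapped to canned
// answers, with a fallback that would hand off to a human agent.
// The rules and answers below are hypothetical examples.
type Rule = { keywords: string[]; answer: string };

const rules: Rule[] = [
  { keywords: ["balance"], answer: "You can check your balance in the app under Accounts." },
  { keywords: ["card", "lost"], answer: "Freeze your card in the app, then order a replacement." },
];

function reply(message: string): string {
  const text = message.toLowerCase();
  // A rule matches when every one of its keywords appears in the message.
  const hit = rules.find(r => r.keywords.every(k => text.includes(k)));
  return hit ? hit.answer : "Let me connect you with a human agent.";
}
```

Production chatbots layer natural-language understanding on top of such rules, but the routing principle is the same.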
Software Development for FinTechs
As banks continue to invest in new technologies and leverage the benefits of adopting BaaS, embedded finance and AI, the focus on software development services also increases. Software is at the heart of every FinTech business, as customers, both existing and potential, demand a high-quality implementation of each product or service. One of the biggest expectations is around user experience, as FinTech leaders aim to provide straightforward, transparent and concise solutions to their customers’ business problems. Additionally, the importance of security cannot be overstated, with the FinTech industry under constant risk of cyberattacks and breaches. With exceptional software development, FinTech solutions can comply with strict security and data-encryption standards whilst offering a polished and streamlined user experience for customers.
The finance industry also faces a constant stream of industry regulations, meaning a compliance strategy must be a priority for FinTechs when considering their software development approach. This means regularly checking and implementing updates to frameworks and software architectures to ensure that app responsiveness, security and performance remain at the forefront. Ultimately, great software increases a FinTech’s opportunity to leverage emerging technologies and keep control over the quality of its service. Timely identification of key trends makes it possible to maximise the digitalisation of finance to drive long-term value for FinTech businesses and their customers.
The Future of FinTech
Whilst technological developments have been major drivers of FinTech innovation, now is the time to further digitise financial services and the banking sector to build a more inclusive and efficient industry that promotes economic growth. FinTechs are stepping up to lead, navigate and disrupt the industry during this time of uncertainty, and software development will play a vital role in shaping the future landscape. With the help of software development, FinTechs will build capabilities and applications that can be easily integrated into the environments where customers are already engaged, meeting their changing needs, new business goals and regulatory demands.
Will cyberattacks be uninsurable in 2023? Three steps that financial organisations can follow now
Source: Finance Derivative
By James Blake, Field CISO of EMEA, Cohesity
The growing number of cyberattacks and the damage they cause have led to increasing demand for cyber insurance. Swiss Re Insurance expects total premiums paid to more than double, from $10 billion in 2020 to $23 billion by 2025. But this is being questioned by both insurance companies and customers: is insurance effective, is it feasible, what does it cover and what does it enable? The CEO of Zurich Insurance, Mario Greco, recently said in an interview with the Financial Times that cyberattacks will soon become “uninsurable”. Indeed, insurance and prevention have both proved ineffective in stopping cyberattacks like ransomware or in enabling organisations to recover afterwards. Instead, organisations must shift their focus onto recovery. What can companies do to meet this challenge? James Blake, Field CISO of EMEA at data management and security provider Cohesity, has three recommendations.
More than 400 million US dollars – that’s how much damage the data leak at Capital One caused in 2019. And the number of such attacks, which have catastrophic consequences for the companies affected, has continued to increase since then. According to Check Point, in the third quarter of 2022 alone, global attacks increased significantly by 28% compared to the same quarter of the previous year.
Where cyber risk used to be limited to areas such as data breaches and third-party liability, ransomware attacks have shifted the damage to core business and accountability. Cyber insurers had to react to the increased risk and have adjusted their offers, as an analysis by Swiss Re Insurance shows. According to PWC, from the insurer perspective, the fast-increasing frequency of ransomware attacks (and the growing associated impacts and ransom demands) and business interruption claims has resulted in cyber becoming a less profitable area of insurance in recent times. The situation has stabilised over the past year as customers have had to pay higher premiums and meet stricter terms and conditions. Swiss Re Insurance expects total premiums paid to more than double from $10 billion to $23 billion by 2025.
More expensive and more difficult to qualify
This is bad news for the financial industry, as insurers are becoming stricter and asking for higher premiums. Cohesity’s legal experts looked at the leading ransomware insurance policies on the market at the end of 2022 and found that ultimately, such guarantees are little more than thinly veiled limitations of liability that benefit the providers – not the customers.
However, there are some measures that companies can use to protect themselves effectively in this new market situation:
- The 3-2-1 strategy remains current: keep an isolated copy of the data
In some cases, organisations are required to keep an isolated offsite copy of their production records as part of a 3-2-1 strategy to qualify for cyber insurance.
To do this, they can use a SaaS service which keeps an encrypted copy of the production data in the cloud, isolated by a virtual air gap. The data stored there is monitored with multi-layered security functions and machine learning, and anomalies are reported immediately.
- Tear down silos and merge data with zero-trust in mind
In general, financial organisations should consolidate all their distributed data on a scalable data management platform and ensure they can back up their data across all their infrastructure and assets. Furthermore, the data must be protected with a zero-trust model, where data is encrypted in transit and at rest and access is strictly regulated with rules and multi-factor authentication. In addition, all data stored on the platform can be managed according to compliance requirements and, thanks to immutable storage, is better protected against ransomware.
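As an illustration of what “encrypted at rest” means in practice, the following sketch applies authenticated encryption (AES-256-GCM) from Node’s built-in crypto module to a backup payload; key management (KMS/HSM, rotation) and the zero-trust access rules are deliberately out of scope here.

```typescript
// Sketch of encryption at rest: authenticated encryption (AES-256-GCM) of a
// backup payload using Node's built-in crypto module. Key management is out
// of scope; in production the key would live in a KMS or HSM.
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

function encrypt(plain: Buffer, key: Buffer) {
  const iv = randomBytes(12); // unique nonce per encryption
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const data = Buffer.concat([cipher.update(plain), cipher.final()]);
  return { iv, data, tag: cipher.getAuthTag() }; // the tag detects tampering
}

function decrypt(box: { iv: Buffer; data: Buffer; tag: Buffer }, key: Buffer): Buffer {
  const decipher = createDecipheriv("aes-256-gcm", key, box.iv);
  decipher.setAuthTag(box.tag); // decryption fails if data or tag were altered
  return Buffer.concat([decipher.update(box.data), decipher.final()]);
}

const key = randomBytes(32); // 256-bit key
const box = encrypt(Buffer.from("backup record"), key);
const roundTrip = decrypt(box, key).toString();
```

The authentication tag is what makes this mode suitable for backups: a tampered copy fails to decrypt rather than silently yielding corrupted data.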
- Improve collaboration between IT and SecOps teams for cyber resiliency
In addition to these technical measures, financial organisations should optimise the collaboration between their IT and security teams and adopt a data-centric focus on cyber resilience. For too long, many security teams have focused primarily on preventing cyberattacks while IT teams have focused on protecting data including backup and recovery.
A comprehensive data security strategy must unite these two worlds and IT and SecOps teams must work together before the attack takes place. Both teams should be guided by the NIST framework. This holistic approach defines five core disciplines: Identify, Protect, Detect, Respond and Recover.
If a financial company can demonstrate such a mature data security strategy, this will not only have a positive effect on insurance cover, but will generally reduce the risk of incidents and possible consequential damage through failure or data loss.