Technology

Basics of Quantum Computing

Source: Finance Derivative

Martin Lukac, Associate Professor from Nazarbayev University School of Engineering and Digital Sciences

A quantum computer is a device that performs quantum computations, harnessing the behaviour of atomic and subatomic particles to carry out high-speed parallel computing.

Quantum computing was conceptually introduced by Richard Feynman in the 1980s as a method for solving instances of the many-body problem, but it was not until recently that it became widely known to the public. The many-body problem is a general name for a large category of physical problems involving systems of microscopic interacting particles.

When compared with other candidate technologies designed to tackle the heat dissipation and Moore’s-law limits of current transistor-based computers, such as DNA computing, 3D transistors or carbon nanotubes, quantum computing has several advantages not available to these “more classical” technologies.

These advantages can be described by four basic postulates defining the principles and possibilities of quantum computing.

The first postulate regards information representation. Classical information in digital computers is represented by logical binary digits (bits). A logical bit can take the value of 1 or 0 depending on whether the voltage in the wire of a logic circuit is High or Low; think of the classic binary coding of 1s and 0s. In contrast, a quantum bit (qubit) is represented by a quantum state described by a wave equation: |ψ⟩ = α|0⟩ + β|1⟩, with α and β being complex numbers subject to |α|² + |β|² = 1. The quantum state specified by this wave equation is a point on the surface of something called a Bloch sphere.
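
To make this concrete, here is a minimal sketch in Python with NumPy (not tied to any particular quantum computing framework) that represents a qubit as a normalized pair of complex amplitudes:

    import numpy as np

    # Basis states |0> and |1> as complex column vectors.
    ket0 = np.array([1, 0], dtype=complex)
    ket1 = np.array([0, 1], dtype=complex)

    # An arbitrary qubit state |psi> = alpha|0> + beta|1>.
    alpha, beta = 1 / np.sqrt(2), 1j / np.sqrt(2)
    psi = alpha * ket0 + beta * ket1

    # The amplitudes must satisfy |alpha|^2 + |beta|^2 = 1.
    assert np.isclose(abs(alpha)**2 + abs(beta)**2, 1.0)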

The second postulate expands on the idea of quantum states: when multiple qubits are used together, the space of their states grows exponentially. A register of n qubits has 2^n basis states, and a set of qubits can represent, in superposition, all combinations of those basis states at once.
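
A short sketch of this exponential growth: combining qubits with the tensor (Kronecker) product doubles the number of amplitudes with every qubit added, so even a handful of qubits spans a sizeable state space.

    import numpy as np

    ket0 = np.array([1, 0], dtype=complex)
    plus = np.array([1, 1], dtype=complex) / np.sqrt(2)  # equal superposition of |0> and |1>

    # Build a 4-qubit register with the Kronecker product.
    state = ket0
    for _ in range(3):
        state = np.kron(state, plus)

    print(state.shape)  # (16,) -- 4 qubits give 2^4 = 16 amplitudes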

The third postulate specifies that qubits and their states are manipulated using a set of unitary matrix operators: square matrices whose inverse is their own conjugate transpose. These matrix operators rotate the qubit state about the axes of the Bloch sphere.
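
As an illustration, applying such an operator is just a matrix-vector multiplication; the Hadamard gate below takes |0⟩ to an equal superposition, and its unitarity is easy to verify:

    import numpy as np

    # Hadamard gate: unitary, so H @ H.conj().T equals the identity.
    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
    assert np.allclose(H @ H.conj().T, np.eye(2))

    ket0 = np.array([1, 0], dtype=complex)
    psi = H @ ket0   # (|0> + |1>) / sqrt(2)
    print(psi)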

The final postulate indicates that quantum information exists only as long as it is not observed. This means that the quantum state of a qubit can contain both of the basis states |0⟩ and |1⟩ at the same time, but when one reads (measures) the quantum state, the result will be either 0 or 1.
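
Measurement can be sketched as sampling a classical outcome with probabilities given by the squared magnitudes of the amplitudes (the Born rule):

    import numpy as np

    psi = np.array([1, 1j], dtype=complex) / np.sqrt(2)  # superposition of |0> and |1>
    probs = np.abs(psi) ** 2                             # p(0) = p(1) = 0.5

    # Each measurement collapses the state to a single classical bit.
    outcomes = np.random.choice([0, 1], size=10, p=probs)
    print(outcomes)  # a mix of 0s and 1s, roughly half of each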

The advantages of quantum computing were first demonstrated by David Deutsch’s algorithm, followed by the Deutsch-Jozsa algorithm, which showed that quantum computers can answer in a single computational step the question of whether a given function is balanced or constant. A constant function returns the same value (all 0s or all 1s) for every input, while a balanced function returns 0 for exactly half of its inputs and 1 for the other half. For a classical computer to answer this question with certainty, it would in the worst case need to examine half of the function’s outputs plus one: if all of those outputs are the same, the function is constant; if any two differ, it is balanced.
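
A hypothetical classical routine makes the contrast clear: deciding the question with certainty can take 2^(n-1) + 1 evaluations of the function, whereas the Deutsch-Jozsa algorithm needs a single quantum evaluation.

    # Classical worst-case test for "constant or balanced" on n-bit inputs,
    # where f is promised to be one or the other.
    def classify(f, n):
        first = f(0)
        for x in range(1, 2 ** (n - 1) + 1):  # half of the inputs, plus one
            if f(x) != first:
                return "balanced"
        return "constant"

    print(classify(lambda x: 0, 3))      # constant
    print(classify(lambda x: x & 1, 3))  # balanced (parity of the last bit)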

Developments in the nineties, and leading up to today, popularized quantum computing further. Peter Shor’s algorithm for exponentially accelerating integer factorization, Lov Grover’s quadratically accelerated search of an unordered database, and, more recently, the demonstration of quantum supremacy have made quantum computing very attractive to the wider research and investor community.

While Shor’s algorithm is one of the main motivations behind large governmental funding of quantum computing (think exponentially accelerated decryption of current encryption standards), the wider applicability of Grover’s algorithm spurred a large number of search-acceleration optimizations. Finally, the demonstration of quantum supremacy showed that it is indeed possible to construct quantum computers that perform certain computations much faster than any classical computer.

There are two main reasons why quantum computers have yet to break into the mainstream. Firstly, quantum computing computes in the quantum space. This implies that classical inputs have to be prepared (made quantum before processing) and that the quantum outputs of the computation have to be measured to be made classical and, therefore, available for further processing. This severely limits the amount of information that can be extracted from the quantum states which, in turn, limits the possible acceleration of computing using quantum computers.

Secondly, qubits require an almost perfect vacuum and near-absolute-zero temperatures, and they tend not to remain in the desired quantum state because of decoherence: qubits that interact with the environment lose information. These issues are gradually being solved by progress in materials science and by improved control protocols for quantum operations.

Near-future applications are already visible in the form of quantum security, quantum communication, quantum cryptography and large-scale quantum computation. Quantum computing has the potential to solve many current big-data problems by accelerating processing and by storing data even more densely. Quantum supercomputers are expected to be the first of these to appear, within the next 10 years.

Business

How can businesses make the cloud optional in their operations?

Max Alexander, Co-founder at Ditto

Modern business apps are built to be cloud-dependent. This is great for accessing limitless compute and data storage capabilities, but when the connection to the cloud is poor or drops altogether, business apps stop working, impacting revenue and service. If real-time data is needed for quick decision-making in fields like healthcare, a stalled app can potentially put people in life-threatening situations.

Organisations in sectors as diverse as airlines, fast-food retail, and ecommerce have deskless staff who need digital tools, accessible on smartphones, tablets and other devices, to do their jobs. But because of widespread connectivity issues and outages, these organisations are beginning to consider how to ensure these tools can operate reliably when the cloud is not accessible.

The short answer is that building applications with a local-first architecture can help to ensure that they remain functional when disconnected from the internet. Why, then, are not all apps built this way? The simple answer is that building and deploying cloud-only applications is much easier, as ready-made tools for developers expedite a lot of the backend building process. The more complex answer is that a local-first architecture solves the issue of offline data accessibility but not the equally critical issue of offline data synchronisation: apps disconnected from the internet still have no way to share data across devices. That is where peer-to-peer data sync and mesh networking come into play.

Combining offline-first architecture with peer-to-peer data sync

In the real world, what does an application like this look like?

  • Apps must prioritise local data sync. Rather than sending data to a remote server, applications must be able to write data to their local database in the first instance, then listen for changes from other devices and merge them as needed. Apps should use local transports such as Bluetooth Low Energy (BLE) and Peer-to-Peer Wi-Fi (P2P Wi-Fi) to communicate data changes whenever the internet, a local server, or the cloud is not available (a minimal sketch of this pattern follows this list).
  • Devices are capable of creating real-time mesh networks. Nearby devices should be able to discover, communicate, and maintain constant connections with devices in areas of limited or no connectivity.
  • Seamlessly transition from online to offline (and vice versa). Combining local sync with mesh networking means that devices in the same mesh are constantly updating a local version of the database and opportunistically syncing those changes with the cloud when it is available.
  • Partitioned between large-peer and small-peer mesh networks, so that smaller devices are not overwhelmed by trying to sync every piece of data. Small peers sync only the data they request, giving developers complete control over bandwidth usage and storage; this is vital when connectivity is erratic or critical data needs prioritising. Large peers, by contrast, sync as much data as they can, typically when there is full access to cloud-based systems.
  • Ad hoc, so that devices can join and leave the mesh as they need to. This also means there can be no central server that other devices rely on.
  • Compatible with all data at any time. All devices should account for incoming data with different schemas, so that a device that is offline and running an outdated app version, for example, can still read new data and sync.
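
Here is a minimal sketch of the local-first pattern described above, assuming a hypothetical LocalStore class and simple last-writer-wins merging (production systems, for example those built on CRDTs, resolve conflicts far more carefully):

    import time

    # Hypothetical local-first store: write to the local database first,
    # then merge changes received from peers or the cloud when available.
    class LocalStore:
        def __init__(self):
            self.docs = {}  # key -> (timestamp, value)

        def write(self, key, value):
            # Never block on the network; the local write always succeeds.
            self.docs[key] = (time.time(), value)

        def merge(self, remote_docs):
            # Last-writer-wins merge of changes arriving over BLE, P2P Wi-Fi or the cloud.
            for key, (ts, value) in remote_docs.items():
                if key not in self.docs or ts > self.docs[key][0]:
                    self.docs[key] = (ts, value)

    # Two devices in the same mesh, with no internet connection.
    kiosk, kitchen = LocalStore(), LocalStore()
    kiosk.write("order-42", {"item": "burger", "status": "placed"})
    kitchen.merge(kiosk.docs)           # peer-to-peer sync, no cloud involved
    print(kitchen.docs["order-42"][1])  # {'item': 'burger', 'status': 'placed'}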

Peer-to-peer sync and mesh networking in practice

Let us take a look at a point-of-sale application in the fast-paced environment of a quick-service restaurant. When an order is taken at a kiosk or counter, that data must travel hundreds of miles to a data centre to arrive at a device four metres away in the kitchen. This is an inefficient process and can slow down or even halt operations, especially if there is an internet outage or any issues with the cloud.

A major fast-food restaurant in the US has already modernised its point of sale system using this new architecture and created one that can move order data between store devices independently of an internet connection. As such, this system is much more resilient in the face of outages, ensuring employees can always deliver best-in-class service, regardless of internet connectivity.

The vast power of cloud-optional computing is showcased in healthcare situations in rural areas in developing countries. By using both peer-to-peer data sync and mesh networking, essential healthcare applications can share critical health information without the Internet or a connection to the cloud. This means that healthcare workers in disconnected environments can now quickly process information and share it with relevant colleagues, empowering faster reaction times that can save lives.

Although the shift from cloud-only to cloud-optional is subtle and will not be obvious to end users, it really is a fundamental paradigm shift. This move provides a number of business opportunities for increasing revenue and efficiencies and helps ensure sustained service for customers.

Business

How 5G is enhancing communication in critical sectors

Luke Wilkinson, MD, Mobile Tornado

In critical sectors where high-stakes situations are common, effective communication is non-negotiable. Whether it’s first responders dealing with a crisis or a construction team coordinating a complex project, the ability to share information quickly and reliably can mean the difference between success and failure.

Long-distance communication became feasible in the 1950s, when wireless network connectivity was first used in mobile radio-telephone systems, often with push-to-talk (PTT) technology. As private companies invested in cellular infrastructure, the networks expanded and data speeds steadily improved. Each major leap forward in mobile network capabilities was classed as a different generation, and thus 1G, 2G, 3G, 4G, and now 5G were born.

5G is the fifth generation of wireless technology and has been gradually rolled out since 2019 when the first commercial 5G network was launched. Since then, the deployment of 5G infrastructure has been steadily increasing, with more and more countries and regions around the world adopting this cutting-edge technology.

Its rollout has been particularly significant for critical sectors that rely heavily on push-to-talk over cellular (PTToC) solutions. With 5G, PTToC communications can be carried out with higher bandwidth and speed, resulting in clearer and more seamless conversations, helping to mitigate risks in difficult scenarios within critical sectors.

How is 5G benefiting businesses?

According to Statista, by 2030, half of all connections worldwide are predicted to use 5G technology, increasing from one-tenth in 2022. This showcases the rapid pace at which 5G is becoming the standard in global communication infrastructure.

But what does this mean for businesses? Two of the key improvements 5G brings are greater bandwidth and faster download speeds, facilitating quicker and more reliable communication within teams. PTToC solutions can harness the capabilities of 5G and bring the benefits to the critical sectors that need them most, whether in public safety, security, or logistics: the use cases are extensive. For example, this could mean leveraging 5G’s increased bandwidth to enable larger group calls and screen sharing for effective communication.

Communication between workers in critical industries can be difficult, as the workforce is often made up of lone workers or small groups of individuals in remote locations. PTToC is indispensable in these scenarios, providing quick and secure communication as well as additional features such as real-time location information and the ability to send SOS alerts. PTToC over 5G works effectively in critical sectors because it is designed to operate across varying network conditions, falling back to older networks such as 2G and 3G where necessary. This ensures that communication remains reliable and efficient even in countries or areas where 5G infrastructure is not fully deployed, keeping remote, lone workers safe and secure.

The impact of 5G on critical communications

The International Telecommunication Union has reported that 95 percent of the world’s population can access a mobile broadband network. This opens up a world of new possibilities for PTToC, particularly as the new capabilities of 5G are rolled out.

One of the most significant improvements brought by 5G is within video communications, which most PTToC solutions now offer. Faster speeds, higher bandwidth, and lower latency enhance the stability and quality of video calls, which are crucial in critical sectors. After all, in industries like public safety, construction, and logistics, the importance of visual information for effective decision-making and situational awareness cannot be overstated. 5G enables the real-time transmission of high-quality video, allowing for effective coordination and response strategies, ultimately improving operational outcomes and safety measures.

Challenges in Adopting 5G in Critical Sectors

While the benefits of 5G are undeniable, the industry faces some challenges in its widespread adoption. Network coverage and interoperability are two key concerns that need to be addressed to ensure communication can keep improving in critical sectors.

According to the International Telecommunication Union, older-generation networks are being phased out in many countries to allow for collaborative 5G standards development across industries. Yet, particularly in lower-income countries in Sub-Saharan Africa, Latin America, and Asia-Pacific, there is a need for infrastructure upgrades and investment to support 5G connectivity. The potential barriers to adoption, including device accessibility, the expense of deploying the new networks, and regulatory issues, must be carefully navigated to help countries make the most out of 5G capabilities within critical sectors and beyond.

However, the rollout of 5G does cause data security concerns for mission-critical communications and operations, as mobile networks present an expanded attack surface. Nonetheless, IT professionals, including PTToC developers, have the means to safeguard remote and lone workers and shield corporate and employee data. Encryption, authentication, remote access, and offline functionality are vital attributes that tackle emerging data threats both on devices and during transmission. Deploying this multi-tiered strategy alongside regular updates substantially diminishes the vulnerabilities associated with exploiting 5G mobile networks and devices within critical sectors.

While the challenges faced by the industry must be addressed, the potential benefits of 5G in enhancing communication and collaboration are undeniable. As the rollout of 5G continues to gain momentum, the benefits of this cutting-edge technology in enhancing communication in critical sectors are becoming increasingly evident. The faster, more reliable, and efficient communication enabled by 5G is crucial for industries that rely on real-time information exchange and decision-making.

Looking ahead, the potential for further advancements and increased adoption of 5G in critical sectors is truly exciting. As the industry continues to address the challenges faced, such as network coverage, interoperability, and data security concerns, we can expect to see even greater integration of this technology across a wide range of mission-critical applications for critical sectors.

Auto

Could electric vehicles be the answer to energy flexibility?

Rolf Bienert, Managing and Technical Director, OpenADR Alliance

Last year, what was then the Department for Business, Energy & Industrial Strategy, together with Ofgem, published its Electric Vehicle Smart Charging Action Plan to unlock the power of electric vehicle (EV) charging. Owners would have the opportunity to charge their vehicles while powering their homes with excess electricity stored in their cars.

Known as vehicle-to-grid (V2G) or vehicle-to-everything (V2X), this is the exchange of energy and communication between a vehicle and another entity. It could involve the transfer of electricity stored in an EV to the home, the grid, or other destinations. V2X requires bi-directional energy flow between the charger and the vehicle, and bi- or unidirectional flow from the charger to the destination, depending on how it is being used.

While there are V2X pilots already out there, it’s considered an emerging technology. The Government is backing it through its V2X Innovation Programme, which aims to address the barriers to enabling energy flexibility from EV charging. Phase 1 will support the development of V2X bi-directional charging prototype hardware, software or business models, while phase 2 will support small-scale V2X demonstrations.

The programme is part of the Flexibility Innovation Programme which looks to enable large-scale widespread electricity system flexibility through smart, flexible, secure, and accessible technologies – and will fund innovation across a range of key smart energy applications.

As part of the initiative, the Government will also fund Demand Side Response (DSR) projects activated through both the Innovation Programme and its Interoperable Demand Side Response (IDSR) Programme, designed to support the innovation and design of IDSR systems. DSR and energy flexibility are becoming increasingly important as demand for energy grows.

The EV potential

EVs offer a potential energy resource, especially at peak times when the electricity grid is under pressure. Designed to power cars weighing two tonnes or more, EV batteries are large, especially when compared to other potential energy resources.

While a typical solar system for the home is around 10kWh, electric car batteries range from 30kWh upwards. A Jaguar i-Pace has an 85kWh battery while the Tesla Model S has a 100kWh battery, which offers a much larger resource. This means that a fully charged EV could support an average home for several days.
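
As a rough, illustrative calculation, assuming an average household consumption of about 10kWh per day and a driver who keeps some charge in reserve for driving:

    # Back-of-the-envelope estimate of how long an EV battery could power a home.
    battery_kwh = 100        # e.g. a Tesla Model S battery
    home_kwh_per_day = 10    # assumed average household consumption
    reserve = 0.2            # keep 20% of the charge in the vehicle

    days = battery_kwh * (1 - reserve) / home_kwh_per_day
    print(f"{days:.0f} days of backup power")  # roughly 8 days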

But to make this a reality, the technology needs to be in place first to ensure there is a stable, reliable and secure supply of power. Most EV charging systems are already connected via apps and control platforms with pre-set systems, so they are easy to access and easy to use. But owners will need to factor in possible additional hardware costs, including inverters for charging and discharging the power.

The vehicle owner must also have control over what they want to do. For example, how much of the charge from the car battery they want to make available to the grid and how much they want to leave in the vehicle.

The concept of bi-directional charging means that vehicles need to be designed with bi-directional power flow in mind, and Electric Vehicle Supply Equipment will have to be upgraded to Electric Vehicle Power Exchange Equipment (EVPE).

Critical success factors

Open standards will be also critical to the success of this opportunity, and to ensure the charging infrastructure for V2X and V2G use cases is fit for purpose.

There are also lifecycle implications for the battery that need to be addressed, as bi-directional charging can lead to degradation and a shortened battery life. Typically, EVs are sold with an eight-year battery life, though this depends on the model, so drivers might be reluctant to add extra wear and tear, or to pay for new batteries ahead of time.

There is also the question of power quality. With more and more high-powered inverters pushing power into the grid, power quality could fall below standard, which may require periodic grid-code adjustments.

But before this becomes reality, it has to be something that EV owners want. The industry is looking to educate users about the benefits and opportunities of V2X, but is it enough? We need a unified message, from automotive companies and OEMs, to government, and a concerted effort to promote new smart energy initiatives.

While plans for a ban on the sale of new petrol and diesel vehicles are not yet agreed, figures from the IEA show that by 2035 one in four vehicles on the road will be electric. So, it’s time to raise awareness of the opportunities of these programmes.

With trials already happening in the UK, US, and other markets, I’m optimistic that this technology could become a market disruptor.
