Quantum-computing related developments

On this page we post about interesting quantum-computing related research and news which we are following.

Quantum resources

Compilation of Fault-Tolerant Quantum Heuristics for Combinatorial Optimization

Various heuristic quantum optimization approaches have been suggested for solving combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually only the performance of the heuristic approach is examined; however, an equally important aspect is its implementation cost on actual hardware. During the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (on the order of 10⁻³ – 10⁻⁴), which makes long computations unreliable.

This paper reviews existing approaches and develops new methods or improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines, so optimized circuits for those primitives are of interest regardless of which heuristic turns out to be most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Rather than investigating how many times those primitives must be repeated, the authors focus on the best strategies for realizing those primitives within a fault-tolerant cost model. The goal of the paper is to estimate the performance of an early universal quantum computer on the key steps of combinatorial optimization.

These bottleneck primitives are compiled for several families of optimization problems, and the authors report for how long, and for what system sizes, one can run these heuristics within the surface code given a range of resource budgets. The results discourage the notion that any quantum optimization heuristic offering only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface-code processors without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown of error-corrected quantum computation relative to classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way we realize error correction.

For example, the authors calculate that to implement problems of size between N = 64 and N = 1024, hundreds of thousands of physical qubits are required when physical gate error rates are on the order of 10⁻⁴, and sometimes over a million are required for physical gate error rates on the order of 10⁻³. Even more concerning is that the number of updates achieved in a day (given realistic cycle times for the error-correcting codes) is relatively low, on the order of about ten thousand updates for the smallest instances of the cheapest cost functions considered. With such overheads, these heuristics would need to yield dramatically better improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface-code processors.
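
To get some intuition for why a quadratic speedup with such overheads is so fragile, consider a back-of-the-envelope crossover estimate (our own illustration, not a calculation from the paper): if a classical heuristic performs many updates per second while the error-corrected quantum version manages only a tiny fraction of that rate, but needs only on the order of √T steps where the classical algorithm needs T, the quantum approach only wins once T exceeds roughly the square of the rate ratio. A minimal sketch, assuming an illustrative classical update rate and the roughly ten thousand error-corrected updates per day quoted above:

```python
# Back-of-the-envelope crossover for a quadratic quantum speedup that is
# hampered by a large constant-factor slowdown from error correction.
# The rates below are assumptions for illustration only.

def classical_seconds(T, classical_rate):
    """Wall-clock time for T heuristic updates on a classical machine."""
    return T / classical_rate

def quantum_seconds(T, quantum_rate):
    """Wall-clock time for ~sqrt(T) updates on an error-corrected quantum machine."""
    return T ** 0.5 / quantum_rate

classical_rate = 1e9            # assumed: 10^9 classical updates per second
quantum_rate = 1e4 / 86400      # ~10^4 error-corrected updates per day (order of magnitude from the paper)

for k in range(6, 31, 3):
    T = 10 ** k
    tc, tq = classical_seconds(T, classical_rate), quantum_seconds(T, quantum_rate)
    winner = "quantum" if tq < tc else "classical"
    print(f"T = 1e{k:02d}: classical {tc:10.3g} s, quantum {tq:10.3g} s -> {winner}")
```

With these assumed rates the crossover only occurs for astronomically many updates T (around the square of the rate ratio, here roughly 10²⁰), which is the intuition behind the paper's pessimistic conclusion for purely quadratic speedups.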

Quantum Approximate Optimization for Hard Problems in Linear Algebra

A hallmark problem in computational linear algebra is binary linear least squares (BLLS), which is formally NP-hard. Efficient classical methods for solving this problem exist, but only with limited approximations to the solution. Quantum computing may solve such problems with a better approximation ratio and/or faster runtime scaling. So far, this problem had only been considered on a quantum annealer, by mapping it to a QUBO. In this paper, the problem is solved using a QAOA approach in the gate-based model of quantum computing. The performance is assessed on a wavefunction simulator, a shot-noise simulator and the 5-qubit IBM cloud quantum device ibmq_london. As an outlook, BLLS may serve as a building block for other problems such as Non-negative Binary Matrix Factorization, or several binary variables may be combined into a fixed-point approximation of real variables. This paper was partially supervised by Vincent Elfving from Qu & Co.
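
For context, BLLS asks for a binary vector x ∈ {0,1}ⁿ minimizing ‖Ax − b‖². Expanding the square, and using the fact that xᵢ² = xᵢ for binary variables, gives exactly a QUBO, which is the standard reduction the annealing approach relies on. A minimal sketch of that mapping (our own illustration, not code from the paper):

```python
import numpy as np

def blls_to_qubo(A, b):
    """Map binary linear least squares  min_x ||A x - b||^2  (x binary)
    to a QUBO matrix Q and constant offset, so that the cost equals x^T Q x + offset."""
    # ||A x - b||^2 = x^T (A^T A) x - 2 b^T A x + b^T b
    Q = (A.T @ A).astype(float)
    linear = -2.0 * (A.T @ b)            # coefficients of the linear terms
    # For binary x we have x_i^2 = x_i, so the linear terms fold into the diagonal.
    Q[np.diag_indices_from(Q)] += linear
    offset = float(b @ b)                # constant, irrelevant for the optimization
    return Q, offset

# Tiny usage example with assumed data.
A = np.array([[1.0, 2.0], [0.0, 1.0], [1.0, 0.0]])
b = np.array([3.0, 1.0, 1.0])
Q, offset = blls_to_qubo(A, b)
x = np.array([1, 1])                     # candidate binary solution
print(x @ Q @ x + offset)                # equals ||A x - b||^2 = 0 here
```

The resulting QUBO can then be handed either to an annealer or, as in the paper, encoded as a cost Hamiltonian for QAOA on a gate-based device.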

How many qubits are needed for quantum computational supremacy?

Quantum computational supremacy (QCS) arguments have been put forward to suggest that quantum computers will soon outperform classical computers on certain tasks. However, in order to truly claim supremacy, several strict requirements need to be met: an appropriate task must be selected, a quantum device able to run it must be designed, the ability to verify the results of the computation must be considered, and a complexity-theoretic argument must support the claim that a classical computer would be unable to perform the same task efficiently. Quantum circuits running on currently realized quantum chips might still be simulable on highly parallelized state-of-the-art classical supercomputers, so at the moment one can only make conjectures about QCS. Typically, classical simulation of certain families of quantum circuits requires time that scales worse than any polynomial in the size of the circuit; however, such asymptotic statements alone do not tell us exactly how many qubits these quantum circuits must have for their classical simulation to become intractable on modern classical supercomputers.

In this paper, three refined, fine-grained conjectures about quantum supremacy are formulated, and based on them it is calculated that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements for boson-sampling circuits are sufficient. Although noise in current quantum devices is not fully modelled, a lower bound on the runtime of any multiplicative-error classical simulation of all three circuit families is provided.

This paper provides concrete estimates of the number of qubits required for three algorithms that have gained a lot of attention during the NISQ era. While the original work stems from 2018, the required qubit numbers have been recalculated in the newest version of the paper, which gives a good indication of how the fidelity of quantum chips has improved over the last two years, as well as of the latest understanding of the underlying complexity assumptions and of the ongoing evolution of the classical competition.

QAOA

The Quantum Approximate Optimization Algorithm Needs to See the Whole Graph

The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated due to its potential during the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, and then p layers of the unitaries U(C, γ) and U(B, β) are alternately applied; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
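
For reference, the QAOA trial state with p layers can be written in the standard notation (not specific to these papers) as

$$
|\gamma,\beta\rangle = U(B,\beta_p)\,U(C,\gamma_p)\cdots U(B,\beta_1)\,U(C,\gamma_1)\,|s\rangle,
\qquad U(C,\gamma)=e^{-i\gamma C},\quad U(B,\beta)=e^{-i\beta B},\quad |s\rangle=|+\rangle^{\otimes n},
$$

where C is the diagonal cost Hamiltonian encoding the combinatorial problem, B = Σ_j X_j is the transverse-field mixer, and the 2p angles (γ, β) are optimized classically to extremize ⟨γ,β|C|γ,β⟩.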

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the focus is on Maximum Independent Set (MIS) on random graphs: finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but it is shown that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w·log(n)/log(d/ln 2) for any w < 1, for d big enough. The quantum algorithm consists of p unitaries that each respect the locality of the underlying graph. With a fixed average degree d, this means that each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph these influence spheres do not intersect, and it can be shown that measurements of these qubits are uncorrelated; however, if p is large enough that d^p exceeds n, the argument no longer applies and there is no indication that the QAOA will fail.

The Overlap Gap Property (OGP) states that for a given random graph, the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log n, the QAOA cannot do better than finding an independent set of size 0.854 times the optimum, for large d. Because the logarithm grows slowly, even at one million qubits the algorithm can only be shown to be blocked if p is in the single digits. At higher p the algorithm “sees” the whole graph, and there is no indication that its performance is limited.
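
To get a feel for the scale of this barrier, one can plug numbers into the bound 2p < w·log(n)/log(d/ln 2) quoted above. A minimal sketch (our own back-of-the-envelope check, assuming natural logarithms and illustrative values of d and w):

```python
import math

def max_blocked_depth(n, d, w=0.99):
    """Largest integer p satisfying 2p < w * log(n) / log(d / ln 2),
    i.e. the depths for which the locality-based performance barrier applies."""
    bound = w * math.log(n) / math.log(d / math.log(2))
    return int(math.floor((bound - 1e-12) / 2))

n = 10 ** 6  # one million vertices / qubits
for d in (5, 10, 20, 50):
    print(f"average degree d = {d:2d}: barrier applies up to p = {max_blocked_depth(n, d)}")
```

With these illustrative numbers the barrier indeed only applies for p in the single digits, consistent with the remark above.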

Worst-case instances that exploit the QAOA’s weaknesses can also be constructed easily. By construction of the operators C and B, the QAOA is an inherently local quantum algorithm: conjugating a single-qubit operator by one layer produces an operator involving only that qubit and those connected to it on the graph, so a shallow circuit only spreads information over a limited neighbourhood. This can be exploited to construct examples where the QAOA’s performance is provably below optimal. For example, for Max-Cut, when p is a small enough constant times log(n), the approximation ratio is shown to be no better than 1/2 for large d, and for MIS the approximation ratio goes to 0 at large d.
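
The locality statement can be made concrete with a standard identity (our own added step, not a quote from the papers): since the cost Hamiltonian is a sum of mutually commuting edge terms, and the terms that do not touch qubit j commute with any operator O_j acting only on that qubit, one layer of conjugation only spreads support to the neighbours of j,

$$
U(C,\gamma)^{\dagger}\, O_j\, U(C,\gamma)
= e^{\,i\gamma \sum_{\langle kl\rangle} C_{kl}}\, O_j\, e^{-i\gamma \sum_{\langle kl\rangle} C_{kl}}
= e^{\,i\gamma \sum_{\langle kl\rangle \ni j} C_{kl}}\, O_j\, e^{-i\gamma \sum_{\langle kl\rangle \ni j} C_{kl}},
$$

so after p alternating layers the support of a single-qubit operator grows at most to the distance-p neighbourhood of j, which is the influence sphere of roughly d^p qubits mentioned above.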

This is an important result for the problem of MIS on random graphs which, although not directly generalizable to other problems, is still valuable for establishing bounds for problems to which the QAOA is applied. Knowing that in the average case for MIS the QAOA needs to see the whole graph, and therefore requires a large p, will change the way the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Quantum-hype during COVID-19

At Qu & Co we have always restrained ourselves from reacting to exaggerated claims about the short-term potential of quantum computing. Rather, we focus on our scientifically rigorous work to advance the field, as we strongly believe in its long-term potential. However, we draw the line at quantum being pushed as a short-term solution for researchers working on COVID, like this WSJ article in which a quantum hardware manufacturer offers free hardware access to researchers studying COVID, stating ‘we have a fairly unique system that could add value’. Although this offer could be a misplaced April-fools joke, we want to stress that, although quantum has strong long-term potential, there is zero chance it will provide any short-term value for COVID research. Therefore, no serious researcher working on the current pandemic should be distracted by this offer. If you are determined to use novel methods to solve today’s combinatorial optimisation problems, perhaps try simulated annealing on a purpose-built classical processor. And of course, if your time horizon is >2 years and you want to work on collaborative quantum-algorithm R&D, without distracting scarce COVID R&D staff, we are here to help. Stay safe and focused!

Small quantum computers and large classical data sets

Quantum computers are envisioned to offer significantly higher computational capabilities than their classical counterparts, especially for optimization and machine-learning problems that involve a large classical data set. However, existing quantum algorithms use the naive approach of turning large classical data sets into either quantum oracles or quantum states, which is so expensive that it negates any possible quantum advantage. Such quantum algorithms therefore focus on problems in which the classical runtime scales rapidly with the input size, perhaps exponentially. To achieve a quantum speedup with algorithms like Grover search, a “quantum RAM” has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and building one might face the same challenges that quantum computing hardware itself faces, it could potentially provide significant speedups for applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that the quantum computer has superposition access to Y but not to X. A data-reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model as the full data set. The coreset can be constructed either by a classical computer alone or by a combination of classical and quantum computation using quantum measurements.
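
As a rough illustration of the coreset idea (our own minimal sketch, not one of the constructions used in the paper): a weighted subsample of X can stand in for the full data set when evaluating the loss of candidate models, shown here for a k-means-style objective with simple uniform sampling.

```python
import numpy as np

rng = np.random.default_rng(0)

def kmeans_loss(X, centers, weights=None):
    """Sum of (weighted) squared distances of each point to its nearest center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    return d2.sum() if weights is None else (weights * d2).sum()

def uniform_coreset(X, m):
    """Toy coreset: m points sampled uniformly, each weighted by n/m so that
    the weighted loss estimates the full-data loss in expectation."""
    n = len(X)
    idx = rng.choice(n, size=m, replace=False)
    weights = np.full(m, n / m)
    return X[idx], weights

# Assumed synthetic data: 100k points, evaluate one candidate model (2 centers).
X = rng.normal(size=(100_000, 2))
centers = np.array([[0.0, 0.0], [2.0, 2.0]])

C, w = uniform_coreset(X, m=1_000)
print("full-data loss  :", kmeans_loss(X, centers))
print("coreset estimate:", kmeans_loss(C, centers, w))
```

In the paper's setting, the (small) quantum computer then works only with the coreset, e.g. by searching the model space Y in superposition, while the large data set X never has to be loaded into the quantum device.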

The main message of this work is that, in order to avoid losing quantum speedup for ‘big-data’ applications, data-reduction techniques are needed, so that the time spent loading and storing the data set remains limited. Also, non-fault-tolerant quantum algorithms should be designed so as not to require an excessive number of gates, so that the algorithm can run before the qubits lose their coherence and invalidate the result. The goal of the paper is to draw attention to the questions that arise from this approach, such as how to test for quantum advantage when data reduction is used, to explore general data-reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Quantum simulations of materials on near-term quantum computers

Quantum computers were initially proposed to efficiently simulate quantum mechanical systems with an exponential speedup compared to classical computers. We are currently in the noisy intermediate-scale quantum (NISQ) era, which means quantum chips still have a small number of qubits. This prohibits straightforward quantum simulations of realistic molecules and materials, whose description requires hundreds of atoms and thousands to millions of degrees of freedom to represent the electronic wavefunctions. One research direction which attempts to bypass this restriction is the development of hybrid quantum-classical methods where the quantum computation is restricted to a small portion of the system.

In this paper, a quantum embedding theory is proposed for the calculation of strongly-correlated electronic states of active regions, while the rest of the system is described with density functional theory (DFT). DFT (and its various approximations) has been extremely successful in predicting numerous properties of solids, liquids and molecules, and in providing key interpretations of a variety of experimental results, but it is often inadequate for describing strongly-correlated electronic states. The theory proposed in this paper is built on DFT and does not require the explicit evaluation of virtual electronic states, thus making the method scalable to materials with thousands of electrons. It also includes the effect of exchange-correlation interactions of the environment on the active regions, thus going beyond commonly adopted approximations in conventional DFT.

The proposed quantum embedding theory uses both classical and quantum algorithms to solve the Hamiltonian that describes the problem, and yields results in good agreement with existing experimental measurements and with still-tractable computations on classical computing architectures. The theory is tested on materials relevant for solid-state quantum information technologies, which exhibit strongly-correlated electronic states. In this way, the authors show how a hybrid quantum-classical approach incorporating DFT enables the study of large-scale material systems while adding the strongly-correlated analysis that the quantum simulation algorithm can provide.

Quantum-computational-chemistry in the restricted Hartree-Fock space

In this arXiv submission by Qu & Co and Covestro, a well-known approximation from classical computational methods for quantum chemistry is applied to a quantum computing scheme for simulating molecular chemistry efficiently on near-term quantum devices. The restricted mapping allows for a polynomial reduction in both the quantum circuit depth and the total number of measurements required, as compared to conventional variational approaches to near-term quantum simulation of molecular chemistry, such as UCCSD. This enables faster runtime convergence of the variational algorithm to a potentially higher accuracy, by using a larger basis set, which the restricted mapping makes affordable. The latter is shown via an example simulation of the dissociation curve of lithium hydride. These results open up a new direction for efficient near-term quantum chemistry simulation, and also decrease the effective quantum resource requirements for future fault-tolerant quantum computing schemes.

Scott Aaronson’s quantum supremacy FAQ

Recently, some contributors to a paper describing a quantum-supremacy experiment inadvertently posted an older version of the paper online, which was quickly picked up by the popular press, resulting in a flurry of (in many cases) unfounded claims about the progress of quantum computing. We believe it is important for people interested in this topic to inform themselves by reading a balanced opinion from an expert in the field. We therefore kindly refer to Scott Aaronson's excellent blog post on this matter.

Quantum computing’s potential impact on the chemical sector

In this article by McKinsey & Co, a strategy consulting firm, Florian Budde and Daniel Volz state that chemical companies must act now to capture the benefits of quantum computing. Of course, we at Qu & Co are a bit biased on this topic, but we do agree with the authors that the chemical sector is likely to be an early beneficiary of the vastly expanded modeling and computational capabilities that quantum computing promises to unlock.
