Displaying items by tag: FTQC algorithms

Qu&Co comments on this publication:

Most near-term quantum-computational chemistry experiments have so far been implemented using the Variational Quantum Eigensolver (VQE), a hybrid classical-quantum algorithm, as an alternative to Quantum Phase Estimation (QPE). This is because QPE requires many orders of magnitude more quantum gates than is feasible within the typical coherence times of current and near-term quantum processors. As an alternative, in this paper, Paesani et al. report experimental results of a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum-photonic device. They verify that the approach is well suited for NISQ quantum processors by demonstrating its superior robustness to noise and decoherence compared with the iterative phase estimation algorithm. These results show a promising route to unlocking the power of QPE much sooner than previously believed possible.
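To give a flavor of such an adaptive Bayesian scheme, the sketch below maintains a gridded posterior over the unknown phase and updates it after each single-shot experiment. The grid-based posterior, the 1/sigma experiment-selection heuristic and the likelihood parametrization are our own illustrative assumptions, not the exact protocol of Paesani et al.

```python
import numpy as np

# Minimal sketch of adaptive Bayesian phase estimation (illustrative only).
rng = np.random.default_rng(0)
phi_true = 1.234                                 # unknown phase to be estimated
grid = np.linspace(0.0, 2.0 * np.pi, 2048, endpoint=False)
posterior = np.full(grid.size, 1.0 / grid.size)  # flat prior over [0, 2*pi)

for _ in range(60):
    # Adaptively pick the number of unitary repetitions M from the current
    # posterior spread (a common heuristic: M ~ 1/sigma), capped so the
    # likelihood stays resolvable on this grid, plus a random control angle.
    mean = np.sum(posterior * grid)
    sigma = np.sqrt(np.sum(posterior * (grid - mean) ** 2))
    M = int(np.clip(np.ceil(1.25 / max(sigma, 1e-6)), 1, 512))
    theta = rng.uniform(0.0, 2.0 * np.pi)

    # Simulate one measurement shot: P(outcome = 0) = (1 + cos(M*(phi - theta)))/2
    p0 = 0.5 * (1.0 + np.cos(M * (phi_true - theta)))
    outcome_is_one = rng.random() > p0

    # Bayes rule on the grid, then renormalize.
    likelihood = 0.5 * (1.0 + np.cos(M * (grid - theta)))
    if outcome_is_one:
        likelihood = 1.0 - likelihood
    posterior *= likelihood
    posterior /= posterior.sum()

print(f"estimate: {grid[np.argmax(posterior)]:.4f}  (true value: {phi_true})")
```

The NISQ-friendly feature is visible here: each experiment is a single shallow-circuit shot, and the statistical burden of combining the shots is carried by the classical Bayesian update.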


Qu&Co comments on this publication:

At its core, the detailed understanding and prediction of complex chemical reaction mechanisms requires highly accurate electronic structure methods. For molecules with many energetically close-lying orbitals, even fewer than a hundred strongly correlated electrons are already out of reach for classical calculation methods that could achieve the required accuracy. In this paper, Reiher et al. use the open problem of biological nitrogen fixation in nitrogenase as an example to show how quantum computers can augment the classical computer simulations used to probe these reaction mechanisms, significantly increasing their accuracy and enabling hitherto intractable simulations. They demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources (in this case as few as 111 qubits and about 1.0x10^14 T gates).
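For a sense of scale, a back-of-the-envelope calculation turns the quoted T-count into wall-clock time. The effective T-gate consumption rates below are our own assumptions for illustration, not figures from Reiher et al.

```python
# Back-of-the-envelope runtime for ~1.0e14 T gates (the T-gate consumption
# rates are illustrative assumptions, not figures from Reiher et al.).
T_COUNT = 1.0e14

for rate_hz in (1e4, 1e5, 1e6):        # effective T gates consumed per second
    years = T_COUNT / rate_hz / 86400 / 365.25
    print(f"{rate_hz:9.0e} T/s  ->  {years:10.1f} years")
```

This is why magic-state distillation throughput, i.e. how many T gates per second the machine can actually consume, dominates such resource estimates at least as much as the raw qubit count does.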


Qu&Co comments on this publication:

Various heuristic quantum optimization approaches have been suggested for solving combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually the performance of the heuristic approach is examined; however, an equally important aspect is its applicability to current hardware. During the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (10^-3 to 10^-4), which makes the computation inefficient.
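A quick calculation shows why error rates at this level are so limiting without error correction: under a simple independent-error assumption (ours, for illustration), a circuit of G gates succeeds with probability roughly (1 - p)^G.

```python
import math

# How deep can an uncorrected circuit be before it fails half the time?
# Assumes independent gate errors: success probability ~ (1 - p)**G.
for p in (1e-3, 1e-4):
    g_half = math.log(0.5) / math.log(1.0 - p)
    print(f"p = {p:.0e}: ~{g_half:,.0f} gates before success drops below 50%")
```

A few hundred to a few thousand reliable gates falls far short of what fault-tolerant algorithms demand, hence the error-corrected cost models discussed below.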

This paper reviews existing approaches and develops new methods or improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines; thus, optimized circuits for those primitives should be of interest regardless of which heuristic proves most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Instead of investigating how many times those primitives must be repeated, the focus is on the best strategies for realizing those primitives within a fault-tolerant cost model. The goal of this paper is to estimate the performance of an early universal quantum computer on the key steps of combinatorial optimization.

These bottlenecks are compiled for several families of optimization problems, and the authors report for how long, and for what system sizes, one can run these heuristics in the surface code given a range of resource budgets. The results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface-code processors without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown between error-corrected quantum computation and classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way we realize error correction.
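The constant-factor argument can be made concrete with a simple crossover estimate. The step times below are assumptions chosen only to illustrate the scaling, not numbers taken from the paper.

```python
# Break-even problem size for a quadratic speedup: quantum wins once
# sqrt(N) * t_q < N * t_c, i.e. N > (t_q / t_c)**2.
# Both step times are illustrative assumptions.
t_c = 1e-9      # classical heuristic update, ~1 ns
t_q = 1e-2      # error-corrected quantum update, ~10 ms

n_star = (t_q / t_c) ** 2
wallclock_days = (n_star ** 0.5) * t_q / 86400
print(f"break-even at N ~ {n_star:.0e} steps; "
      f"the quantum run alone then takes ~{wallclock_days:.1f} days")
```

With a seven-order-of-magnitude gap in per-step cost, the quadratic advantage only pays off on runs so long that they border on impractical, which is exactly the paper's concern.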

For example, the authors calculate that implementing problems between N = 64 and N = 1024 requires hundreds of thousands of physical qubits when physical gate error rates are on the order of 10^-4, and sometimes over a million when they are on the order of 10^-3. Even more concerning, the number of updates achievable in a day (given realistic cycle times for the error-correcting codes) is relatively low: on the order of ten thousand updates for the smallest instances of the cheapest cost functions considered. With such overheads, these heuristics would need to yield dramatically larger improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface-code processors.
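Such qubit counts and update rates follow from standard surface-code scaling arguments. The sketch below uses the common rules of thumb (logical error per operation ~ 0.1*(p/p_th)^((d+1)/2), roughly 2d^2 physical qubits per logical qubit); the threshold, total circuit volume, error budget and logical depth per update are our own assumptions, not the paper's exact model.

```python
# Rough surface-code footprint and update rate (rule-of-thumb scaling;
# threshold, circuit volume, error budget and per-update depth are assumptions).
P_THRESHOLD = 1e-2      # assumed surface-code threshold
LOGICAL_OPS = 1e10      # assumed total logical operations in the run
ERROR_BUDGET = 1e-2     # assumed acceptable total failure probability

def code_distance(p_phys):
    d = 3
    while 0.1 * (p_phys / P_THRESHOLD) ** ((d + 1) / 2) * LOGICAL_OPS > ERROR_BUDGET:
        d += 2
    return d

for p in (1e-4, 1e-3):
    d = code_distance(p)
    per_logical = 2 * d * d                  # ~2*d^2 physical qubits per logical
    print(f"p = {p:.0e}: distance {d}, ~{1024 * per_logical:,} physical qubits "
          f"for 1024 logical qubits")

cycle_s = 1e-6            # assumed 1 us surface-code cycle time
cycles_per_update = 5e6   # assumed logical depth of one heuristic step
print(f"~{86400 / (cycle_s * cycles_per_update):,.0f} updates per day")
```

Under these assumptions the footprint and the updates-per-day figure land on the same orders of magnitude as those quoted above, which is the crux of the authors' pessimism about merely quadratic speedups.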

