Displaying items by tag: FTQC algorithms

In recent years, there have been significant developments in the field of quantum algorithms, especially in applications involving quantum chemistry and representations of molecular Hamiltonians. However, existing methods for simulating quantum chemistry, in particular those leveraging simple basis functions like plane waves, are not feasible when scaled towards the continuum limit. This is because the majority of quantum computing algorithms for chemistry are based on second quantized simulations, where the required number of qubits scales linearly with the number of spin-orbital basis functions.

To overcome this limitation, first quantized simulations of chemistry have been proposed, where the idea is to track the basis state each particle is in, instead of storing the occupancy of each basis state. This is advantageous because the number of qubits needed to represent the state scales only logarithmically with the number of basis functions. Such simulations are also well suited to cases where the entanglement between the electronic and nuclear subsystems is non-negligible. A rough comparison of the two encodings is sketched below.
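As a back-of-the-envelope illustration (our own sketch, not taken from the paper; the particle count and basis size are arbitrary example values), the two qubit-scaling regimes can be compared directly:

```python
# Rough, illustrative comparison of the two encodings (example values are
# arbitrary; this sketch is ours, not taken from the paper).
import math

def second_quantized_qubits(num_basis_functions: int) -> int:
    # Second quantization: one occupancy qubit per spin-orbital basis function.
    return num_basis_functions

def first_quantized_qubits(num_particles: int, num_basis_functions: int) -> int:
    # First quantization: each of the eta particles stores the index of its
    # basis state, using ceil(log2(N)) qubits.
    return num_particles * math.ceil(math.log2(num_basis_functions))

# Example: 100 electrons in a basis of one million plane waves.
print(second_quantized_qubits(10**6))      # 1000000 qubits
print(first_quantized_qubits(100, 10**6))  # 100 * 20 = 2000 qubits
```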

In this work, the authors analyze the finite resources required to implement two first quantized quantum algorithms for chemistry: block encodings for the qubitization and interaction picture frameworks. The objective is to compile and optimize these algorithms using a plane wave basis within an error-correctable gate set, as well as to develop new techniques that reduce the complexity. For qubitization of the Hamiltonian, control registers are used to select the momenta as well as the individual bits to multiply, which significantly decreases the multiplication cost. The results show that the qubitization algorithm requires much less surface code spacetime volume for simulating millions of plane waves than the best second quantized algorithms require for simulating hundreds of Gaussian orbitals. In the interaction picture based approach, the cost of computing the total kinetic energy is reduced by computing the sum of squares of the momenta once at the beginning.
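For context, in a plane wave basis the first quantized kinetic operator is diagonal in the momentum registers, which is what makes this sum-of-squares precomputation natural. The following is the standard textbook form (the notation is ours, not quoted from the paper):

```latex
% Standard first quantized kinetic operator in a plane-wave basis
% (notation is ours: \eta electrons, plane waves indexed by p \in G,
% computational cell volume \Omega).
T = \sum_{i=1}^{\eta} \sum_{p \in G}
      \frac{\lVert k_p \rVert^{2}}{2}\, \lvert p \rangle\!\langle p \rvert_{i},
\qquad k_p = \frac{2\pi p}{\Omega^{1/3}}
```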

The work shows that the total state preparation cost is reduced by a factor of 3 when one assumes that amplitude amplification is only needed for the kinetic energy operator, rather than applying the standard amplitude amplification to the total energy. The number of bits needed for the discretization of the time intervals is also reduced by a factor of two, following prior well-established methods. The main complexity is therefore concentrated in the selection of the momenta registers. Unlike previous approaches, the algorithm employs only the dynamic degrees of freedom of the electrons that participate in reactivity, which reduces the encoding complexity. This is particularly advantageous for large numbers of electrons in the qubitization case and, despite the improved asymptotic scaling of the interaction picture based method, the qubitization based method proves to be the more practical of the two. The numerical experiments also reveal that these approaches require significantly fewer resources to reach comparable accuracy than second quantized methods. Another interesting finding is that the first quantized approaches developed here may give lower Toffoli complexities than previous work for realistic simulations of both material Hamiltonians and non-periodic molecules, suggesting greater fault-tolerant viability than second quantized methods.

This work provides the first explicit circuits and constant factors for any first quantized quantum simulation algorithm, which is a promising direction for simulating realistic materials Hamiltonians within quantum error-correction. Perhaps more impressively, the authors have also improved the constant factors; the results demonstrate a reduction in circuit complexity of about a thousandfold compared to naive implementations for modest sized systems. The work also provides insight into the resources required to simulate various molecules and materials, and gives a first account of how, in first quantization, one might prepare initial states with the required overlap with the target eigenstate for phase estimation. This suggests many potential advantages over second quantization-based algorithms, especially in the continuum limit.

In addition to the progress made by the current work, the authors themselves have already proposed many potential further improvements, such as modifying the algorithm to encode pseudopotentials specific to the problem, relating it to bonds and chemical reactivity. Furthermore, using a non-orthogonal reciprocal lattice and achieving efficient convergence in the thermodynamic limit might be good directions for future research. The presented algorithms could also be adapted to suit systems of reduced periodicity more natively, even though the underlying representation is based on plane waves. The authors acknowledge that relatively little is known about state preparation in first quantization compared to second quantization; future work may investigate whether straightforward translations from the latter body of work to the former will be sufficiently efficient. Although this work focuses on preparing eigenstates for energy estimation, it could also be extended to the simulation of (non-)Born-Oppenheimer dynamics.

The paper offers interesting insights and significant progress in designing first quantization picture quantum chemistry simulations on fault-tolerant hardware. Several strong advantages over second quantization methods are highlighted, and it will be interesting to see how this field evolves further.

Qu&Co comments on this publication:

Most near-term quantum-computational chemistry experiments have so far been implemented using the Variational Quantum Eigensolver (VQE) classical-quantum hybrid algorithm as an alternative to Quantum Phase Estimation (QPE). This is because QPE requires many orders of magnitude more quantum gates than is feasible within the typical coherence times of current and near-term quantum processors. As an alternative, in this paper Paesani et al. report experimental results of a recently proposed adaptive Bayesian approach to quantum phase estimation and use it to simulate molecular energies on a silicon quantum photonic device. The approach is verified to be well suited to NISQ quantum processors by demonstrating its superior robustness to noise and decoherence compared to the iterative phase estimation algorithm. Their results show a promising route to unlocking the power of QPE much sooner than previously believed possible.
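To illustrate the general idea, here is a minimal grid-based sketch of adaptive Bayesian phase estimation. The likelihood model and the rule for adaptively choosing the experiment settings (M, theta) are standard textbook choices, not the exact protocol of Paesani et al.:

```python
# Minimal grid-based sketch of adaptive Bayesian phase estimation.
# Likelihood P(0 | phi) = (1 + cos(M*phi + theta)) / 2 is the standard
# iterative-PE model; the (M, theta) heuristic below is an assumption.
import numpy as np

rng = np.random.default_rng(7)
true_phase = 1.234                                   # unknown eigenphase
grid = np.linspace(0, 2 * np.pi, 2048, endpoint=False)
posterior = np.ones_like(grid) / grid.size           # flat prior over the phase

def prob_zero(phi, M, theta):
    # Probability of measuring outcome 0 in an iterative-PE-style experiment.
    return (1 + np.cos(M * phi + theta)) / 2

for _ in range(100):
    # Adapt the experiment: M grows as the posterior narrows; theta centers it.
    mean = np.angle(np.sum(posterior * np.exp(1j * grid)))
    spread = np.sqrt(np.sum(posterior * np.angle(np.exp(1j * (grid - mean))) ** 2))
    M = max(1, int(np.ceil(1.25 / max(spread, 1e-6))))
    theta = -M * mean

    # Simulate one shot on the "device", then apply Bayes' rule on the grid.
    outcome = int(rng.random() > prob_zero(true_phase, M, theta))
    like = prob_zero(grid, M, theta) if outcome == 0 else 1 - prob_zero(grid, M, theta)
    posterior *= like
    posterior /= posterior.sum()

estimate = np.angle(np.sum(posterior * np.exp(1j * grid))) % (2 * np.pi)
print(estimate)  # converges towards true_phase ~ 1.234
```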


Qu&Co comments on this publication:

At its core, the detailed understanding and prediction of complex chemical reaction mechanisms requires highly accurate electronic structure methods. For molecules with many energetically close-lying orbitals, even fewer than a hundred strongly correlated electrons are already out of reach for classical calculation methods that could achieve the required accuracy. In this paper, Reiher et al. use the open problem of biological nitrogen fixation in nitrogenase as an example to show how quantum computers can augment the classical computer simulations used to probe these reaction mechanisms, significantly increasing their accuracy and enabling hitherto intractable simulations. They demonstrate that quantum computers will be able to tackle important problems in chemistry without requiring exorbitant resources (in this case as little as 111 qubits and 1.0x10^14 T gates).
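To put that T count in perspective, here is our own back-of-the-envelope runtime arithmetic; the T-gate rates are assumed for illustration and are not from the paper:

```python
# Back-of-the-envelope wall-clock time for the quoted T count (rates are
# assumed for illustration only; they are not taken from the paper).
t_count = 1.0e14                   # T gates quoted for the nitrogenase problem
seconds_per_year = 3.15e7

for t_rate_hz in (1e3, 1e5, 1e7):  # assumed distilled T gates per second
    years = t_count / t_rate_hz / seconds_per_year
    print(f"{t_rate_hz:.0e} T/s -> {years:,.2f} years")
# 1e+03 T/s -> ~3,175 years; 1e+05 T/s -> ~32 years; 1e+07 T/s -> ~0.3 years
```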


Qu&Co comments on this publication:

Various heuristic quantum optimization approaches have been suggested for solving combinatorial optimization problems, since the NP hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually the performance of the heuristic approach is examined; however, an equally important aspect is its applicability to current hardware. During the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (10^-3 to 10^-4), which makes the computation inefficient.

This paper reviews existing approaches and develops new methods or improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines; thus, optimized circuits for those primitives should be of interest regardless of which heuristic is most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Rather than investigating how many times those primitives must be repeated, the focus is on the best strategies for realizing those primitives within a fault-tolerant cost model. The goal of this paper is to estimate the performance of an early universal quantum computer for key steps of combinatorial optimization.

These bottlenecks are compiled for several families of optimization problems, and the paper reports for how long, and for what system sizes, one can run these heuristics in the surface code given a range of resource budgets. The obtained results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting qubit surface code processors without significant improvements in the implementation of the surface code. The essential reason for this is the substantial constant factor slowdown between error-corrected quantum computation and classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way we realize error correction.
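The constant-factor argument can be made concrete with a simple crossover calculation (our illustration; the per-step times below are assumed, not taken from the paper):

```python
# Crossover arithmetic for a bare quadratic (Grover-like) speedup: the quantum
# heuristic takes sqrt(N) steps versus N classical steps, so with per-step
# times t_q and t_c an advantage requires sqrt(N) * t_q < N * t_c, i.e.
# N > (t_q / t_c)**2. The step times below are assumed for illustration.
t_c = 1e-9   # assumed classical update, ~1 ns on a conventional processor
t_q = 1e-4   # assumed error-corrected quantum update, ~0.1 ms

crossover_N = (t_q / t_c) ** 2                 # steps needed before any advantage
runtime_at_crossover = (crossover_N ** 0.5) * t_q

print(f"crossover at N = {crossover_N:.0e} steps")
print(f"runtime for either machine at the crossover: ~{runtime_at_crossover:.0f} s")
# Even with these optimistic numbers: N = 1e+10 steps and ~10 s of runtime
# before the quantum machine merely breaks even.
```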

For example, the authors calculate that to implement problems between N = 64 and N = 1024, hundreds of thousands of physical qubits are required when physical gate error rates are on the order of 10^-4, and sometimes over a million are required for physical gate error rates on the order of 10^-3. Even more concerning is that the number of updates achieved in a day (given realistic cycle times for the error correcting codes) is relatively low, on the order of about ten thousand updates for the smallest instances considered of the cheapest cost functions. With such overheads, these heuristics would need to yield dramatically better improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting qubit surface code processors.

