Qu&Co comments on this publication:
In this paper, the meta-VQE algorithm is presented, which is an adaptation of the Variational Quantum Eigensolver (VQE). VQE is a variational algorithm originally proposed for finding the ground-state energy of a given Hamiltonian by variationally minimizing its expectation value with a parametrized quantum circuit. The cost function of VQE is the expectation value of the model Hamiltonian; by the variational principle, this value is an upper bound on the ground-state energy, so the task reduces to minimizing it by tuning the parameters of the circuit.
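For concreteness, the following minimal sketch illustrates the VQE loop on a toy one-qubit Hamiltonian with a single Ry(θ) ansatz parameter; the Hamiltonian, ansatz, and optimizer here are illustrative choices, not those used in the paper.

```python
# Minimal VQE sketch on a toy one-qubit Hamiltonian (illustrative only).
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = Z + 0.5 * X  # toy model Hamiltonian

def ansatz(theta):
    # |psi(theta)> = Ry(theta)|0>
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def cost(params):
    psi = ansatz(params[0])
    return np.real(psi.conj() @ H @ psi)  # <psi|H|psi>, upper-bounds E0

result = minimize(cost, x0=[0.1], method="COBYLA")
print(result.fun, np.linalg.eigvalsh(H)[0])  # VQE estimate vs exact ground energy
```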
The meta-VQE algorithm is inspired by quantum machine learning (QML) algorithms and learns the ground-state energy profile of a parametrized Hamiltonian. First, the circuit is trained on a set of Hamiltonian parameter values, which are encoded in the gates of an encoding unitary. By minimizing a cost function built from the expectation values at all of these Hamiltonian training points, the algorithm extracts the optimal values of the variational parameters. The energy for other Hamiltonian parameter values can then be estimated by simply running the meta-VQE circuit with the parameters obtained in the minimization. In addition, the result of a meta-VQE training can be used as the starting point of a standard VQE algorithm instead of a random initialization; this variant is called the opt-meta-VQE.
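A toy sketch of this workflow is given below, assuming a one-parameter Hamiltonian family H(λ) = Z + λX and a linear encoding θ(λ) = wλ + b of the Hamiltonian parameter into a single gate angle; the paper itself uses multi-qubit ansätze and richer encodings.

```python
# Hedged meta-VQE sketch: the Hamiltonian parameter lam is encoded
# linearly into a gate angle, and (w, b) are trained over several
# Hamiltonians at once by summing their energy expectation values.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def hamiltonian(lam):
    return Z + lam * X  # parametrized family H(lam)

def state(theta):
    return np.array([np.cos(theta / 2), np.sin(theta / 2)], dtype=complex)

def energy(lam, w, b):
    psi = state(w * lam + b)  # encoding unitary: lam enters the gate angle
    return np.real(psi.conj() @ hamiltonian(lam) @ psi)

train_points = np.linspace(0.2, 1.0, 5)  # a few Hamiltonian training points

def meta_cost(params):
    w, b = params
    return sum(energy(lam, w, b) for lam in train_points)

opt = minimize(meta_cost, x0=[0.1, 0.1], method="COBYLA")
w, b = opt.x
# Generalization: estimate the energy at an unseen lam with no re-optimization;
# the angle w * lam + b could also seed a standard VQE (the opt-meta-VQE idea).
print(energy(1.5, w, b), np.linalg.eigvalsh(hamiltonian(1.5))[0])
```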
This technique can help the algorithm converge to the correct solution, whereas previous variants of VQE often suffer from convergence issues: the exponential growth of the Hilbert space, paired with a randomly initialized ansatz, typically leaves only a small probability of finding the ground state as the quantum system grows. The characteristic trait of the meta-VQE is that it can first explore the ground-state energies of the Hamiltonian parameter space with only a few training points, after which the result serves as the initial state for a precise VQE, yielding high precision. Furthermore, the meta-VQE can easily be combined with other VQE strategies to increase performance, and it captures global correlations from a few training points, alleviating the refined optimization of the individual points in a later step. The authors conclude that the meta-VQE can find the general energy shape but not an accurate value, in contrast to standard VQE. The opt-meta-VQE, however, proves valuable, achieving better results than standard VQE with random initialization.
Qu&Co comments on this publication:
Various heuristic quantum optimization approaches have been suggested for combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually the performance of the heuristic approach is examined; an equally important aspect, however, is its implementation on current hardware. In the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (10⁻³ to 10⁻⁴), which makes the computation inefficient.
This paper reviews existing approaches and develops new methods and improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines, so optimized circuits for those primitives are of interest regardless of which heuristic proves most effective in practice. In essentially all heuristic approaches to quantum optimization, a primitive is repeated many times in order to perform the optimization. Rather than investigating how many times those primitives must be repeated, the authors focus on the best strategies for realizing them within a fault-tolerant cost model. The goal of this paper is to estimate the performance of an early universal quantum computer on key steps of combinatorial optimization.
These bottlenecks are compiled for several families of optimization problems, and the paper reports for how long, and for what size systems, one can run these heuristics in the surface code given a range of resource budgets. The results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface-code processors without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown of error-corrected quantum computation relative to classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way error correction is realized.
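As a rough illustration of why a quadratic speedup struggles against this constant-factor slowdown, consider the back-of-envelope crossover estimate below; the step times are assumptions chosen for illustration (one fault-tolerant update per ~10 s, in line with the update rates discussed below, versus a nanosecond-scale classical step), not the paper's precise figures.

```python
import math

# Crossover arithmetic for a quadratic (Grover-like) speedup, under
# assumed step times: T classical steps vs ~sqrt(T) quantum steps.
t_q = 10.0   # assumed seconds per error-corrected quantum update
t_c = 1e-9   # assumed seconds per classical update

# Advantage requires sqrt(T) * t_q < T * t_c, i.e. T > (t_q / t_c)**2.
T = (t_q / t_c) ** 2
print(f"crossover at T ~ {T:.0e} classical steps")
print(f"quantum runtime at crossover: ~{math.sqrt(T) * t_q / 3.15e7:.0e} years")
```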
For example, the authors calculate that to implement problems between N = 64 and N = 1024, hundreds of thousands of physical qubits are required when physical gate error rates are on the order of 10⁻⁴, and sometimes over a million when they are on the order of 10⁻³. Even more concerning is that the number of updates achieved in a day (given realistic cycle times for the error-correcting codes) is relatively low, on the order of ten thousand for the smallest instances considered and the cheapest cost functions. With such overheads, these heuristics would need to yield dramatically larger improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface-code processors.
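The sketch below reproduces the flavor of such estimates using textbook surface-code rules of thumb, namely a logical error rate of roughly 0.1(p/p_th)^((d+1)/2) and about 2d² physical qubits per logical qubit; the threshold, target error rate, and constants are illustrative assumptions, not the paper's detailed cost model.

```python
# Back-of-envelope surface-code overhead; constants are rules of thumb,
# not the paper's detailed resource analysis.
def distance_needed(p_phys, p_target, p_th=1e-2):
    # smallest odd code distance d with estimated logical error <= p_target
    d = 3
    while 0.1 * (p_phys / p_th) ** ((d + 1) / 2) > p_target:
        d += 2
    return d

for p_phys in (1e-3, 1e-4):
    d = distance_needed(p_phys, p_target=1e-10)  # assumed target error rate
    logical = 1024                   # N = 1024 problem variables
    physical = logical * 2 * d * d   # ~2*d^2 physical qubits per logical qubit
    print(f"p={p_phys:.0e}: distance {d}, ~{physical:,} physical qubits")
```

Under these assumptions the count lands in the hundreds of thousands of physical qubits for 10⁻⁴ error rates and roughly triples at 10⁻³, consistent in order of magnitude with the paper's conclusions.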
Qu&Co comments on this publication:
One hallmark problem in computational linear algebra is binary linear least squares (BLLS), which is formally NP-hard. Efficient classical methods for this problem exist, but only with limited approximations to the solution. Quantum computing may solve it with a better approximation ratio and/or a faster runtime scaling. So far, this problem had only been considered on quantum annealers, by mapping it to a QUBO. In this paper, the problem is solved using a QAOA approach on the gate-based model of quantum computing. The performance is assessed on a wavefunction simulator, on a shot-noise simulator, and on the 5-qubit IBM cloud quantum device ibmq_london. As an outlook, BLLS may serve as a building block for other problems such as Non-negative Binary Matrix Factorization, or instances may be clubbed together for a fixed-point approximation of real variables. This paper was partially supervised by Vincent Elfving from Qu&Co.
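For readers unfamiliar with the mapping, the sketch below shows how a BLLS instance, minimizing ||Ax − b||² over binary x, reduces to a QUBO: since x_i² = x_i for binary variables, the linear term folds into the diagonal of the quadratic form. The data here are a random toy instance, not the paper's benchmarks.

```python
# BLLS -> QUBO: ||Ax - b||^2 = x^T (A^T A) x - 2 (A^T b)^T x + const
# for binary x, with the linear part absorbed into the QUBO diagonal.
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
A = rng.integers(-2, 3, size=(4, 3)).astype(float)
x_true = rng.integers(0, 2, size=3).astype(float)
b = A @ x_true  # consistent toy instance: x_true attains zero residual

Q = A.T @ A
Q[np.diag_indices_from(Q)] -= 2 * (A.T @ b)  # fold linear term into diagonal

# Brute-force the QUBO (feasible only for small n) to verify the mapping.
best = min(product([0.0, 1.0], repeat=3),
           key=lambda x: np.array(x) @ Q @ np.array(x))
print(best, x_true)  # expect the QUBO minimizer to coincide with x_true here
```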
Qu&Co comments on this publication:
Quantum computational supremacy (QCS) arguments have been put forward to demonstrate that quantum computers will soon outperform classical computers on a variety of tasks. However, in order to truly prove supremacy, several strict measures need to be taken: an appropriate algorithm must be selected, a quantum device to run it must be designed, the ability to verify the results of the calculation must be considered, and a complexity-theoretic argument must support the claim that a classical computer would be unable to perform the same computation. Quantum circuits on currently realized quantum computing chips might still be simulable on highly parallelized state-of-the-art classical supercomputers, so one can only make conjectures about QCS at the moment. Classical simulation of certain families of quantum circuits typically requires resources scaling worse than any polynomial in the size of the circuit, but such asymptotic statements do not by themselves fix the exact number of qubits these circuits must have for their classical simulation to be intractable on modern supercomputers.
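A simple memory-scaling estimate makes the brute-force side of this concrete: a dense statevector of n qubits needs 2ⁿ complex amplitudes. This bound alone does not settle intractability (the paper's conjectures are complexity-theoretic, and smarter-than-brute-force simulators exist), but it shows the scale involved.

```python
# Memory for a dense double-precision statevector: 2**n amplitudes,
# 16 bytes each. Qubit counts below are chosen for illustration.
for n in (30, 50, 208):
    print(f"{n} qubits: {2**n * 16 / 1e9:.3g} GB")
# 30 qubits fit in ~17 GB; 50 already need ~18 PB; 208 exceed any hardware.
```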
In this paper, three refined, fine-grained conjectures about quantum supremacy are presented, from which the authors calculate that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements for boson sampling are sufficient. Although noise in current quantum devices cannot be fully modeled, a lower bound on the runtime of any multiplicative-error classical simulation of all three algorithms is provided.
This paper provides a concrete estimate of the number of qubits required for three algorithms that have received much attention in the NISQ era. While the original work dates from 2018, the required qubit numbers have been recalculated in the newest version of the paper, which gives a good indication of how the fidelity of quantum chips has improved over the last two years, as well as of the latest understanding in complexity theory and the ongoing evolution of the classical competition.