Qu&Co in collaboration with our academic advisor Oleksandr Kyriienko at the University of Exeter has developed a proprietary quantum algorithm which promises a generic and efficient way to solve nonlinear differential equations. The algorithm is compatible with near-term quantum processors, with promising extensions for fault-tolerant implementation. Using a quantum feature map encoding, we define functions as expectation values of parametrized quantum circuits. We use automatic differentiation to represent function derivatives in an analytical form as differentiable quantum circuits (DQCs), thus avoiding inaccurate finite-difference procedures for calculating gradients. We describe a hybrid quantum-classical workflow where DQCs are trained to satisfy differential equations and specified boundary conditions. As a particular example, we show how this approach can implement a spectral method for solving differential equations in a high-dimensional feature space. From a technical perspective, we design a Chebyshev quantum feature map that offers a powerful basis set of fitting polynomials and possesses rich expressivity. We simulate the algorithm to solve an instance of the Navier-Stokes equations, and compute density, temperature and velocity profiles for the fluid flow in a convergent-divergent nozzle.
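To make the core idea concrete, here is a minimal one-qubit numpy sketch (our own illustration, not the implementation from the paper): a function is defined as the expectation value of a parametrized circuit with a Chebyshev-like feature map, and its derivative with respect to the input is obtained analytically via the parameter-shift rule instead of finite differences. The single variational angle theta_var is a stand-in for the trained circuit parameters.

```python
# Hedged sketch of the DQC idea: f(x) = <0| U^dag(x, theta) Z U(x, theta) |0>,
# with a Chebyshev-like feature map phi(x) = 2*arccos(x) and an analytic
# derivative from the parameter-shift rule (no finite differences).
import numpy as np

def ry(theta):
    """Single-qubit Y-rotation gate."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

Z = np.diag([1.0, -1.0])

def expval(phi, theta_var):
    """<0| U^dag Z U |0> with U = RY(theta_var) RY(phi)."""
    state = ry(theta_var) @ ry(phi) @ np.array([1.0, 0.0])
    return state @ Z @ state

def f(x, theta_var):
    """Function value with Chebyshev-like encoding angle phi(x) = 2*arccos(x)."""
    return expval(2.0 * np.arccos(x), theta_var)

def dfdx(x, theta_var):
    """Analytic derivative: parameter shift for d<Z>/dphi, then the chain rule."""
    phi = 2.0 * np.arccos(x)
    dE_dphi = 0.5 * (expval(phi + np.pi / 2, theta_var) - expval(phi - np.pi / 2, theta_var))
    return dE_dphi * (-2.0 / np.sqrt(1.0 - x**2))

x, theta = 0.3, 0.7
fd = (f(x + 1e-6, theta) - f(x - 1e-6, theta)) / 2e-6   # finite-difference sanity check
print(f(x, theta), dfdx(x, theta), fd)
```

In the full DQC workflow the variational parameters would be optimized so that such function and derivative evaluations jointly satisfy the differential equation and its boundary conditions.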
Enhancing machine learning applications with quantum computing is currently being investigated intensively, since it may demonstrate quantum advantage during the NISQ era. Enhancement typically takes the form of improving the training of existing classical models or improving inference in graphical models. Another route is the construction of quantum models that generate correlations between variables which are inefficient to represent classically (e.g. quantum neural networks). If the model leverages a quantum circuit that is hard to sample from classically, there is potential for a quantum advantage.

The main message of this work is to show quantitatively that when classical models are provided with some training data, even if those data were obtained from a quantum model that cannot be computed classically, they can reach a performance similar to that of the quantum model. The authors provide rigorous prediction error bounds for training classical and quantum ML methods based on kernel functions in order to learn quantum mechanical models. Kernel methods not only come with provable guarantees, but are also very flexible in the functions they can learn.

The main contribution of the authors' work lies in the development of quantum kernels and a guidebook for generating ML problems with a large separation between quantum and classical models. Prediction error bounds are used to quantify the separation between the prediction errors of quantum and classical ML models for a fixed amount of training data. Typically, the comparison is based on the geometric difference defined by the closest efficient classical ML model; in practice, however, one should consider the geometric difference with respect to a suite of optimized classical ML models. When the geometric difference is small, a classical ML method is guaranteed to provide similar or better prediction performance on the data set, independent of the function values or labels. When the geometric difference is large, there exists a data set on which the quantum ML model has a large prediction advantage.

The small geometric difference is a consequence of the exponentially large Hilbert space employed by existing quantum models, in which all inputs end up far apart. To circumvent this setback, the authors propose projecting the quantum states obtained by embedding the classical data back to an approximate classical representation, which enlarges the geometric difference. This allows the construction of a data set that demonstrates a large prediction advantage over common classical ML models in numerical experiments with up to 30 qubits. In that way, one can use a small quantum computer to generate efficiently verifiable ML problems that could be challenging for classical ML models.
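As an illustration of the projection step, here is a toy numpy sketch (our own, assuming a simple 3-qubit entangling feature map; the paper defines a family of projected kernels along these lines): each embedded state is reduced to its single-qubit density matrices, and an RBF-style kernel is built from the distances between these local classical representations.

```python
# Hedged sketch of a "projected" quantum kernel: embed each data point, keep
# only the single-qubit reduced density matrices, and build a kernel from
# distances between those local classical representations.
import numpy as np

I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    """Embed a 3-dimensional data point into a 3-qubit state (toy feature map)."""
    u = np.kron(np.kron(ry(x[0]), ry(x[1])), ry(x[2]))
    u = np.kron(I2, CNOT) @ np.kron(CNOT, I2) @ u      # entangling layer
    return u @ np.eye(8)[0]                            # apply to |000>

def reduced_density_matrices(psi, n=3):
    """Single-qubit reduced density matrices rho_k = Tr_{j != k} |psi><psi|."""
    t = psi.reshape((2,) * n)
    rhos = []
    for k in range(n):
        a = np.moveaxis(t, k, 0).reshape(2, -1)
        rhos.append(a @ a.conj().T)
    return rhos

def projected_kernel(X, gamma=1.0):
    """k(x, x') = exp(-gamma * sum_k ||rho_k(x) - rho_k(x')||_F^2)."""
    rdms = [reduced_density_matrices(feature_state(x)) for x in X]
    K = np.zeros((len(X), len(X)))
    for i in range(len(X)):
        for j in range(len(X)):
            d = sum(np.linalg.norm(a - b) ** 2 for a, b in zip(rdms[i], rdms[j]))
            K[i, j] = np.exp(-gamma * d)
    return K

X = np.random.default_rng(0).uniform(0, np.pi, size=(5, 3))
print(projected_kernel(X).round(3))
```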
There is currently a big effort to prove the existence of quantum advantage on NISQ devices, with machine learning among the typical applications due to its wide applicability. However, the question has been raised whether the advantage demonstrated in such applications arises from the nature of quantum computing or from the way the data is provided. Indeed, recent works have shown that, when classical algorithms are given analogous sampling access to the data, they can reach a performance similar to their quantum counterparts. Therefore, a fair study of quantum advantage in this setting should assume only classical access to data for both the classical and the quantum algorithm.

Most quantum algorithms being explored at the moment are variational in nature, where a circuit is selected from a parametrized circuit family via classical optimization. Such algorithms are heuristic, however, and there is no formal evidence that they exhibit a genuine quantum advantage. In their work, the authors describe an exponential quantum speedup via the use of a quantum-enhanced feature space, where each data point is mapped non-linearly to a quantum state and then classified by a linear classifier in the high-dimensional Hilbert space. The classification is achieved with a support vector machine (SVM) whose kernel matrix is obtained by measuring the pairwise inner products of the feature states, a procedure known as quantum kernel estimation (QKE). This kernel matrix is then given to a classical optimizer that efficiently finds the linear classifier optimally separating the training data in feature space by running a convex quadratic program.
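The following is a minimal, classically simulated sketch of this pipeline (not the paper's DLP-based construction, and with a toy two-qubit feature map of our choosing): the kernel entry is the fidelity between feature states, which QKE would estimate from measurement samples on hardware, and the resulting Gram matrix is handed to a classical SVM as a precomputed kernel.

```python
# Hedged SVM-QKE sketch: exact fidelity kernel K_ij = |<phi(x_i)|phi(x_j)>|^2
# computed classically for a toy feature map, then fed to a classical SVM.
import numpy as np
from sklearn.svm import SVC

CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def feature_state(x):
    """Toy 2-qubit feature map for a 2-dimensional data point (an assumption,
    not the feature map used in the paper)."""
    layer = np.kron(ry(x[0]), ry(x[1]))
    return layer @ CNOT @ layer @ np.eye(4)[0]

def quantum_kernel(A, B):
    """K_ij = |<phi(a_i)|phi(b_j)>|^2, the quantity QKE estimates on hardware."""
    FA = np.array([feature_state(a) for a in A])
    FB = np.array([feature_state(b) for b in B])
    return np.abs(FA @ FB.conj().T) ** 2

rng = np.random.default_rng(1)
X = rng.uniform(0, np.pi, size=(40, 2))
y = (np.sin(X[:, 0]) * np.sin(X[:, 1]) > 0.25).astype(int)   # toy labels

clf = SVC(kernel="precomputed", C=1.0).fit(quantum_kernel(X, X), y)
print("train accuracy:", clf.score(quantum_kernel(X, X), y))
```

The "precomputed" kernel interface is exactly where the quantum subroutine plugs in: only the Gram matrix changes, while the convex SVM training remains entirely classical.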

The advantage of such a quantum learner comes from its ability to recognize classically intractable, complex patterns using a feature map. The work combines a generalization of soft-margin classifiers with a rigorous bound on the misclassification error of the SVM-QKE algorithm, thereby providing a quantum advantage for the quantum classifier with guaranteed high accuracy. Furthermore, it is proven that this SVM classifier can learn the optimal separator in the exponentially large feature space while remaining robust against additive sampling errors, thanks to the error-mitigation techniques used.

Finally, the classification problem exhibiting the exponential quantum speedup is based on the discrete logarithm problem (DLP), and it is proven that no efficient classical algorithm can achieve an accuracy that is inverse-polynomially better than random guessing, assuming the widely believed classical hardness of DLP.

The results from this paper are relevant to a broader class of quantum machine learning algorithms that exploit feature maps and aim to avoid the input-output problem. Although this particular problem is not practically motivated, these results lay a positive theoretical foundation for the search for practical quantum advantage in machine learning.

Qu&Co comments on this publication:

In this paper, the meta-VQE algorithm is presented, an adaptation of the Variational Quantum Eigensolver (VQE). VQE is a variational algorithm originally proposed for finding the ground state energy of a given Hamiltonian by variationally minimizing its expectation value with a parametrized quantum circuit. The cost function of VQE is the expectation value of the model Hamiltonian. The variational principle states that this value is an upper bound on the ground state energy, so the task reduces to minimizing this value by fine-tuning the parameters of the circuit.

The meta-VQE algorithm is inspired by quantum machine learning (QML) and is able to learn the ground state energy profile of a parametrized Hamiltonian. First, the meta-VQE circuit is trained on a set of Hamiltonian parameters, which are encoded in the gates of an encoding unitary. By designing a cost function over the expectation values at all these Hamiltonian training points, the algorithm extracts the optimal values of the variational parameters. One can then compute the energy for other Hamiltonian values by simply running the meta-VQE circuit with the parameters obtained in the minimization. In addition, one can use the result of meta-VQE training as the starting point of a standard VQE algorithm (opt-meta-VQE) instead of a random initialization.
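A minimal one-qubit numpy/scipy sketch of this idea is given below (an illustrative toy Hamiltonian of our choosing, not one from the paper): the Hamiltonian parameter lam is encoded linearly into the rotation angle of the ansatz, and the variational weights are trained on a few values of lam at once.

```python
# Hedged meta-VQE sketch for H(lam) = lam*Z + (1 - lam)*X on one qubit.
# The encoding ansatz |psi(lam; w)> = RY(w0 + w1*lam)|0> is trained jointly
# over several training values of lam, then evaluated at unseen values.
import numpy as np
from scipy.optimize import minimize

X = np.array([[0., 1.], [1., 0.]])
Z = np.diag([1., -1.])

def hamiltonian(lam):
    return lam * Z + (1.0 - lam) * X

def ansatz_state(lam, w):
    theta = w[0] + w[1] * lam          # Hamiltonian parameter encoded in the gate
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def energy(lam, w):
    psi = ansatz_state(lam, w)
    return psi @ hamiltonian(lam) @ psi

train_lams = np.linspace(0.0, 1.0, 5)                          # a few training points
cost = lambda w: sum(energy(l, w) for l in train_lams)         # summed cost function
w_opt = minimize(cost, x0=np.array([0.1, 0.1]), method="Nelder-Mead").x

# The trained circuit now generalizes across the Hamiltonian parameter.
for lam in [0.15, 0.55, 0.9]:
    exact = -np.sqrt(lam**2 + (1.0 - lam)**2)                  # exact ground energy
    print(lam, energy(lam, w_opt), exact)
```

The trained angles reproduce the shape of the energy profile; in the opt-meta-VQE variant, each such angle would then seed a standard per-point VQE refinement.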

This technique can help the algorithm converge to the correct solution, whereas previous variants of VQE often suffer from convergence issues due to the exponential growth of the Hilbert space paired with a randomly initialized ansatz, typically resulting in a limited probability of finding the ground state as the quantum system grows. The characteristic trait of the meta-VQE is that it can first explore the ground state energies over the Hamiltonian parameter space with only a few training points and then use the result as an initial state for a precise VQE, resulting in high precision. Furthermore, meta-VQE can easily be adapted to become part of other VQE strategies to increase performance. Moreover, the meta-VQE is able to capture global correlations with a few training points, alleviating the need for refined optimization of the individual points in a later step. The authors conclude that the meta-VQE can find the general energy shape but not an accurate value, in contrast to standard VQE. However, the opt-meta-VQE proves valuable, achieving better results than standard VQE with random initialization.

A prospective application of quantum computing is solving quantum chemistry problems; however, obtaining exact solutions is difficult due to the lack of a general method for obtaining them. Typically the solution lies in finding the ground state energy, even though the energy alone is not descriptive enough to fully characterize all desired properties of a system. In order to find such properties, many measurements of the wavefunction are required. These measurements are expensive, because the wavefunction cannot be copied and must often be re-prepared before a second measurement is performed. Finding the full wavefunction would require exponentially many measurements, so one option is to encode many solutions into one measurement by using a machine-learned (ML) model. Training the ML model requires finding exact quantities at several different external potentials. Alternatively, one can use density functional theory (DFT) to replace the wavefunction with the one-body density, n(r), which has far fewer variables. When DFT is used, the universal functional, F[n], must be found instead of the Hamiltonian. The quantities required for the classical user to find self-consistent solutions are the exact functional (determining the energy) and the functional derivative. So, in addition to finding F[n], one must also find some other quantity such as the density, n(r), or the Kohn-Sham (KS) potential, vs(r). With these components, one can fully characterize a quantum ground state and solve for other measurable quantities.

The authors propose an algorithm that finds the ML model for F[n] on the quantum computer when a ground-state wavefunction is available. The algorithm leaves the wavefunction largely undisturbed, so it can be used as the starting state for another system, greatly reducing the prefactor required to solve other systems; a quantum counting algorithm is used to extract descriptive quantities such as the density. Most of the algorithm is kept entirely on the quantum computer to motivate future improvements in speed, but the counting algorithm does allow information to be output classically. Moreover, the authors demonstrate that the exact Kohn-Sham potential can be found faster using a gradient evaluated on a cost function for the Kohn-Sham system.

The novelty of the algorithm proposed in this work is that it limits the number of measurements and re-preparations of the wavefunction, especially in the case of time-dependent quantities. Since no algorithm for the general case is exponentially better than the proposed one, limiting the number of measurements and re-preparations of the wavefunction is the best one can achieve. The proposed algorithm combines several known algorithms, including quantum phase estimation, quantum amplitude estimation, and quantum gradient methods, which are used iteratively to train a machine-learned model.

Qu&Co comments on this publication:

Various heuristic quantum optimization approaches have been suggested for combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually only the performance of the heuristic approach is examined; however, an equally important aspect is its applicability to current hardware. During the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (10^-3 – 10^-4), which makes the computation inefficient.

This paper reviews existing approaches and develops new methods and improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines, so optimized circuits for those primitives should be of interest regardless of which heuristic is most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Instead of investigating how many times those primitives must be repeated, the focus here is on the best strategies for realizing those primitives within a fault-tolerant cost model. The goal of this paper is to estimate the performance of an early universal quantum computer on the key steps of combinatorial optimization.

These bottlenecks are compiled for several families of optimization problems, and it is reported for how long and for what system sizes one can run these heuristics within the surface code, given a range of resource budgets. The results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface code processors without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown between error-corrected quantum computation and classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way we realize error correction.

For example, the authors calculate that to implement problems between N = 64 and N = 1024, hundreds of thousands of physical qubits are required when physical gate error rates are on the order of 10^-4, and sometimes over a million are required for physical gate error rates on the order of 10^-3. Even more concerning is that the number of updates achieved in a day (given realistic cycle times for the error-correcting codes) is relatively low, on the order of about ten thousand updates for the smallest instances considered of the cheapest cost functions. With such overheads, these heuristics would need to yield dramatically better improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface code processors.

Qu&Co comments on this publication:

One hallmark problem in computational linear algebra is binary linear least squares (BLLS), which is formally in the NP-hard complexity class. Efficient classical methods for solving this problem exist, but they offer only limited approximations to the solution. Quantum computing may solve such problems with a better approximation ratio and/or a faster runtime scaling. So far, this problem has only been considered on quantum annealers, by mapping it to a QUBO. In this paper, the problem is solved using a QAOA approach in the gate-based model of quantum computing. The performance is assessed on a wavefunction simulator, on a shot-noise simulator, and on the 5-qubit IBM cloud quantum device ibmq_london. As an outlook, BLLS may serve as a building block for other problems such as Non-negative Binary Matrix Factorization, or its binary variables may be combined into a fixed-point approximation of real variables. This paper was partially supervised by Vincent Elfving from Qu & Co.
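For readers unfamiliar with the mapping step, here is a hedged sketch of the standard BLLS-to-QUBO reduction that such approaches build on (the QAOA circuit itself is not shown): because x_i in {0,1} implies x_i^2 = x_i, the linear term of ||Ax - b||^2 folds into the diagonal of the QUBO matrix.

```python
# Hedged sketch: map binary linear least squares ||Ax - b||^2 to a QUBO matrix Q
# such that x^T Q x = ||Ax - b||^2 - b^T b for binary x, then verify by brute force.
import numpy as np
from itertools import product

def blls_qubo(A, b):
    """Return Q with off-diagonals from A^T A and the linear term on the diagonal."""
    Q = A.T @ A
    Q = Q.copy()
    Q[np.diag_indices_from(Q)] += -2.0 * (A.T @ b)   # absorb linear term, since x_i^2 = x_i
    return Q

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 4))
b = rng.normal(size=6)
Q = blls_qubo(A, b)

# Brute-force check that both objectives share the same binary minimizer.
best_lsq = min(product([0, 1], repeat=4), key=lambda x: np.sum((A @ np.array(x) - b) ** 2))
best_qubo = min(product([0, 1], repeat=4), key=lambda x: np.array(x) @ Q @ np.array(x))
print(best_lsq, best_qubo)
```

A QAOA (or annealing) solver then minimizes the resulting quadratic form over binary strings instead of the original least-squares objective.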

Qu&Co comments on this publication:

Quantum computational supremacy (QCS) arguments aim to demonstrate that quantum computers will soon outperform classical computers on a variety of tasks. However, in order to truly prove supremacy, several strict requirements need to be met: an appropriate algorithm must be selected, a quantum device to run the algorithm must be designed, the ability to verify the results of the calculation must be considered, and a complexity-theoretic argument must support the claim that a classical computer would be unable to perform the same task. Quantum circuits running on currently realized quantum chips might still be simulable on highly parallelized state-of-the-art classical supercomputers; therefore one can only make conjectures about QCS at the moment. Complexity-theoretic results typically state that classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuit, but such statements by themselves do not tell us exactly how many qubits these circuits must have for their classical simulation to be intractable on modern classical supercomputers.

In this paper, three refined, fine-grained conjectures about quantum supremacy are provided, and it is calculated that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements are sufficient. Although noise in current quantum devices cannot be fully accounted for, a lower bound on the runtime of any multiplicative-error classical simulation of all three circuit families is provided.

This paper provides a concrete estimate of the number of qubits required for three algorithms that have gained a lot of attention during the NISQ era. While the original work stems from 2018, the number of qubits required has been recalculated in the newest version of the paper, which gives a good indication of how the fidelity of quantum chips has improved over the last two years, as well as of the latest understanding in complexity theory and the ongoing evolution of classical competition.

Qu&Co comments on this publication:

The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated due to its potential during the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, after which p layers of unitaries U(C,γ) and U(B,β) are alternately applied; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
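To make the alternating-layer structure explicit, here is a small numpy sketch of the QAOA state preparation for Max-Cut on a toy 4-node graph (our own illustration; the angles gamma and beta are fixed here rather than optimized as they would be in the full algorithm).

```python
# Hedged QAOA sketch: |gamma,beta> = prod_l U(B,beta_l) U(C,gamma_l) |+>^n for
# Max-Cut on a 4-cycle, simulated as a statevector.
import numpy as np
from itertools import product

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]      # toy graph: a 4-cycle
n = 4

# Cost value C(z) for each computational basis string z (number of cut edges).
basis = list(product([0, 1], repeat=n))
C = np.array([sum(z[i] != z[j] for i, j in edges) for z in basis], dtype=float)

def qaoa_state(gammas, betas):
    psi = np.full(2**n, 1.0 / np.sqrt(2**n), dtype=complex)   # |+>^n product state
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * C) * psi                   # U(C,gamma) is diagonal
        U1 = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X  # exp(-i beta X)
        t = psi.reshape((2,) * n)
        for k in range(n):                                     # U(B,beta) qubit by qubit
            t = np.tensordot(U1, t, axes=([1], [k]))
            t = np.moveaxis(t, 0, k)
        psi = t.reshape(-1)
    return psi

psi = qaoa_state(gammas=[0.8], betas=[0.4])                   # a single p = 1 layer
expected_cut = float(np.real(np.conj(psi) @ (C * psi)))
print("expected cut value:", expected_cut, "max cut:", C.max())
```

In practice the outer classical loop would tune (gamma, beta) to maximize the expected cut, and the final state would be sampled to read out candidate solutions.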

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the problem studied is Maximum Independent Set (MIS) on random graphs: finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but it is shown that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w·log(n)/log(d/ln(2)) for any w < 1, with d large enough. The quantum algorithm consists of p unitaries that each respect the locality of the underlying graph. With a fixed average degree d, this means that each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph these influence spheres do not intersect, and the authors show that measurements of such qubits are uncorrelated; however, if p is large enough that d^p exceeds n, their arguments do not apply and there is no indication that the QAOA will fail.
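To get a feel for the scales in this bound, here is a back-of-the-envelope calculation (illustrative numbers chosen by us, not taken from the papers): it evaluates the depth below which the barrier applies, and the depth at which the influence sphere of roughly d^p qubits covers the whole graph.

```python
# Hedged arithmetic sketch of the depth barrier 2p < w*log(n)/log(d/ln(2)) (w < 1)
# and of the depth at which the d^p light cone covers an n-qubit graph.
import math

n, d = 10**6, 5                               # illustrative: one million qubits, average degree 5
barrier_p = math.log(n) / (2 * math.log(d / math.log(2)))   # largest p the argument excludes
covering_p = math.log(n) / math.log(d)                       # p at which d^p ~ n
print(f"barrier applies while p < {barrier_p:.1f}; light cone covers the graph near p ~ {covering_p:.1f}")
```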

The Overlap Gap Property (OGP) states that for a given random graph the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log n, the QAOA cannot do better than finding an independent set of size 0.854 times the optimal, for large d. Because the logarithm grows slowly, even at one million qubits the authors can only show that the algorithm is blocked if p is in the single digits. At higher p the algorithm "sees" the whole graph and there is no indication that its performance is limited.

Worst-case instances can easily be constructed to exploit the QAOA's weaknesses. By construction of the operators C and B, the QAOA is an inherently local quantum algorithm: conjugating a single-qubit operator by the shallow circuit produces an operator involving only that qubit and those connected to it on the graph. This can be exploited to construct examples where the QAOA's performance is provably below optimal. For example, for Max-Cut the authors show that when p is a small enough constant times log(n), the approximation ratio is no better than 1/2 for large d, and for MIS the approximation ratio goes to 0 at large d.

This is an important result for the problem of MIS on random graphs that, although not directly generalizable to other problems, is still valuable for establishing bounds for problems tackled with the QAOA. Knowing that in the average case for MIS the QAOA needs to see the whole graph, and therefore requires a large p, will change the way the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Qu&Co comments on this publication:

At Qu&Co we have always restrained ourselves from reacting to exaggerated claims about the short-term potential of quantum computing. Rather, we have focused on our scientifically rigorous work to advance the field of quantum, as we strongly believe in its long-term potential. However, we draw the line at quantum being pushed as a short-term solution for researchers working on COVID, as in this WSJ article in which a quantum hardware manufacturer offers free hardware access to researchers studying COVID, stating 'we have a fairly unique system that could add value'. Although this offer could be a misplaced April-fools joke, we want to stress that, although quantum has strong long-term potential, there is zero chance it will provide any short-term value for COVID research. Therefore, no serious researchers working on the current pandemic should be distracted by this offer. If you are determined to use novel methods to solve today's combinatorial optimisation problems, perhaps try simulated annealing on a purpose-built classical processor. And of course, if your time horizon is >2 years and you want to work on collaborative quantum-algorithm R&D, without distracting scarce COVID R&D staff, we are here to help. Stay safe and focused!

Qu&Co comments on this publication:

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine learning problems that involve a large classical data set. However, existing quantum algorithms use the trivial approach of turning large classical datasets into either quantum oracles or quantum states, which is so expensive that it negates any possible quantum advantage. Such quantum algorithms target problems in which the classical runtime scales rapidly with the input size, perhaps exponentially. To achieve a quantum speedup with algorithms like Grover search, a "quantum RAM" has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and building one might run into the same challenges that quantum computer hardware faces, they have the potential to provide significant speedups for applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model. The coreset can be constructed either by a classical computer alone or by a classical computer in combination with a quantum computer, using quantum measurements.
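As a concrete feel for what such a data-reduction step does, here is a hedged sketch of a generic importance-sampling coreset for the k-means loss (a lightweight construction we use for illustration, not necessarily the one adopted by the authors): a small weighted subset of X approximates the clustering loss of the full data set for any candidate model.

```python
# Hedged coreset sketch: sample a small weighted subset of X whose weighted
# k-means loss approximates the loss of the full data set for any set of centers.
import numpy as np

def lightweight_coreset(X, m, rng):
    """Importance sampling mixing a uniform term and a distance-to-mean term."""
    d2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / len(X) + 0.5 * d2 / d2.sum()
    idx = rng.choice(len(X), size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])                 # unbiased importance weights
    return X[idx], weights

def kmeans_loss(X, centers, weights=None):
    """(Weighted) sum of squared distances to the nearest center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).min(axis=1)
    return d2.sum() if weights is None else (weights * d2).sum()

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 2)) + rng.choice([-4, 4], size=(5000, 1))   # toy data set
C, w = lightweight_coreset(X, m=200, rng=rng)

centers = rng.normal(size=(3, 2))                # any candidate model from the space Y
print(kmeans_loss(X, centers), kmeans_loss(C, centers, w))
```

In the hybrid setting described above, only the small weighted subset (C, w) would need to be handed to the quantum routine that searches over the model space Y.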

The main message of this work is that, in order to avoid losing the quantum speedup in 'big-data' applications, techniques such as data reduction are required so that the time for loading and storing the data set remains limited. Also, non-fault-tolerant quantum algorithms should be designed so that they do not require an excessive number of gates, allowing the algorithm to run before the qubits lose their coherence and invalidate the result. The goal of the paper is to draw attention to the problems that arise from this approach, such as testing for quantum advantage when data reduction is used, to explore general data reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Qu&Co comments on this publication:

Quantum computers were initially proposed to efficiently simulate quantum mechanical systems with an exponential speedup compared to classical computers. We are currently in the noisy intermediate-scale quantum (NISQ) era, which means quantum chips still have a small number of qubits. This prohibits straightforward quantum simulations of realistic molecules and materials, whose description requires hundreds of atoms and thousands to millions of degrees of freedom to represent the electronic wavefunctions. One research direction which attempts to bypass this restriction is the development of hybrid quantum-classical methods where the quantum computation is restricted to a small portion of the system.

In this paper, a quantum embedding theory is proposed for the calculation of strongly-correlated electronic states of active regions, while the rest of the system is described with density functional theory (DFT). DFT (and its various approximations) has been extremely successful in predicting numerous properties of solids, liquids and molecules, and in providing key interpretations of a variety of experimental results, but it is often inadequate for describing strongly-correlated electronic states. The novel theory proposed in this paper is built on DFT and does not require the explicit evaluation of virtual electronic states, making the method scalable to materials with thousands of electrons. It also includes the effect of exchange-correlation interactions of the environment on the active regions, thus going beyond commonly adopted approximations in conventional DFT.

The proposed quantum embedding theory utilizes a classical and a quantum algorithm to solve the Hamiltonian that describes the problem, and yields results in good agreement with existing experimental measurements and with still-tractable computations on classical computing architectures. The theory is tested on systems relevant to solid-state quantum information technologies, which exhibit strongly-correlated electronic states. In this way, the authors show how the hybrid quantum-classical approach incorporating DFT enables the study of large-scale material systems while adding the strongly-correlated dynamics analysis which the quantum simulation algorithm can provide.
