Displaying items by tag: NISQ algorithms

In the Noisy Intermediate-Scale Quantum (NISQ) era, in the absence of large-scale quantum error correction, the number of gates that can be applied while maintaining computational coherence is strongly limited by hardware noise and decoherence. To alleviate some of these detrimental effects, current generations of quantum algorithms often rely on a hybrid classical-quantum approach. Such approaches consider a trial quantum state (ansatz state) with a tractable number of parameters and a relatively short circuit depth. These parameters are then optimized to approximate a target state as accurately as possible. In most applications demonstrated so far, the target state was variationally optimized to represent the lowest-energy eigenstate (ground state) of some quantum Hamiltonian.

However, one can also envision simulating unitary time evolution (or ‘dynamics’) with such variational algorithms. The authors of today’s paper first reference the Time-Dependent Variational Algorithm (TDVA), which encodes the state in a variational circuit and iteratively updates the parameters by solving the corresponding equation of motion. A significant drawback of that existing algorithm, however, is its expensive quadratic cost in the total number of variational parameters.

To tackle this problem, the authors introduce a novel hybrid algorithm to simulate the real-time evolution of quantum systems using parameterized quantum circuits. Their method, called "projected Variational Quantum Dynamics" (p-VQD), realizes an iterative, global projection of the exact time evolution onto the parameterized manifold. In the small time-step limit, this approach is equivalent to McLachlan’s variational principle, while optimizing all parameters at once. The algorithm is shown to overcome the drawbacks of existing approaches, as it is both global (it optimizes all parameters at once) and efficient (it scales linearly with the number of parameters). Moreover, it does not require auxiliary (ancilla) qubits, and the depth of the circuit remains constant throughout the simulation. The authors use circuit differentiation to compute gradients analytically and feed them into a gradient-descent optimization.
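To make the projection idea concrete, the following minimal sketch (our own illustration, not the authors' implementation) simulates a p-VQD-style step classically for a single qubit with a toy Hamiltonian H = X and a two-parameter Rx-Rz ansatz. The fidelity between the variational state and one exact time step is maximized by simple finite-difference ascent; the paper instead evaluates this overlap on the quantum device and uses analytic gradients.

```python
import numpy as np
from scipy.linalg import expm

# Toy, classical (state-vector) sketch of the p-VQD projection step, not the authors' code.
# Single qubit, H = X, ansatz |psi(theta)> = Rz(theta[1]) Rx(theta[0]) |0>  (our choices).
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
H = X

def ansatz(theta):
    rx = np.cos(theta[0] / 2) * np.eye(2) - 1j * np.sin(theta[0] / 2) * X
    rz = np.cos(theta[1] / 2) * np.eye(2) - 1j * np.sin(theta[1] / 2) * Z
    return rz @ rx @ np.array([1, 0], dtype=complex)

def fidelity(theta_new, theta_old, dt):
    """|<psi(theta_new)| exp(-i H dt) |psi(theta_old)>|^2, the overlap p-VQD maximizes."""
    target = expm(-1j * H * dt) @ ansatz(theta_old)
    return abs(np.vdot(ansatz(theta_new), target)) ** 2

def pvqd_step(theta, dt, lr=0.5, iters=200, eps=1e-6):
    """Project one exact time step back onto the variational manifold by maximizing
    the fidelity over the new parameters (finite differences here for brevity)."""
    theta_new = theta.copy()
    for _ in range(iters):
        grad = np.zeros_like(theta_new)
        for k in range(len(theta_new)):
            shift = np.zeros_like(theta_new)
            shift[k] = eps
            grad[k] = (fidelity(theta_new + shift, theta, dt)
                       - fidelity(theta_new - shift, theta, dt)) / (2 * eps)
        theta_new = theta_new + lr * grad
    return theta_new

theta = np.array([0.1, 0.1])
for _ in range(10):                      # ten time steps of size dt = 0.05
    theta = pvqd_step(theta, dt=0.05)
print("final parameters:", theta)
```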

This global approach potentially extends the scope of existing efficient variational methods, which instead typically rely on the iterative optimization of a restricted subset of variational parameters. The authors claim that this approach can be particularly advantageous over existing global optimization algorithms based on the time-dependent variational principle.

They show that the algorithm is asymptotically more hardware efficient than the standard variational algorithm while retaining higher accuracy. A current drawback of the method is that the circuit constructed on the quantum device is approximately twice as deep as the ansatz used to represent the system. However, by suitably controlling the number of two-qubit gates in the particular ansatz of choice, the authors comment that p-VQD can already be implemented to simulate small quantum systems on available devices.

One possible application of the approach used in this work is to study the dynamical properties of two-dimensional interacting systems, which is a notoriously difficult problem for classical computation. As with all other variational algorithms, the choice of the right parametrization is fundamental for the algorithm to succeed. In this sense, having an efficient quantum algorithm to perform variational time evolution is essential for comparison with classical results obtained from variational states based on tensor networks or neural networks.
Published in Blog
In this edition of Active Quantum Research Areas (AQRAs), we discuss recent research on barren plateaus in variational quantum algorithms.

Parametrized quantum circuits (PQCs) are employed in Quantum Neural Networks (QNNs) and Variational Quantum Algorithms (VQAs), two widely researched families of algorithms that may allow for near-term quantum advantage. QNNs have recently been proposed as generalizations of classical neural networks that can potentially analyse quantum data more efficiently. VQAs have been proposed as algorithms more resilient to noise and therefore suited to the NISQ era.

Both algorithms may rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring output(s) of a quantum processor. Training PQCs involves a hybrid quantum-classical optimization loop. Typically, the problem is encoded in a cost (or loss) function that is ideally efficient to evaluate on a quantum computer but computationally expensive for a classical one. While the quantum computer evaluates the cost, a classical optimizer updates some (usually continuous) parameters associated with the quantum operations. PQCs with a fixed gate structure are often referred to as variational ansätze.

A 'barren plateau' is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits; the term was first coined in the context of quantum circuits in [McClean2018]. It has been established that the onset of a barren plateau regime depends on the cost function, although the particular behaviour has been demonstrated only for certain classes of cost functions. Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits, so that exponentially high precision is needed to navigate through the landscape. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. Even if one manages to avoid these barren plateaus, there is the additional difficulty of optimizing in the presence of the hardware noise that defines NISQ devices, which is expected to modify the cost landscape.
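To get a feeling for the effect, the following self-contained sketch (our toy illustration, not the construction analysed in [McClean2018]) estimates the variance of a single parameter-shift gradient for random layered Ry/CZ circuits with a global Z⊗…⊗Z cost operator; the variance shrinks rapidly as qubits are added.

```python
import numpy as np

# Toy numerical illustration of a barren plateau: variance of one gradient component
# over random parameter draws, for a global cost operator, as a function of qubit number.
I2 = np.eye(2, dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Zg = np.array([[1, 0], [0, -1]], dtype=complex)
CZ = np.diag([1, 1, 1, -1]).astype(complex)

def kron_list(ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def ry(t):
    return np.cos(t / 2) * I2 - 1j * np.sin(t / 2) * Y

def entangler(n):
    """Chain of CZ gates on neighbouring qubits."""
    U = np.eye(2 ** n, dtype=complex)
    for q in range(n - 1):
        U = kron_list([np.eye(2 ** q), CZ, np.eye(2 ** (n - q - 2))]) @ U
    return U

def cost(thetas, n, obs, ent):
    """<psi| obs |psi> for layered Ry rotations + CZ entanglers, starting from |0...0>."""
    state = np.zeros(2 ** n, dtype=complex)
    state[0] = 1.0
    for layer in thetas:
        state = ent @ (kron_list([ry(t) for t in layer]) @ state)
    return np.real(np.vdot(state, obs @ state))

def grad_variance(n, layers=20, samples=100, seed=0):
    rng = np.random.default_rng(seed)
    obs = kron_list([Zg] * n)                         # global cost operator Z x ... x Z
    ent = entangler(n)
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=(layers, n))
        plus, minus = th.copy(), th.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        # parameter-shift rule for the first rotation angle
        grads.append(0.5 * (cost(plus, n, obs, ent) - cost(minus, n, obs, ent)))
    return np.var(grads)

for n in range(2, 8):
    print(n, grad_variance(n))                        # variance drops quickly with n
```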

The classical optimization step, integral to these algorithms, can be implemented using either gradient-free or gradient-based methods. At first glance, the former seem more suitable, since the evaluation of the cost function is subject to imperfections. However, further research developed methods to evaluate gradients analytically (i.e. without relying on finite differences), and access to gradients was shown to speed up convergence to local minima.
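The best-known analytic recipe is the parameter-shift rule; the snippet below illustrates it for a single Ry rotation measured in the Z basis (a textbook example of ours, not tied to any of the papers discussed here).

```python
import numpy as np

# Parameter-shift rule for a gate exp(-i*theta*Y/2): dC/dtheta = [C(theta+pi/2) - C(theta-pi/2)] / 2.
# Whether this exact two-point rule applies depends on the generator of the gate.
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def cost(theta):
    """C(theta) = <0| Ry(theta)^dagger Z Ry(theta) |0> = cos(theta)."""
    ry = np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * Y
    state = ry @ np.array([1, 0], dtype=complex)
    return np.real(np.vdot(state, Z @ state))

theta = 0.7
analytic = 0.5 * (cost(theta + np.pi / 2) - cost(theta - np.pi / 2))
print(analytic, -np.sin(theta))   # both equal -sin(theta)
```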

In the absence of a barren plateau, the determination of a minimizing direction in the cost function landscape does not require an exponentially large precision, meaning that one can always navigate through the landscape by measuring expectation values with a precision that grows at most polynomially with the system size. Polynomial overhead is the standard goal for quantum algorithms, which aim to achieve a speedup over classical algorithms that often scale exponentially. Hence, the absence or existence of a barren plateau can determine whether or not a quantum speedup is achievable.

Recently, particularly during November 2020, many interesting papers appeared on the arXiv describing research on these barren plateaus; we briefly summarize them here.

In [Pesah2020], the authors analyse the gradient scaling for the parameters of their proposed Quantum Convolutional Neural Network (QCNN). The main result is a novel method for analyzing the scaling of the gradient variance, together with a lower bound on that variance for the QCNN architecture, indicating the absence of barren plateaus in QCNNs.

In [Zhang2020], the authors propose a tree tensor architecture for QNNs. The main result is the proof that tree tensor QNNs have gradients that vanish polynomially with the qubit number, irrespective of which encoding methods are employed.

In [Fontana2020], the authors examine ways to exploit the symmetries and degeneracies that exist in PQCs and investigate how hardware noise affects such symmetries. The main result is the discovery of an exponentially large symmetry in PQCs, yielding an exponentially large degeneracy of the minima in the cost landscape. They also show that noise (specifically non-unital noise) can break these symmetries and lift the degeneracy of minima, turning many of them into local rather than global minima. The main contribution is a novel optimization method that exploits the underlying symmetries in PQCs.

In [Uvarov2020], the authors derive a lower bound on the variance of the gradient, which depends mainly on the width of the causal cone of each term in the Pauli decomposition of the cost function. The main contribution is that this result further clarifies the conditions under which barren plateaus can occur. Given a Hamiltonian, they can estimate its susceptibility to barren plateaus, so Hamiltonians can be pre-processed to make the optimization more viable.

In [Arrasmith2020], the authors address the question of whether only gradient-based optimizers suffer from the vanishing-gradient phenomenon. The main result is that gradient-free optimizers do not solve the barren plateau problem: the authors prove that cost-function differences are exponentially suppressed in a barren plateau. Hence, without exponential precision, gradient-free optimizers will not make progress in the optimization.

In [Anand2020], the authors explore Natural Evolutionary Strategies (NES) for the optimization of randomly initialized PQCs in the region of vanishing gradients. The main result is that the NES gradient estimator can alleviate the effect of the exponentially decreasing variance. They compare their two approaches, namely exponential and separable NES, against standard gradient descent and show that optimization of randomly initialized PQCs can be performed with significantly fewer circuit evaluations using NES, while achieving accuracy comparable to gradient-based methods. They also show that NES methods can be used to amplify gradients and improve parameter initialization for gradient-based approaches.
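The core of the NES approach is a Monte-Carlo 'search gradient'. The sketch below shows the basic estimator on a toy cost function standing in for a PQC expectation value; the actual algorithms in [Anand2020] additionally adapt the search distribution, which we omit here.

```python
import numpy as np

def nes_gradient(f, theta, sigma=0.1, samples=200, rng=np.random.default_rng(1)):
    """Monte-Carlo 'search gradient' in the spirit of NES:
    grad ≈ (1 / (samples * sigma)) * sum_i [f(theta + sigma*eps_i) - f(theta)] * eps_i,
    with f(theta) used as a simple variance-reducing baseline."""
    eps = rng.standard_normal((samples, theta.size))
    baseline = f(theta)
    values = np.array([f(theta + sigma * e) for e in eps]) - baseline
    return (eps.T @ values) / (samples * sigma)

# Toy cost standing in for a PQC expectation value (not the circuits used in [Anand2020]).
f = lambda th: np.sum(np.cos(th))
theta = np.array([0.3, 1.2, -0.7])
print("NES estimate  :", nes_gradient(f, theta))
print("exact gradient:", -np.sin(theta))
```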

It is clear from the ongoing research that the vanishing-gradient problem is substantial when working with PQCs, which in turn is a crucial aspect for proving quantum advantage in the NISQ era. Although different approaches have been suggested to overcome the limitations arising from barren plateaus, there is no single, simple way to do so. Most of the strategies discussed focus on near-term heuristic solutions that can be obtained from PQCs, with the main goal being a polynomial, rather than exponential, overhead in the number of qubits for the precision of the cost function, which is one way quantum advantage could be demonstrated. This is pursued by employing new optimization strategies and optimizers, re-designing the Hamiltonian or ansatz, exploiting symmetries, and many other means. Researchers in this field will likely keep inventing new ways to avoid or mitigate barren plateaus for the foreseeable future, and these recent developments have substantially advanced the understanding of this domain.

References:
[Anand2020] Natural Evolutionary Strategies for Variational Quantum Computation, by Abhinav Anand, Matthias Degroote and Alán Aspuru-Guzik, arXiv:2012.00101
[Arrasmith2020] Effect of barren plateaus on gradient-free optimization, by Andrew Arrasmith, M. Cerezo, Piotr Czarnik, Lukasz Cincio, Patrick J. Coles, arXiv:2011.12245
[Fontana2020] Optimizing parametrized quantum circuits via noise-induced breaking of symmetries, by Enrico Fontana, M. Cerezo, Andrew Arrasmith, Ivan Rungger and Patrick J. Coles, arXiv:2011.08763
[Pesah2020] Absence of Barren Plateaus in Quantum Convolutional Neural Networks, by Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger and Patrick J. Coles, arXiv:2011.02966
[McClean2018] Barren plateaus in quantum neural network training landscapes, by Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush and Hartmut Neven, Nat Commun 9, 4812 (2018)
[Uvarov2020] On barren plateaus and cost function locality in variational quantum algorithms, by Alexey Uvarov and Jacob Biamonte, arXiv:2011.10530
[Zhang2020] Toward Trainability of Quantum Neural Networks, by Kaining Zhang, Min-Hsiu Hsieh, Liu Liu and Dacheng Tao, arXiv:2011.06258
Published in Blog

Qu&Co comments on this publication:

In this paper, the meta-VQE algorithm is presented, which is an adaptation of the Variational Quantum Eigensolver (VQE). VQE is a variational algorithm originally proposed for finding the ground-state energy of a given Hamiltonian by variationally minimizing its expectation value with a parametrized quantum circuit. The cost function of VQE is the expectation value of the model Hamiltonian. The variational principle states that this value is an upper bound on the ground-state energy, so the task reduces to minimizing this value by fine-tuning the parameters of the circuit.

The meta-VQE algorithm is inspired by quantum machine learning (QML) algorithms and is able to learn the ground-state energy profile of a parametrized Hamiltonian. First, the circuit is trained with a set of Hamiltonian parameters, which are encoded in the gates of an encoding unitary. By designing a cost function from the expectation values at all these Hamiltonian training points, the algorithm extracts the optimal values of the variational parameters. Then, one can compute the energy for other Hamiltonian values by simply running the meta-VQE circuit with the parameters obtained in the minimization. In addition, one can use the result of a meta-VQE training as the starting point of a standard VQE algorithm (the opt-meta-VQE), instead of random initialization.
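A minimal sketch of the idea, with a toy single-qubit Hamiltonian H(λ) = X + λZ and a linear-in-λ encoding of our own choosing (not the models or ansätze used in the paper), looks as follows.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def H(lam):
    """Toy parametrized Hamiltonian H(lam) = X + lam*Z (our example, not the paper's model)."""
    return X + lam * Z

def state(w, lam):
    """Encoding ansatz: a single Ry whose angle is a trainable linear function of lam."""
    angle = w[0] * lam + w[1]
    ry = np.cos(angle / 2) * np.eye(2) - 1j * np.sin(angle / 2) * Y
    return ry @ np.array([1, 0], dtype=complex)

def meta_cost(w, train_lams):
    """Sum of energies over all training Hamiltonians, in the spirit of the meta-VQE cost."""
    return sum(np.real(np.vdot(state(w, l), H(l) @ state(w, l))) for l in train_lams)

train_lams = np.linspace(-2, 2, 5)            # a few training points
res = minimize(meta_cost, x0=[0.1, 0.1], args=(train_lams,), method="Nelder-Mead")

# After training, estimate the energy profile at unseen lam values with the same weights;
# the result captures the overall shape, which a subsequent standard VQE can refine.
for lam in [-1.5, 0.0, 1.5]:
    e = np.real(np.vdot(state(res.x, lam), H(lam) @ state(res.x, lam)))
    print(lam, e, -np.sqrt(1 + lam ** 2))     # compare with the exact ground energy
```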

This technique can help the algorithm converge to the correct solution, whereas previous variants of VQE often suffer from convergence issues due to the exponential growth of the Hilbert space paired with a randomly initialized ansatz, typically resulting in a limited probability of finding the ground state as the quantum system grows. The characteristic trait of the meta-VQE is that it can first explore the ground-state energies over the Hamiltonian parameter space with only a few training points and then use the result as an initial state for a precise VQE, resulting in high precision. Furthermore, the meta-VQE can easily be adapted to be part of other VQE strategies to increase performance, and it is able to capture global correlations with a few training points, alleviating the refined optimization of the individual points in a later step. The authors conclude that the meta-VQE can find the general energy shape but not provide accurate values, in contrast to standard VQE. However, the opt-meta-VQE proves valuable, achieving better results than standard VQE with random initialization.

Published in Blog

Qu&Co comments on this publication:

Quantum computational supremacy (QCS) arguments have been provided to demonstrate that quantum computers will soon outperform classical computers on a variety of tasks. However, in order to truly prove supremacy, several strict measures need to be taken: an appropriate algorithm must be selected, a quantum device to run the algorithm must be designed, the ability to verify the results of the calculations must be considered, and a complexity-theoretic argument supporting the claim that a classical computer would be unable to run such an algorithm should be provided. Quantum circuits running on currently realized quantum computing chips might still be simulable on highly parallelized state-of-the-art classical supercomputers, so at the moment one can only make conjectures about QCS. Typically, complexity-theoretic arguments establish that classical simulation of certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuit, but this by itself does not pin down the number of qubits these quantum circuits must have for their classical simulation to be intractable on modern classical supercomputers.

In this paper, three refined, fine-grained conjectures about quantum supremacy are provided, and it is calculated that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-Time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements for boson sampling circuits are sufficient under these conjectures. Although noise in current quantum devices cannot be fully modelled, a lower bound on the runtime of any multiplicative-error classical simulation of all three circuit families is provided.

This paper provides a concrete estimate of the number of qubits required for three algorithms that have gained a lot of attention during the NISQ era. While the original work stems from 2018, the number of qubits required has been recalculated in the newest version of this paper, which gives a good indication of how the fidelity of quantum chips has improved in the last two years, as well as of the latest understanding in complexity theory and the ongoing evolution of the classical competition.

Published in Blog

Qu&Co comments on this publication:

In this article, John Preskill provides his view on the near-term (5-10 years ahead) societal and commercial impact of quantum computers. Specifically, the author focuses on what he calls Noisy Intermediate-Scale Quantum (NISQ) technology: quantum computers with 50-100 qubits for which noise in quantum gates limits the size of quantum circuits that can be executed reliably. Such NISQ devices may be able to perform tasks that surpass the capabilities of today’s classical digital computers and will be useful tools for exploring, for example, many-body quantum physics, and may have other useful applications. But, the author states, the 100-qubit quantum computer will not change the world right away, and we should regard it as a significant step toward the more powerful quantum technologies of the future.

Published in Blog

Qu&Co comments on this publication:

The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated due to its potential during the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, and then p layers of unitaries U(C,γ) and U(B,β) are alternately applied; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
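For readers who want to see the structure explicitly, here is a minimal state-vector sketch of one QAOA layer for Max-Cut on a 4-node ring (our illustrative example; the papers discussed below study Maximum Independent Set on random graphs).

```python
import numpy as np
from scipy.linalg import expm

# Minimal QAOA sketch for Max-Cut on a 4-node ring (illustrative toy, not the papers' setting).
n = 4
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
X = np.array([[0, 1], [1, 0]], dtype=complex)

def kron_at(op, q):
    """Embed a single-qubit operator at position q in an n-qubit identity."""
    mats = [np.eye(2, dtype=complex)] * n
    mats[q] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# Cost operator C counts cut edges; it is diagonal in the computational basis.
def bit_z(q):
    """Diagonal of Z on qubit q over all 2^n basis states (+1 for bit 0, -1 for bit 1)."""
    return np.array([1 - 2 * ((i >> (n - 1 - q)) & 1) for i in range(2 ** n)])

C_diag = sum(0.5 * (1 - bit_z(i) * bit_z(j)) for i, j in edges)
B = sum(kron_at(X, q) for q in range(n))                      # mixing operator B = sum_q X_q

def qaoa_state(gammas, betas):
    state = np.full(2 ** n, 1 / np.sqrt(2 ** n), dtype=complex)   # |+>^n product state
    for g, b in zip(gammas, betas):
        state = np.exp(-1j * g * C_diag) * state                  # U(C, gamma), diagonal
        state = expm(-1j * b * B) @ state                         # U(B, beta)
    return state

psi = qaoa_state([0.8], [0.4])                                    # p = 1 layer
print("expected cut value:", np.real(np.vdot(psi, C_diag * psi)))
```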

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the focus is the problem of Maximum Independent Set (MIS) on random graphs: finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but it is shown that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w*log(n)/log(d/ln(2)) for any w < 1 with d big enough. The quantum algorithm consists of p unitaries that each respect the locality of the underlying graph. With a fixed average degree d, this means that each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph, these influence spheres do not intersect, and the authors show that measurements of these qubits are uncorrelated. However, if p is large enough that d^p exceeds n, their arguments do not apply and there is no indication that the QAOA will fail.

The Overlap Gap Property (OGP) states that, for a given random graph, the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log(n), the QAOA cannot do better than finding an independent set of size 0.854 times the optimal, for d large. Because the logarithm grows slowly, even at one million qubits the algorithm is only provably blocked if p is in the single digits. At higher p the algorithm “sees” the whole graph, and there is no indication that performance is limited.

Worst-case instances can also be constructed to exploit the QAOA’s weaknesses. By construction of the operators C and B, the QAOA is an inherently local quantum algorithm: conjugating a single-qubit operator through a shallow circuit produces an operator involving only that qubit and those connected to it on the graph. This locality can be exploited to construct examples where the QAOA’s performance is provably below optimal. For example, for Max-Cut, when p is a small enough constant times log(n), the approximation ratio is shown to be no better than 1/2 for large d, and for MIS the approximation ratio goes to 0 at large d.

This is an important result for the problem of MIS on random graphs that, although not directly generalizable to other problems, is still valuable for establishing bounds on problems QAOA is applied to. Knowing that, in the average case for MIS, the QAOA needs to see the whole graph, and therefore requires a large p, will change the way the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Published in Blog

Qu&Co comments on this publication:

In recent years many academics and corporates have focused on solving combinatorial optimization problems on quantum-annealing devices like those offered by D-Wave. Now that noisy intermediate-scale quantum (NISQ) gate-based quantum processors (like those of Google, IBM, Rigetti and Intel) are nearing the moment of quantum supremacy, it is interesting to learn what gate-based quantum computers can bring to combinatorial optimization problems. In this paper, Zahedinejad et al. provide a survey of approaches to solving different types of combinatorial optimization problems, in particular quadratic unconstrained binary optimization (QUBO) problems, on a gate-model quantum computer. They focus on four different approaches: digitized adiabatic quantum computing, global quantum optimization algorithms, quantum algorithms that approximate the ground state of a general QUBO problem, and quantum sampling.
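Common to all such approaches is the mapping of the QUBO's binary variables onto spin variables, turning the cost into an Ising Hamiltonian whose ground state can then be targeted on the gate model. A short sketch of this standard change of variables (our own, with one of several possible sign conventions) is given below.

```python
import numpy as np
from itertools import product

def qubo_to_ising(Q):
    """Change of variables x_i = (1 - s_i)/2 turning min x^T Q x (x_i in {0,1})
    into an Ising form sum_i h_i s_i + sum_{i<j} J_ij s_i s_j + offset (s_i in {-1,+1}).
    Sign conventions differ between references; this is one consistent choice."""
    Q = 0.5 * (Q + Q.T)                                  # symmetrize
    h = -0.5 * Q.sum(axis=1)
    J = 0.5 * np.triu(Q, k=1)
    offset = 0.25 * (Q.sum() + np.trace(Q))
    return h, J, offset

# Brute-force check on a random 4-variable QUBO.
rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 4))
h, J, off = qubo_to_ising(Q)
for bits in product([0, 1], repeat=4):
    x = np.array(bits)
    s = 1 - 2 * x                                        # x = (1 - s)/2
    assert np.isclose(x @ Q @ x, h @ s + s @ J @ s + off)
print("QUBO and Ising energies agree on all bitstrings")
```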

Published in Blog

Qu&Co comments on this publication:

In this paper, Matthew Hastings presents a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT. While the time required is still exponential, the algorithm provably outperforms Grover's algorithm, assuming a mild condition on the number of low-energy states of the target Hamiltonian.

Published in Blog

Qu&Co comments on this publication:

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine learning problems that involve a large classical data set. However, existing quantum algorithms use the trivial methods of turning large classical datasets into either quantum oracles or quantum states, which are so expensive that they negate any possible quantum advantage. Such quantum algorithms focus on problems in which the classical runtime scales rapidly with the input size, perhaps exponentially. To achieve quantum speedup with algorithms like Grover search, a “quantum RAM” has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and building one might face the same challenges that quantum computer hardware faces, it has the potential to provide significant speedups for applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data-reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model. The coreset can be constructed either by a classical computer alone or by a combination of classical and quantum computers utilizing quantum measurements.
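As a rough classical illustration of the coreset idea, the sketch below uses a standard 'lightweight coreset' recipe for k-means (our choice of construction, not necessarily the one used in the paper): points are importance-sampled and reweighted so that the weighted loss on the small subset approximates the loss on the full data set for any candidate model.

```python
import numpy as np

def lightweight_coreset(X, m, rng=np.random.default_rng(0)):
    """Classical 'lightweight coreset' for k-means: sample points with a probability mixing
    uniform and distance-to-mean terms, and reweight by the inverse sampling probability."""
    n = len(X)
    d2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / n + 0.5 * d2 / d2.sum()
    idx = rng.choice(n, size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])
    return X[idx], weights

def kmeans_loss(X, centers, w=None):
    """(Weighted) k-means loss: sum of squared distances to the nearest center."""
    w = np.ones(len(X)) if w is None else w
    d = np.min(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1), axis=1)
    return float(w @ d)

rng = np.random.default_rng(1)
X = np.concatenate([rng.normal(0, 1, (500, 2)), rng.normal(5, 1, (500, 2))])
C, w = lightweight_coreset(X, m=50)
centers = np.array([[0.0, 0.0], [5.0, 5.0]])       # one candidate model from the space Y
print(kmeans_loss(X, centers), kmeans_loss(C, centers, w))   # similar loss values
```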

The main message of this work is that, in order to avoid losing quantum speedup for ‘big-data’ applications, techniques such as data reduction are required, so that the time spent loading and storing the data set remains limited. Also, non-fault-tolerant quantum algorithms should be designed so as not to require an excessive number of gates, so that the algorithm can run before the qubits lose their coherence and invalidate the result. The goal of the paper is to draw attention to the problems that arise in this setting, such as testing for quantum advantage when data reduction is used, to explore general data-reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Published in Blog

Qu&Co comments on this publication:

Clustering is a form of unsupervised machine learning, where instances are organized into groups whose members share similarities. The assignments are, in contrast to classification, not known a priori, but generated by the algorithm. In this paper, Neukart et al. present an algorithm for quantum-assisted cluster analysis (QACA) that makes use of the topological properties of a D-Wave 2000Q quantum processing unit (QPU). They explain how the problem can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that the introduced quantum-assisted clustering algorithm is, in terms of accuracy, equivalent to commonly used classical clustering algorithms.
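One common way to phrase such a clustering task as a QUBO (a generic sketch of ours, not necessarily the exact encoding of Neukart et al.) is to introduce a binary variable per point-cluster pair, penalize intra-cluster distances, and add a constraint term forcing each point into exactly one cluster.

```python
import numpy as np

def clustering_qubo(D, k, penalty):
    """Generic clustering-as-QUBO sketch: binary x[i, c] = 1 if point i goes to cluster c.
    Minimize intra-cluster distances plus a penalty forcing one cluster per point."""
    n = D.shape[0]
    Q = np.zeros((n * k, n * k))
    idx = lambda i, c: i * k + c
    for c in range(k):
        for i in range(n):
            for j in range(i + 1, n):
                Q[idx(i, c), idx(j, c)] += D[i, j]          # pay d_ij if i and j share cluster c
    for i in range(n):                                      # expand penalty*(sum_c x[i,c] - 1)^2
        for c in range(k):
            Q[idx(i, c), idx(i, c)] += -penalty             # x^2 = x for binary x, so diag gets -p
            for c2 in range(c + 1, k):
                Q[idx(i, c), idx(i, c2)] += 2 * penalty
    return Q   # constant term penalty*n dropped; minimize x^T Q x over x in {0,1}^(n*k)

pts = np.array([[0, 0], [0, 1], [5, 5], [5, 6]], dtype=float)
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
Q = clustering_qubo(D, k=2, penalty=10.0)
```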

Published in Blog