Displaying items by tag: NISQ algorithms

Quantum computers hold unique potential to outperform their classical counterparts with algorithms based on real-time evolution. Given the intrinsically quantum-mechanical relation between the time and energy domains, there is growing interest in quantum algorithms that adopt a time-dependent perspective to solve time-independent problems. Simulating the time-dependent Schrödinger equation is therefore a natural framework to implement.

Presently, there are plenty of quantum algorithms based on solving the time-independent Schrödinger equation to determine the Hamiltonian eigenvalues and eigenstates. The majority of these algorithms are limited classically by the exponential scaling of Hilbert space with system size, and they require considerable quantum resources to run. So far, methods such as Approximate Imaginary Time Evolution and Krylov diagonalization are more widely used in classical simulation of static phenomena than real-time evolution, as the latter has computational limitations; there also exist practical limitations in getting closer to the ground state during the evolution. However, the states generated through real-time evolution can provide a basis from which one can extract ground and excited states. In some cases, this route may be faster than other quantum methods that use time evolution as a subroutine for goals other than dynamical behaviour, such as using quantum phase estimation (QPE) for spectral analysis.

In this work, the authors propose and analyze variational quantum phase estimation (VQPE), a method for computing ground and excited states from states generated by real-time evolution. The work includes theoretical derivations for applying this method to strongly correlated Hamiltonians. The proposed VQPE method rests on a set of equations, with a simple geometrical interpretation, that specifies conditions on the time evolution. These conditions decouple the eigenstates out of the set of time-evolved expansion states and connect the method to the classical filter diagonalization algorithm. Furthermore, the authors introduce a unitary formulation of VQPE that allows for a direct comparison to iterative phase estimation. In this formulation, the number of matrix elements that need to be measured scales linearly instead of quadratically with the number of expansion states, thereby reducing the number of quantum measurements.
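To make the linear algebra behind this concrete, here is a minimal classical emulation of the VQPE idea. This is a sketch under our own illustrative assumptions: a small random Hermitian matrix stands in for a molecular Hamiltonian, and the overlap and Hamiltonian matrix elements are computed exactly rather than measured on hardware.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(7)
dim, n_states, dt = 16, 8, 0.5

# Toy Hamiltonian: random Hermitian matrix standing in for a molecular H.
A = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (A + A.conj().T) / 2

# Expansion states |phi_k> = exp(-i H k dt) |phi_0> from real-time evolution.
phi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi0 /= np.linalg.norm(phi0)
U = expm(-1j * H * dt)
basis = [phi0]
for _ in range(n_states - 1):
    basis.append(U @ basis[-1])
V = np.column_stack(basis)

# Overlap and Hamiltonian matrices in this non-orthogonal basis.
S = V.conj().T @ V
Hm = V.conj().T @ H @ V

# Simple regularization: discard near-null directions of S (the kind of step
# that also mitigates noise), then solve the generalized problem H c = E S c.
s, W = np.linalg.eigh(S)
P = W[:, s > 1e-10] / np.sqrt(s[s > 1e-10])
energies = np.linalg.eigvalsh(P.conj().T @ Hm @ P)

print("VQPE ground-state estimate:", energies[0])
print("exact ground-state energy :", np.linalg.eigvalsh(H)[0])
```

In the unitary formulation mentioned above, the matrix elements depend only on the time difference (k - j)Δt, so the matrices are Toeplitz; this is the structural reason the number of distinct elements to measure scales linearly rather than quadratically.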

The authors also analyze the effects of noise on the convergence properties, showing that simple regularization techniques suffice to mitigate them. The VQPE approach was demonstrated on a wide range of molecules of varying complexity by simulating the algorithm classically, as well as on the transverse-field Ising model using IBM's quantum simulator and hardware. For several weakly and moderately correlated molecules, as well as the strongly correlated transition-metal dimer Cr2, chemical accuracy for the ground-state energy is attained in fewer than ~50 real-time timesteps. This compares favourably with the ~10^6 timesteps required by state-of-the-art classical methods, with orders of magnitude fewer variational parameters.

The results establish VQPE as a natural and efficient quantum algorithm for ground- and excited-state calculations of general many-body systems. On one hand, QPE uses its deeper circuits to achieve Heisenberg-limited energy resolution, which is more efficient for reaching high accuracy in overall run time. On the other hand, for the same number of time steps per circuit, VQPE achieves higher accuracy than idealized QPE. VQPE thus offers near-term advantages, whereas the benefits of QPE are long-term; this makes VQPE the better candidate for near-term hardware with shorter circuit depths and few qubits available as ancillae. The requirement that each new time evolution generate a linearly independent expansion state sets a lower bound on the time step size; choosing the time step near this optimum keeps the variational ansatz compact and also minimizes the total simulation time, as required on NISQ hardware. This compactness, together with their NISQ compatibility, makes VQPE approaches some of the most promising platforms for quantum simulation of many-body systems beyond the reach of classical computation. Since real-time evolution is natural to implement on quantum hardware, the approach holds immense promise for NISQ implementation.
Published in Blog
Optimizing quantum algorithms on near-term noisy intermediate-scale quantum (NISQ) devices is an essential requirement for demonstrating quantum advantage over existing classical computing. The capabilities of these devices are constrained by high noise levels and limited error mitigation. Combinatorial optimization on quantum processors is one promising application that may remain feasible within these noise constraints. Among the various existing approaches to optimization, the most notable are the Quantum Approximate Optimization Algorithm (QAOA) and variational quantum algorithms, especially for eigenvalue problems of high complexity.

In this work, the authors propose an iterative "Layer VQE" (L-VQE) approach, inspired by the well-known Variational Quantum Eigensolver (VQE). The work conducts numerical studies with circuits of up to 40 qubits and 352 parameters, a hard problem to simulate, using a matrix product state (MPS) representation to perform large-scale simulations of the quantum circuits. The performance of L-VQE is further evaluated using a noisy simulator of a trapped-ion quantum computer.

It has been shown in the literature that, for a graph with n vertices, solving the k-communities modularity maximization problem requires kn qubits when the problem is encoded with the well-known Ising model Hamiltonian. The authors of this paper propose a novel formulation that requires only n log(k) qubits. They further compare the performance of L-VQE with QAOA, which is widely considered a strong candidate for quantum advantage on NISQ computers. However, the many-body terms in the Hamiltonian make this formulation harder to implement in the QAOA setting. The numerical results suggest that QAOA achieves comparatively lower approximation ratios and requires significantly deeper circuits; the L-VQE approach can close this gap on NISQ devices.

The proposed L-VQE algorithm starts from a relatively simple and shallow hardware-efficient ansatz with a small number of parameterized gates and then systematically adds layers to it. This differs from most VQE approaches, in which the ansatz is fixed upfront. The work claims that this approach makes the ansatz more expressive while reducing the optimization overhead. Furthermore, the numerical results suggest that adding layers to the ansatz increases the probability of finding the ground state, or a state sufficiently close to it. The approach is deemed simple enough to generalize to different quantum architectures. It is also shown numerically that standard VQE is more likely to fail in the presence of sampling noise, whereas L-VQE is more robust.
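As an illustration of the layer-growing idea, here is our own minimal statevector sketch (not the authors' code; the ansatz, the toy Ising cost and all names are illustrative). It optimizes a shallow ansatz and then appends zero-initialized layers before re-optimizing. Note one simplification: in the paper, added layers are initialized to act exactly as the identity, whereas here the zero-initialized rotations only keep the new block close to the previous circuit.

```python
import numpy as np
from scipy.optimize import minimize

n = 4                      # qubits
dim = 2 ** n

def apply_ry(psi, theta, q):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    g = np.array([[c, -s], [s, c]], dtype=complex)
    t = np.moveaxis(psi.reshape([2] * n), q, 0)
    return np.moveaxis(np.tensordot(g, t, axes=1), 0, q).reshape(dim)

def apply_cz(psi, q1, q2):
    t = psi.reshape([2] * n).copy()
    sl = [slice(None)] * n
    sl[q1], sl[q2] = 1, 1
    t[tuple(sl)] *= -1
    return t.reshape(dim)

def ansatz(params, n_layers):
    # Hardware-efficient layers: RY on every qubit, then a CZ ladder.
    psi = np.zeros(dim, dtype=complex); psi[0] = 1.0
    p = iter(params)
    for _ in range(n_layers):
        for q in range(n):
            psi = apply_ry(psi, next(p), q)
        for q in range(n - 1):
            psi = apply_cz(psi, q, q + 1)
    return psi

# Toy diagonal Ising cost H = -sum_i Z_i Z_{i+1}.
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
z = 1 - 2 * bits
Hdiag = -np.sum(z[:, :-1] * z[:, 1:], axis=1)

def energy(params, n_layers):
    psi = ansatz(params, n_layers)
    return float(np.real(np.vdot(psi, Hdiag * psi)))

# L-VQE-style loop: optimize a shallow ansatz, then grow it layer by layer.
rng = np.random.default_rng(0)
res = minimize(energy, 0.01 * rng.normal(size=n), args=(1,), method="COBYLA")
for layers in range(2, 4):
    theta = np.concatenate([res.x, np.zeros(n)])   # new layer starts near identity
    res = minimize(energy, theta, args=(layers,), method="COBYLA")
print("L-VQE energy:", res.fun, " exact ground energy:", Hdiag.min())
```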

Lastly, further numerical studies in this work indicate that an ansatz with entanglement outperforms one without it, which is intuitive because entanglement is an important resource for quantum computing; even when the target optimal state is a product state, optimization towards it may be better guided by an entangling ansatz. These results provide insight into the introduction of additional entangling parameters in VQE for classical problems. Entanglement is proposed to break down barriers in the optimization landscape, making it more convex and therefore more amenable to simple local outer-loop optimizers. This contrasts with previous results in which no beneficial effects of entanglement were observed, a difference that underlines how much the parameterization choice and the overall design of the VQE procedure contribute to the success of such methods.
Published in Blog
In the Noisy Intermediate-Scale Quantum (NISQ) era, in the absence of large-scale quantum error correction, the number of gates that can be applied while maintaining computational coherence is strongly limited by hardware noise and decoherence. To alleviate some of these detrimental effects, current generations of quantum algorithms often rely on a hybrid classical-quantum approach. Such approaches consider a trial quantum state (ansatz state) with a tractable number of parameters and relatively short circuit depth; these parameters are then optimized so as to approximate a target state as accurately as possible. In most such applications to date, the target state was variationally optimized to represent a lowest-energy eigenstate (ground state) of some quantum Hamiltonian.

However, one can also envision simulating unitary time evolution (or 'dynamics') with such variational algorithms. The authors of today's paper first reference the Time-Dependent Variational Algorithm (TDVA), which encodes the state into a variational circuit and iteratively updates the parameters by solving the corresponding equation of motion. A significant drawback of this existing algorithm, however, is its expensive quadratic cost in the total number of variational parameters.

To tackle this problem, the authors introduce a novel hybrid algorithm to simulate the real-time evolution of quantum systems using parameterized quantum circuits. Their method, "projected Variational Quantum Dynamics" (p-VQD), realizes an iterative, global projection of the exact time evolution onto the parameterized manifold. In the small time-step limit, this is equivalent to McLachlan's variational principle, and it optimizes all parameters at once. The algorithm is shown to overcome the drawbacks of existing approaches, being both global (it optimizes all parameters at once) and efficient (it scales linearly with the number of parameters). Moreover, it requires no auxiliary (ancilla) qubits, and the circuit depth remains constant throughout the simulation. The authors use circuit differentiation to compute gradients analytically and use them for gradient-descent optimization.
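The following one-qubit sketch is entirely illustrative (a single spin under H = X, with a generic RZ·RY ansatz of our choosing), but it shows the structure of a p-VQD iteration: Trotter-evolve the current variational state exactly, then project back onto the manifold by minimizing the infidelity with parameter-shift gradients.

```python
import numpy as np
from scipy.linalg import expm

X = np.array([[0, 1], [1, 0]], dtype=complex)
H, dt = X, 0.05

def psi(theta):
    """Illustrative one-qubit ansatz |psi(theta)> = RZ(t2) RY(t1) |0>."""
    t1, t2 = theta
    ry = np.array([[np.cos(t1 / 2), -np.sin(t1 / 2)],
                   [np.sin(t1 / 2),  np.cos(t1 / 2)]], dtype=complex)
    rz = np.diag([np.exp(-1j * t2 / 2), np.exp(1j * t2 / 2)])
    return rz @ ry @ np.array([1, 0], dtype=complex)

def infidelity(dtheta, theta, target):
    return 1 - abs(np.vdot(psi(theta + dtheta), target)) ** 2

theta = np.array([0.1, 0.0])
U = expm(-1j * H * dt)
for _ in range(100):                     # time steps
    target = U @ psi(theta)              # one exact (Trotter) step
    dtheta = np.zeros(2)
    for _ in range(200):                 # project back onto the ansatz manifold
        grad = np.zeros(2)
        for k in range(2):               # parameter-shift gradient
            e = np.zeros(2); e[k] = np.pi / 2
            grad[k] = (infidelity(dtheta + e, theta, target)
                       - infidelity(dtheta - e, theta, target)) / 2
        dtheta -= 0.5 * grad
    theta = theta + dtheta               # accept the projected step

Z = np.diag([1.0, -1.0])
exact = expm(-1j * H * 100 * dt) @ psi([0.1, 0.0])
print("<Z> p-VQD:", np.vdot(psi(theta), Z @ psi(theta)).real)
print("<Z> exact:", np.vdot(exact, Z @ exact).real)
```

On hardware, the overlap itself is estimated with a circuit containing both the evolved ansatz and the inverse of the shifted ansatz, which is why the compiled circuit is roughly twice as deep as the ansatz, as discussed below.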

This global approach potentially extends the scope of existing efficient variational methods, which instead typically rely on the iterative optimization of a restricted subset of variational parameters. The authors claim that this approach can be particularly advantageous over existing global optimization algorithms based on the time-dependent variational principle.

They show that the algorithm is asymptotically more hardware-efficient than the standard variational algorithm while retaining higher accuracy. A current drawback of the method is that the circuit constructed on the quantum device is approximately twice as deep as the ansatz used to represent the system. However, by suitably controlling the number of two-qubit gates in the particular ansatz of choice, the authors comment that p-VQD is already implementable for simulating small quantum systems on available devices.

One possible application of this approach is the study of the dynamical properties of two-dimensional interacting systems, a notoriously difficult problem for classical computation. As with all other variational algorithms, the choice of the right parametrization is fundamental for the algorithm to succeed. In this sense, having an efficient quantum algorithm to perform variational time evolution is essential for comparison with classical results obtained from variational states based on tensor or neural networks.
Published in Blog
In this edition of Active Quantum Research Areas (AQRAs), we discuss recent research on barren plateaus in variational quantum algorithms.

Parametrized quantum circuits (PQCs) are employed in Quantum Neural Networks (QNNs) and Variational Quantum Algorithms (VQAs), two widely researched families of algorithms that may allow for near-term quantum advantage. QNNs have recently been proposed as generalizations of classical neural networks that can potentially analyse quantum data more efficiently. VQAs have been proposed as noise-resilient algorithms suited to the NISQ era.

Both families may rely on gradient-based optimization to iteratively minimize a cost function evaluated by measuring the output(s) of a quantum processor. Training PQCs involves a hybrid quantum-classical optimization loop. Typically, the problem is encoded in a cost (or loss) function that is ideally efficient to evaluate on a quantum computer but computationally expensive on a classical one. While the quantum computer evaluates the cost, a classical optimizer updates some (usually continuous) parameters associated with the quantum operations. A PQC with fixed gate structure is often referred to as a variational ansatz.

A 'barren plateau' is the phenomenon of exponentially vanishing gradients in sufficiently expressive parametrized quantum circuits; the term was first coined in the context of quantum circuits in [McClean2018]. It has been established that the onset of a barren plateau regime depends on the cost function, although the particular behaviour has been demonstrated only for certain classes of cost functions. Barren plateau landscapes correspond to gradients that vanish exponentially in the number of qubits, so that exponentially high precision is needed to navigate through the landscape. Such landscapes have been demonstrated for variational quantum algorithms and quantum neural networks with either deep circuits or global cost functions. Even if one manages to avoid barren plateaus, there remains the difficulty of optimizing in the presence of the hardware noise that defines NISQ devices, which is expected to modify the cost landscape.
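The effect is simple to observe numerically. The sketch below is our own illustration, unrelated to the specific papers summarized later: it estimates the variance of one gradient component over randomly initialized layered circuits with a global cost C = 1 - |<0...0|psi(theta)>|^2. The printed variances should decay roughly exponentially with the number of qubits.

```python
import numpy as np

def layered_state(n, params):
    """Illustrative hardware-efficient PQC: layers of RY rotations
    followed by a ladder of CZ gates."""
    dim = 2 ** n
    psi = np.zeros(dim, dtype=complex); psi[0] = 1.0
    p = iter(params)
    for _ in range(len(params) // n):
        for q in range(n):
            t = next(p)
            c, s = np.cos(t / 2), np.sin(t / 2)
            g = np.array([[c, -s], [s, c]], dtype=complex)
            ax = np.moveaxis(psi.reshape([2] * n), q, 0)
            psi = np.moveaxis(np.tensordot(g, ax, axes=1), 0, q).reshape(dim)
        for q in range(n - 1):
            ax = psi.reshape([2] * n).copy()
            sl = [slice(None)] * n; sl[q] = sl[q + 1] = 1
            ax[tuple(sl)] *= -1
            psi = ax.reshape(dim)
    return psi

def cost(n, params):                   # global cost: 1 - |<0...0|psi>|^2
    return 1.0 - abs(layered_state(n, params)[0]) ** 2

rng = np.random.default_rng(0)
for n in range(2, 9):
    grads = []
    for _ in range(200):               # sample random initializations
        params = rng.uniform(0, 2 * np.pi, size=2 * n * n)  # depth grows with n
        shift = np.zeros_like(params); shift[0] = np.pi / 2
        grads.append((cost(n, params + shift) - cost(n, params - shift)) / 2)
    print(f"n={n}: Var[dC/dtheta_1] = {np.var(grads):.3e}")
```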

The classical optimization step, integral to these algorithms, can be implemented using either gradient-free or gradient-based methods. At first glance, the former seem preferable, since the evaluation of the cost function is subject to imperfections. However, later research developed methods for evaluating gradients analytically (i.e. without relying on finite differences), and access to gradients was shown to speed up convergence to local minima.
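For circuits built from gates of the form exp(-i theta P / 2) with P^2 = I (ordinary Pauli rotations), the analytic gradient of an expectation-value cost follows from two shifted cost evaluations per parameter, with no finite-difference truncation error. A generic sketch:

```python
import numpy as np

def parameter_shift_gradient(cost, params):
    """Analytic gradient via the parameter-shift rule, valid when each
    parameter enters through a gate exp(-i * theta * P / 2) with P^2 = I."""
    grad = np.zeros_like(params)
    for k in range(len(params)):
        shift = np.zeros_like(params)
        shift[k] = np.pi / 2
        grad[k] = (cost(params + shift) - cost(params - shift)) / 2
    return grad
```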

In the absence of a barren plateau, the determination of a minimizing direction in the cost function landscape does not require an exponentially large precision, meaning that one can always navigate through the landscape by measuring expectation values with a precision that grows at most polynomially with the system size. Polynomial overhead is the standard goal for quantum algorithms, which aim to achieve a speedup over classical algorithms that often scale exponentially. Hence, the absence or existence of a barren plateau can determine whether or not a quantum speedup is achievable.

Recently, particularly in the past month of November 2020, many interesting papers appeared on the arXiv describing research on these barren plateaus, which we briefly summarize here.

In [Pesah2020], the authors analyse gradient scaling for the parameters of the Quantum Convolutional Neural Network (QCNN) architecture. The main result is a novel method for analysing the scaling of the gradient variance, together with a lower bound on that variance for QCNNs which vanishes at worst polynomially in the number of qubits, implying the absence of barren plateaus.

In [Zhang2020], the authors propose a tree tensor architecture for QNNs. The main result is the proof that tree tensor QNNs have gradients that vanish polynomially with the qubit number, irrespective of which encoding methods are employed.

In [Fontana2020], the authors examine ways to exploit the symmetries and degeneracies that exist in PQCs and investigate how hardware noise affects such symmetries. The main result is the identification of an exponentially large symmetry in PQCs, yielding an exponentially large degeneracy of the minima in the cost landscape. They also show that noise (specifically, non-unital noise) can break these symmetries and lift the degeneracy of the minima, turning many global minima into local ones. The main contribution is a novel optimization method that exploits the underlying symmetries in PQCs.

In [Uvarov2020], the authors derive a lower bound on the variance of the gradient that depends mainly on the width of the causal cone of each term in the Pauli decomposition of the cost function. The main contribution is that this result further clarifies the conditions under which barren plateaus can occur: given a Hamiltonian, one can estimate its susceptibility to barren plateaus and hence pre-process Hamiltonians to make the optimization more viable.

In [Arrasmith2020], the authors address the question of whether only gradient-based optimizers suffer from the vanishing-gradient phenomenon. The main result is a proof that cost-function differences are exponentially suppressed in a barren plateau, so gradient-free optimizers do not solve the barren plateau problem: without exponential precision, they too make no progress in the optimization.

In [Anand2020], the authors explore Natural Evolutionary Strategies (NES) for the optimization of randomly initialized PQCs in the region of vanishing gradients. The main result is that the NES gradient estimator can alleviate the exponential decrease in variance. They compare their two approaches, exponential and separable NES, against standard gradient descent and show that optimization of randomly initialized PQCs can be performed with significantly fewer circuit evaluations using NES, while achieving accuracy comparable to gradient-based methods. They also show that NES methods can be used to amplify gradients and improve parameter initialization for gradient-based approaches.

It is clear from the ongoing research that the vanishing-gradient problem is substantial when working with PQCs, which in turn is a crucial aspect of proving quantum advantage in the NISQ era. Although different approaches have been suggested to overcome the limitations arising from barren plateaus, there is no single simple way to do so. Most of the strategies discussed focus on near-term heuristic solutions obtainable from PQCs, with the main goal of achieving a polynomial, rather than exponential, overhead in the number of qubits for the precision of the cost function, which is one route by which quantum advantage could be established. This is pursued through new optimization strategies and optimizers, re-design of the Hamiltonian or ansatz, exploitation of symmetries, and many other ways. Researchers in this field will likely keep inventing ways to avoid or mitigate barren plateaus for the foreseeable future, and these recent developments have considerably advanced the understanding of this domain.

References:
[Anand2020] Natural Evolutionary Strategies for Variational Quantum Computation, by Abhinav Anand, Matthias Degroote and Alán Aspuru-Guzik, arXiv:2012.00101
[Arrasmith2020] Effect of barren plateaus on gradient-free optimization, by Andrew Arrasmith, M. Cerezo, Piotr Czarnik, Lukasz Cincio and Patrick J. Coles, arXiv:2011.12245
[Fontana2020] Optimizing parametrized quantum circuits via noise-induced breaking of symmetries, by Enrico Fontana, M. Cerezo, Andrew Arrasmith, Ivan Rungger and Patrick J. Coles, arXiv:2011.08763
[McClean2018] Barren plateaus in quantum neural network training landscapes, by Jarrod R. McClean, Sergio Boixo, Vadim N. Smelyanskiy, Ryan Babbush and Hartmut Neven, Nat Commun 9, 4812 (2018)
[Pesah2020] Absence of Barren Plateaus in Quantum Convolutional Neural Networks, by Arthur Pesah, M. Cerezo, Samson Wang, Tyler Volkoff, Andrew T. Sornborger and Patrick J. Coles, arXiv:2011.02966
[Uvarov2020] On barren plateaus and cost function locality in variational quantum algorithms, by Alexey Uvarov and Jacob Biamonte, arXiv:2011.10530
[Zhang2020] Toward Trainability of Quantum Neural Networks, by Kaining Zhang, Min-Hsiu Hsieh, Liu Liu and Dacheng Tao, arXiv:2011.06258
Published in Blog

Qu&Co comments on this publication:

In this paper, the meta-VQE algorithm is presented, an adaptation of the Variational Quantum Eigensolver (VQE). VQE is a variational algorithm originally proposed for finding the ground-state energy of a given Hamiltonian by variationally minimizing its expectation value with a parametrized quantum circuit. The cost function of VQE is the expectation value of the model Hamiltonian; the variational principle states that this value is an upper bound on the ground-state energy, so the task reduces to minimizing it by fine-tuning the parameters of the circuit.

The meta-VQE algorithm is inspired by quantum machine learning (QML) algorithms and is able to learn the ground-state energy profile of a parametrized Hamiltonian. First, the circuit is trained on a set of Hamiltonian parameters, which are encoded in the gates of an encoding unitary. By designing a cost function from the expectation values at all of these Hamiltonian training points, the algorithm extracts the optimal values of the variational parameters. One can then compute the energy at other Hamiltonian values simply by running the meta-VQE circuit with the parameters obtained in the minimization. In addition, one can use the result of meta-VQE training as the starting point of a standard VQE algorithm (the opt-meta-VQE), instead of random initialization.
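The structure is easy to see in a toy example of our own (not the paper's): a one-qubit Hamiltonian H(lam) = X + lam*Z, an "encoding" ansatz whose rotation angle is a linear function of lam, and a single global cost summing the energies at the training points.

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0, -1.0]).astype(complex)

def H(lam):                    # toy parametrized Hamiltonian
    return X + lam * Z

def state(theta, lam):
    """Toy encoding ansatz: the RY angle is a linear function of lam."""
    a = theta[0] + theta[1] * lam
    return np.array([np.cos(a / 2), np.sin(a / 2)], dtype=complex)

def energy(theta, lam):
    s = state(theta, lam)
    return float(np.real(np.vdot(s, H(lam) @ s)))

# Meta-VQE training: one cost summing the energy over all training points.
train = np.linspace(-2.0, 2.0, 5)
meta = minimize(lambda t: sum(energy(t, lam) for lam in train),
                x0=[0.1, 0.1], method="COBYLA")

# Prediction at unseen lam with the trained circuit alone ...
for lam in (-1.3, 0.4, 1.7):
    print(f"lam={lam:+.1f}  meta-VQE={energy(meta.x, lam):+.4f}"
          f"  exact={np.linalg.eigvalsh(H(lam))[0]:+.4f}")

# ... or opt-meta-VQE: refine with a standard VQE started from meta.x.
refined = minimize(lambda t: energy(t, 0.4), meta.x, method="COBYLA")
print("opt-meta-VQE at lam=0.4:", refined.fun)
```

In this toy, the linear encoding reproduces the overall shape of the exact profile E(lam) = -sqrt(1 + lam^2) but not its precise values, which is exactly the gap the opt-meta-VQE refinement is meant to close.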

This technique can help the algorithm converge to the correct solution, whereas previous variants of VQE often suffer from convergence issues due to the exponential growth of the Hilbert space paired with a randomly initialized ansatz, which typically leaves only a limited probability of finding the ground state as the quantum system grows. The characteristic trait of the meta-VQE is that it can first explore the ground-state energies of the Hamiltonian parameter space with only a few training points and then use the result as an initial state for a precise VQE, resulting in high precision. Furthermore, meta-VQE can easily be adapted as a component of other VQE strategies to increase performance, and it is able to capture global correlations with a few training points, alleviating the refined optimization of the individual points in a later step. The authors conclude that the meta-VQE can find the general energy shape but not an accurate value, in contrast to standard VQE; the opt-meta-VQE, however, proves valuable, achieving better results than standard VQE with random initialization.

Published in Blog

Qu&Co comments on this publication:

Quantum computational supremacy (QCS) arguments have been put forward to demonstrate that quantum computers will soon outperform classical computers on a variety of tasks. However, truly proving supremacy requires several strict measures: an appropriate algorithm must be selected, a quantum device to run it must be designed, the ability to verify the results of the calculations must be considered, and complexity-theoretic support for the claim that a classical computer could not run the algorithm should be provided. Quantum circuits running on currently realized quantum computing chips might still be simulable on highly parallelized state-of-the-art classical supercomputers, so for the moment one can only make conjectures about QCS. Classical simulation of certain families of quantum circuits typically requires time scaling worse than any polynomial in the size of the circuit, which prevents an exact calculation of the number of qubits such circuits must have for their classical simulation to become intractable on modern classical supercomputers.

In this paper, three refined, fine-grained conjectures about quantum supremacy are provided, and it is calculated that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-Time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements for boson-sampling circuits are sufficient for classical simulation to be intractable on current technology. Although noise in current quantum devices cannot be fully approximated, a lower bound on the runtime of any multiplicative-error classical simulation is provided for all three algorithm families.

This paper provides a concrete estimate of the number of qubits required for three algorithms that have gained much attention during the NISQ era. While the original work stems from 2018, the number of qubits required has been recalculated in the newest version of the paper, giving a good indication of how the fidelity of quantum chips has improved over the last two years, as well as of the latest understanding in complexity theory and the ongoing evolution of the classical competition.

Published in Blog

Qu&Co comments on this publication:

In this article, John Preskill provides his view on the near-term (5-10 years ahead) societal and commercial impact of quantum computers. Specifically, the author focuses on what he calls Noisy Intermediate-Scale Quantum (NISQ) technology: quantum computers with 50-100 qubits, for which noise in quantum gates limits the size of quantum circuits that can be executed reliably. Such NISQ devices may be able to perform tasks that surpass the capabilities of today's classical digital computers, will be useful tools for exploring, for example, many-body quantum physics, and may have other useful applications. But, the author states, the 100-qubit quantum computer will not change the world right away; we should regard it as a significant step toward the more powerful quantum technologies of the future.

Published in Blog

Qu&Co comments on this publication:

The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated for its potential in the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, and then p layers of the unitaries U(C, γ) and U(B, β) are applied alternately; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
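To make the alternating structure concrete, here is a minimal statevector sketch of our own (graph, depth and optimizer choices are arbitrary) of QAOA for Max-Cut. Because C is diagonal in the computational basis, U(C, γ) = e^{-iγC} is an elementwise phase, while U(B, β) = e^{-iβB} with B = Σ_j X_j factorizes into single-qubit rotations.

```python
import numpy as np
from scipy.optimize import minimize

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy graph
n, p = 4, 2
dim = 2 ** n

# Max-Cut cost function, diagonal in the computational basis.
bits = (np.arange(dim)[:, None] >> np.arange(n)) & 1
Cdiag = np.array([sum(bits[x, i] != bits[x, j] for i, j in edges)
                  for x in range(dim)], dtype=float)

def apply_mixer(psi, beta):                 # e^{-i beta X} on every qubit
    c, s = np.cos(beta), -1j * np.sin(beta)
    g = np.array([[c, s], [s, c]])
    for q in range(n):
        t = np.moveaxis(psi.reshape([2] * n), q, 0)
        psi = np.moveaxis(np.tensordot(g, t, axes=1), 0, q).reshape(dim)
    return psi

def expected_cut(angles):
    gammas, betas = angles[:p], angles[p:]
    psi = np.full(dim, dim ** -0.5, dtype=complex)      # |+>^n product state
    for g, b in zip(gammas, betas):
        psi = np.exp(-1j * g * Cdiag) * psi             # U(C, gamma)
        psi = apply_mixer(psi, b)                       # U(B, beta)
    return float(np.real(np.vdot(psi, Cdiag * psi)))

rng = np.random.default_rng(3)
best = min((minimize(lambda a: -expected_cut(a), rng.uniform(0, np.pi, 2 * p),
                     method="COBYLA") for _ in range(10)), key=lambda r: r.fun)
print("QAOA expected cut:", -best.fun, " optimal cut:", Cdiag.max())
```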

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the focus is the problem of Maximum Independent Set (MIS): finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but it is shown that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w·log(n)/log(d/ln 2) for any w < 1, with d big enough. The quantum algorithm consists of p unitaries that each respect the locality of the underlying graph. With a fixed average degree d, each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph, these influence spheres do not intersect, and measurements of these qubits are uncorrelated; however, if p is large enough that d^p exceeds n, the arguments no longer apply, and there is no indication that the QAOA will fail.

The Overlap Gap Property (OGP) states that, for a given random graph, the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log n, the QAOA cannot do better than finding an independent set of size 0.854 times the optimal for large d. Because the logarithm grows slowly, even at one million qubits the algorithm can only be shown to be blocked when p is in single digits. At higher p the algorithm "sees" the whole graph, and there is no indication that performance is limited.
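Plugging representative numbers into the barrier quoted above (our own arithmetic, taking the formula as stated) shows how small the obstructed depths are in absolute terms:

```python
import numpy as np

n, d, w = 10**6, 20, 0.99     # a million qubits, average degree 20
p_barrier = w * np.log(n) / (2 * np.log(d / np.log(2)))
print(f"obstruction applies while p < {p_barrier:.1f}")   # about 2: single digits
```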

Worst-case instances can also be constructed to exploit the QAOA's weaknesses. By construction of the operators C and B, the QAOA is inherently a local quantum algorithm: conjugating a single-qubit operator with one layer produces an operator involving only that qubit and those connected to it on the graph, yielding a shallow circuit. This can be exploited to construct examples where the QAOA's performance is provably below optimal. For example, for Max-Cut, when p is a small enough constant times log(n), the approximation ratio is shown to be no better than ½ for large d, and for MIS the approximation ratio goes to 0 at large d.

This is an important result for the problem of MIS on random graphs; although not directly generalizable to other problems, it is valuable for establishing bounds on problems to which the QAOA is applied. Knowing that, in the average case for MIS, the QAOA needs to see the whole graph and therefore requires large p will change the way the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Published in Blog

Qu&Co comments on this publication:

In recent years many academics and corporates have focused on solving combinatorial optimization problems on quantum-annealing devices like those offered by D-Wave. Now that noisy intermediate-scale quantum (NISQ) gate-based quantum processors (like those of Google, IBM, Rigetti and Intel) are nearing the moment of quantum supremacy, it is interesting to learn what gate-based quantum computers can bring to combinatorial optimization problems. In this paper, Zahedinejad et al. provide a survey of approaches to solving different types of combinatorial optimization problems, in particular quadratic unconstrained binary optimization (QUBO) problems, on a gate-model quantum computer. They focus on four approaches: digitizing adiabatic quantum computing, global quantum optimization algorithms, quantum algorithms that approximate the ground state of a general QUBO problem, and quantum sampling.
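For readers unfamiliar with the QUBO format: an instance is fully specified by a square matrix Q, and the task is to minimize x^T Q x over binary vectors x. A brute-force reference solver (classical and exponential, for tiny instances only) makes the format concrete:

```python
import numpy as np
from itertools import product

def solve_qubo_brute_force(Q):
    """Minimize x^T Q x over x in {0,1}^n by enumeration (tiny n only)."""
    n = Q.shape[0]
    best_x, best_val = None, np.inf
    for x in product((0, 1), repeat=n):
        x = np.array(x)
        val = x @ Q @ x
        if val < best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Example: a random symmetric QUBO on 6 variables.
rng = np.random.default_rng(1)
Q = rng.normal(size=(6, 6)); Q = (Q + Q.T) / 2
x, val = solve_qubo_brute_force(Q)
print("optimal x:", x, "cost:", val)
```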

Published in Blog

Qu&Co comments on this publication:

In this paper, Matthew Hastings presents a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT. While the time required is still exponential, the algorithm provably outperforms Grover's algorithm assuming a mild condition on the number of low-energy states of the target Hamiltonian.

Published in Blog