Displaying items by tag: Optimization

Qu&Co comments on this publication:

The Quantum Approximate Optimization Algorithm (QAOA) is a variational quantum algorithm that has been heavily investigated due to its potential during the NISQ era. It is designed to find approximate solutions to combinatorial search problems and was first applied to the Max-Cut problem for d-regular graphs. The system is initially prepared in a product state, and then p layers of unitaries U(C,γ) and U(B,β) are alternately applied; this can be seen as a Trotterized version of (non-adiabatic) quantum annealing with a parametrized annealing pathway.
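
To make the alternating-layer structure concrete, here is a minimal statevector sketch of a depth-p QAOA circuit for Max-Cut in plain Python/NumPy; the graph, angles, and depth are arbitrary illustrative choices, not taken from the papers discussed below.

```python
import itertools
import numpy as np

# Minimal statevector sketch of a depth-p QAOA circuit for Max-Cut.
# Graph, angles, and depth are arbitrary illustrative choices.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a 4-cycle (2-regular graph)
n = 4

# Diagonal of the cost operator C: number of cut edges for each bitstring.
bitstrings = list(itertools.product([0, 1], repeat=n))
cost = np.array([sum(b[u] != b[v] for u, v in edges) for b in bitstrings],
                dtype=float)

def qaoa_state(gammas, betas):
    """Apply p alternating layers U(C,gamma) and U(B,beta) to |+...+>."""
    psi = np.full(2**n, 2**(-n / 2), dtype=complex)   # product state |+>^n
    for gamma, beta in zip(gammas, betas):
        psi = np.exp(-1j * gamma * cost) * psi        # U(C,gamma) is diagonal
        # U(B,beta) = prod_j exp(-i beta X_j), applied one qubit at a time
        rx = np.array([[np.cos(beta), -1j * np.sin(beta)],
                       [-1j * np.sin(beta), np.cos(beta)]])
        psi = psi.reshape([2] * n)
        for q in range(n):
            psi = np.moveaxis(np.tensordot(rx, psi, axes=([1], [q])), 0, q)
        psi = psi.reshape(2**n)
    return psi

psi = qaoa_state(gammas=[0.4], betas=[0.3])           # p = 1
print("expected cut size:", np.real(np.vdot(psi, cost * psi)))
```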

In these two papers by Farhi, Gamarnik & Gutmann, posted on April 20 and May 18, the focus is the problem of Maximum Independent Set (MIS) on random graphs: finding a large independent set in a random graph of fixed average degree d. Generally, the performance of the QAOA can only improve with depth p, but it is shown that for MIS the algorithm will fail to pass a certain performance barrier if 2p is less than w*log(n)/log(d/ln(2)) for any w < 1, with d big enough. The quantum algorithm consists of p unitaries that each respect the locality of the underlying graph. With a fixed average degree d, this means that each qubit typically has an influence sphere of roughly d^p other qubits. For qubits further than 2p apart on the graph these influence spheres do not intersect, and measurements of these qubits can be shown to be uncorrelated. However, if p is large enough that d^p exceeds n, these arguments do not apply, and there is no indication that the QAOA will fail.
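
The influence-sphere argument can be illustrated numerically: a breadth-first search on a random d-regular graph shows how quickly the radius-p neighborhood of a single qubit grows toward n (a sketch with arbitrary choices of n and d, not the papers' parameters):

```python
import networkx as nx

# Growth of the radius-p "influence sphere" of one qubit on a random
# d-regular graph (n and d are arbitrary choices for illustration).
n, d = 100_000, 5
G = nx.random_regular_graph(d, n, seed=0)

for p in range(1, 9):
    ball = nx.single_source_shortest_path_length(G, 0, cutoff=p)
    print(f"p={p}: influence sphere ~ {len(ball) - 1} qubits"
          f" (d^p = {d**p}), d^p exceeds n: {d**p > n}")
```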

The Overlap Gap Property (OGP) states that for a given random graph the intersection of any two large independent sets is either big or small; there is no middle ground. Using the OGP and the locality of the QAOA, it is shown that if p is less than a d-dependent constant times log(n), the QAOA cannot do better than finding an independent set of size 0.854 times the optimal, for d large. Because the logarithm grows slowly, even at one million qubits the algorithm can only be shown to be blocked if p is in the single digits. At higher p the algorithm “sees” the whole graph, and there is no indication that performance is limited.
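
As a toy illustration of the quantity behind the OGP (the property itself is an asymptotic statement about large random graphs, which a tiny example cannot demonstrate), one can enumerate the largest independent sets of a small random graph and histogram their pairwise overlaps:

```python
import itertools
import networkx as nx

# Pairwise overlaps between the largest independent sets of a tiny
# random graph; OGP predicts a bimodal overlap distribution only
# asymptotically, so this merely illustrates the quantity studied.
G = nx.gnp_random_graph(16, 0.25, seed=1)

def independent(nodes):
    return not any(G.has_edge(u, v)
                   for u, v in itertools.combinations(nodes, 2))

# Collect all independent sets within one vertex of the maximum size.
sets = []
for r in range(16, 0, -1):
    found = [frozenset(c) for c in itertools.combinations(G.nodes, r)
             if independent(c)]
    if found:
        sets = found + [frozenset(c)
                        for c in itertools.combinations(G.nodes, r - 1)
                        if independent(c)]
        print(f"maximum independent set size: {r}")
        break

overlaps = [len(a & b) for a, b in itertools.combinations(sets, 2)]
print("overlap histogram:",
      {k: overlaps.count(k) for k in sorted(set(overlaps))})
```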

Worst-case instances that exploit the QAOA’s weaknesses can also be constructed explicitly. By the construction of the operators C and B, the QAOA is an inherently local quantum algorithm: conjugating a single-qubit operator by one layer produces an operator involving only that qubit and those connected to it on the graph, so the circuit remains shallow. This locality can be exploited to construct examples where the QAOA’s performance is provably below optimal. For example, for Max-Cut the authors show that when p is a small enough constant times log(n), the approximation ratio is no better than 1/2 for large d, and for MIS the approximation ratio goes to 0 at large d.
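
For context on the 1/2 figure: that ratio is exactly what a uniformly random assignment achieves in expectation for Max-Cut, since each edge is cut with probability 1/2. A small brute-force sketch on an arbitrary toy graph makes the baseline concrete:

```python
import itertools
import random

# For Max-Cut, assigning each vertex a random side cuts each edge with
# probability 1/2, so E[cut] = |E|/2; comparing with the brute-force
# optimum on a toy graph shows the ratio the theorem pins the QAOA at.
random.seed(0)
n = 12
edges = [(u, v) for u, v in itertools.combinations(range(n), 2)
         if random.random() < 0.3]

def cut_size(bits):
    return sum(bits[u] != bits[v] for u, v in edges)

opt = max(cut_size(b) for b in itertools.product([0, 1], repeat=n))
print(f"|E| = {len(edges)}, random baseline = {len(edges) / 2}")
print(f"optimum = {opt}, baseline ratio = {len(edges) / 2 / opt:.3f}")
```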

This is an important result for the problem of MIS on random graphs that, although not directly generalizable to other problems, is still valuable for establishing bounds on problems the QAOA is applied to. Knowing that, in the average case for MIS, the QAOA needs to see the whole graph, and therefore requires a large p, will change the way the QAOA and its potential are viewed for certain problems, and these papers illustrate methods to quantify that.

Published in Blog

Qu&Co comments on this publication:

In this paper, Venturelli et al. present a quantum annealing solver for the renowned job-shop scheduling problem (JSP). They formulate the problem as a time-indexed quadratic unconstrained binary optimization (QUBO) problem, and several pre-processing and graph-embedding strategies are employed to compile optimally parametrized families of the JSP for scheduling instances of up to six jobs and six machines on the D-Wave Systems Vesuvius (DW2) processor. Problem simplifications and partitioning algorithms are discussed, and the results from the processor are compared against state-of-the-art global-optimum solvers.
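
A minimal sketch of the time-indexed QUBO encoding, assuming a toy 2-job, 2-machine instance and a hypothetical uniform penalty weight A; the paper's actual pre-processing, parametrization, and embedding are considerably more involved:

```python
from collections import defaultdict

# Time-indexed QUBO for a toy 2-job, 2-machine JSP. Binary variable
# x[i, t] = 1 iff operation i starts at time t; A is a hypothetical
# uniform penalty weight. Constant offsets are dropped.
ops = [(0, 0, 1), (0, 1, 1),    # job 0: machine 0 then machine 1
       (1, 1, 1), (1, 0, 1)]    # job 1: machine 1 then machine 0
T = 4                            # time horizon
A = 10.0                         # penalty weight (assumption)

Q = defaultdict(float)

def add(u, v, w):
    Q[tuple(sorted((u, v)))] += w

# "Start exactly once": A * (sum_t x[i,t] - 1)^2 for each operation i.
for i in range(len(ops)):
    for t in range(T):
        add((i, t), (i, t), -A)
        for t2 in range(t + 1, T):
            add((i, t), (i, t2), 2 * A)

# Precedence: the next operation of a job cannot start before this one ends.
for i in range(len(ops) - 1):
    if ops[i][0] == ops[i + 1][0]:
        for t in range(T):
            for t2 in range(t + ops[i][2]):
                add((i, t), (i + 1, t2), A)

# Machine exclusivity: operations on the same machine must not overlap.
for i in range(len(ops)):
    for j in range(i + 1, len(ops)):
        if ops[i][1] == ops[j][1]:
            for t in range(T):
                for t2 in range(T):
                    if not (t2 >= t + ops[i][2] or t >= t2 + ops[j][2]):
                        add((i, t), (j, t2), A)

print(f"{len(ops) * T} binary variables, {len(Q)} QUBO terms")
```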

Published in Blog

Qu&Co comments on this publication:

Mean-variance portfolio optimization problems are traditionally solved as continuous-variable problems. However, for assets that can only be traded in large lots, or for asset managers who are constrained to trading large blocks of assets, solving the continuous problem yields only an approximation. The discrete problem is expected to provide better results, but it is non-convex due to the fragmented nature of the domain and is therefore much harder to solve. In this paper, Rosenberg et al. attempt to solve a discrete multi-period portfolio optimization problem using D-Wave Systems' quantum annealer. They derive a formulation of the problem, discuss several possible integer encoding schemes, and present numerical examples that show high success rates. They also present insight into how results may be improved using suitable software enhancements, and why current quantum annealing technology limits the size of problem that can be successfully solved today. The formulation presented is specifically designed to be scalable, with the expectation that as quantum annealing technology improves, larger problems will be solvable using the same techniques.
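
A sketch of one ingredient of the discrete formulation: encoding integer lot holdings in binary variables and expanding a single-period mean-variance objective into QUBO form. The data and encoding choices below are hypothetical; the paper's multi-period formulation adds transaction costs and further constraints.

```python
import numpy as np

# Encode integer lot holdings h_i in {0,...,3} with two binary variables
# each, then expand risk_aversion * h'Sigma h - mu'h into QUBO form.
# All data below is randomly generated toy data.
rng = np.random.default_rng(0)
n_assets, n_bits = 3, 2
mu = rng.uniform(0.01, 0.1, n_assets)            # expected returns
S = rng.uniform(-0.1, 0.1, (n_assets, n_assets))
Sigma = S @ S.T                                  # valid covariance matrix
risk_aversion = 0.5

# h = E @ x for the flattened binary vector x, with E holding the 2^b weights.
E = np.zeros((n_assets, n_assets * n_bits))
for i in range(n_assets):
    for b in range(n_bits):
        E[i, i * n_bits + b] = 2 ** b

Q = risk_aversion * E.T @ Sigma @ E
Q -= np.diag(mu @ E)          # linear terms sit on the diagonal (x_i^2 = x_i)

def qubo_energy(k):
    x = np.array([(k >> j) & 1 for j in range(n_assets * n_bits)])
    return x @ Q @ x

best = min(range(2 ** (n_assets * n_bits)), key=qubo_energy)
x = np.array([(best >> j) & 1 for j in range(n_assets * n_bits)])
print("optimal holdings (lots):", (E @ x).astype(int))
```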

Published in Blog

Qu&Co comments on this publication:

In this paper, Andrew Lucas provides Ising formulations for many NP-complete and NP-hard problems, including all of Karp's 21 NP-complete problems. In each case, the required number of spins is at most cubic in the size of the problem. This work may be useful in designing adiabatic quantum optimization algorithms.
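
One of the simplest formulations in the paper is number partitioning: with one spin per number, the Ising Hamiltonian H = (sum_i n_i*s_i)^2 has ground-state energy zero exactly when the numbers split into two balanced subsets. A brute-force sketch:

```python
import itertools

# Number partitioning: one spin s_i per number n_i, and
# H = (sum_i n_i * s_i)^2 is zero exactly when the two subsets balance.
numbers = [4, 7, 13, 1, 5, 2, 8]

def energy(spins):
    return sum(n * s for n, s in zip(numbers, spins)) ** 2

best = min(itertools.product([-1, 1], repeat=len(numbers)), key=energy)
subset_a = [n for n, s in zip(numbers, best) if s == 1]
subset_b = [n for n, s in zip(numbers, best) if s == -1]
print(f"H = {energy(best)}; partition: {subset_a} vs {subset_b}")
```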

Published in Blog

Qu&Co comments on this publication:

In recent years, many academics and corporates have focused on solving combinatorial optimization problems on quantum-annealing devices like those offered by D-Wave. Now that noisy intermediate-scale quantum (NISQ) gate-based quantum processors (like those of Google, IBM, Rigetti and Intel) are nearing the moment of quantum supremacy, it is interesting to learn what gate-based quantum computers can bring to combinatorial optimization problems. In this paper, Zahedinejad et al. provide a survey of the approaches to solving different types of combinatorial optimization problems, in particular quadratic unconstrained binary optimization (QUBO) problems, on a gate-model quantum computer. They focus on four different approaches: digitized adiabatic quantum computing, global quantum optimization algorithms, quantum algorithms that approximate the ground state of a general QUBO problem, and quantum sampling.
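
A primitive shared by all four approaches is mapping a QUBO onto an Ising (Pauli-Z) Hamiltonian via x_i = (1 - z_i)/2, which is what a gate-model circuit then implements. A sketch with a toy cost matrix:

```python
import numpy as np

# Map a QUBO, min x'Qx over x in {0,1}^n, to an Ising Hamiltonian over
# z in {-1,+1}^n via x_i = (1 - z_i)/2; the resulting fields h and
# couplings J define the Pauli-Z cost operator a circuit implements.
Q = np.array([[1.0, -2.0, 0.0],
              [0.0,  3.0, 1.5],
              [0.0,  0.0, -1.0]])
Q = (Q + Q.T) / 2                      # symmetrize the toy cost matrix

n = Q.shape[0]
h = np.zeros(n)                        # coefficient of Z_i
J = np.zeros((n, n))                   # coefficient of Z_i Z_j
offset = 0.0
for i in range(n):
    for j in range(n):
        if i == j:                     # Q_ii x_i = Q_ii (1 - z_i)/2
            h[i] -= Q[i, i] / 2
            offset += Q[i, i] / 2
        else:                          # Q_ij x_i x_j, expanded
            J[i, j] += Q[i, j] / 4
            h[i] -= Q[i, j] / 4
            h[j] -= Q[i, j] / 4
            offset += Q[i, j] / 4

# Verify the two energy functions agree on every assignment.
for k in range(2**n):
    x = np.array([(k >> b) & 1 for b in range(n)])
    z = 1 - 2 * x
    assert np.isclose(x @ Q @ x, z @ J @ z + h @ z + offset)
print("QUBO and Ising energies agree on all assignments")
```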

Published in Blog

Qu&Co comments on this publication:

In this paper, Matthew Hastings presents a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT.  While the time required is still exponential, the algorithm provably outperforms Grover's algorithm assuming a mild condition on the number of low energy states of the target Hamiltonian.

Published in Blog

Qu&Co comments on this publication:

Quantum annealers such as the D-Wave 2X allow solving NP-hard optimization problems that can be expressed as quadratic unconstrained binary optimization (QUBO) programs. However, the relatively small number of available qubits poses a severe limitation on the range of problems that can be solved. In this paper, Hahn et al. explore the suitability of preprocessing methods for reducing the sizes of the input programs and thereby the number of qubits required for their solution on quantum computers. Specifically, preprocessing reductions are discussed for the maximum clique and maximum cut problems.
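
One standard reduction of this kind for maximum clique: a vertex of degree less than k - 1 cannot belong to a k-clique, so low-degree vertices can be peeled off (a k-core computation) before the QUBO is built, directly reducing the qubit count. This sketch is illustrative and not necessarily one of the paper's exact reductions:

```python
import networkx as nx

# Peel off vertices that cannot belong to a k-clique (degree < k - 1):
# a (k-1)-core computation that shrinks the instance, and hence the
# number of qubits, before any QUBO is built. Toy graph and k.
G = nx.gnp_random_graph(200, 0.05, seed=3)
k = 4                                   # clique size lower bound sought

H = nx.k_core(G, k - 1)                 # keep vertices of degree >= k - 1
print(f"{G.number_of_nodes()} vertices reduced to {H.number_of_nodes()},"
      f" saving {G.number_of_nodes() - H.number_of_nodes()} qubits")
```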

Published in Blog

Qu&Co comments on this publication:

In this paper, Djidjev et al. evaluate the performance of the D-Wave 2X quantum annealer on two NP-hard graph problems: clique finding and graph partitioning. Overall, they conclude that general problems that can be mapped onto the D-Wave architecture are typically still too small to show a quantum speedup (although the D-Wave does provide solutions of similar quality to the classical solvers). Compared with simple simulated annealing algorithms, the D-Wave is considerably faster, and selected instances especially designed to fit D-Wave's particular Chimera architecture can be solved orders of magnitude faster than with classical techniques.

Published in Blog

Qu&Co comments on this publication:

Various heuristic quantum optimization approaches have been suggested for solving combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that need to be routinely solved in real-world applications. Usually the performance of the heuristic approach is examined; however, an equally important aspect is its application to current hardware. During the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (10^-3 to 10^-4), which makes the computation inefficient.

This paper reviews existing approaches and develops new methods or improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines; thus, optimized circuits for those primitives should be of interest regardless of which heuristic is most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Instead of investigating how many times those primitives must be repeated, one should focus on the best strategies for realizing those primitives within a fault-tolerant cost model. The goal of this paper is to estimate the performance of an early universal quantum computer for key steps of combinatorial optimization.

These bottlenecks are compiled for several families of optimization problems, and the authors report for how long, and for what system sizes, one can run these heuristics in the surface code given a range of resource budgets. The results discourage the notion that any quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface-code processors, without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown between error-corrected quantum computation and classical computation. Based on these results, we will either need quantum optimization algorithms that afford speedups much better than quadratic, or significant improvements in the way error correction is realized.

For example, the authors calculate that to implement problems between N = 64 and N = 1024, hundreds of thousands of physical qubits are required when physical gate error rates are on the order of 10^-4, and sometimes over a million are required for physical gate error rates on the order of 10^-3. Even more concerning is that the number of updates achieved in a day (given realistic cycle times for the error-correcting codes) is relatively low: on the order of ten thousand updates for the smallest instances considered of the cheapest cost functions. With such overheads, these heuristics would need to yield dramatically better improvements in the objective function per step than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface-code processors.
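
A back-of-envelope sketch of where such update rates come from; the cycle time, code distance, and logical depth below are placeholder assumptions, not the paper's parameters:

```python
# Back-of-envelope update rate; all three numbers are placeholder
# assumptions, not the paper's parameters.
cycle_time_s = 1e-6          # one surface code cycle
code_distance = 30           # ~d cycles per logical time step
logical_depth = 3e5          # logical time steps per heuristic update

seconds_per_update = cycle_time_s * code_distance * logical_depth
updates_per_day = 24 * 3600 / seconds_per_update
print(f"{seconds_per_update:.1f} s/update -> {updates_per_day:,.0f} updates/day")
```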

Published in Blog
