
Quantum-computing related developments

On this page we post about interesting quantum-computing-related research and news that we are following.


Compilation of Fault-Tolerant Quantum Heuristics for Combinatorial Optimization

Quantum error correction

Various heuristic quantum optimization approaches have been proposed for combinatorial optimization problems, since the NP-hardness of such problems makes heuristics the only viable option for many problems that must be routinely solved in real-world applications. Usually the performance of a heuristic is examined; an equally important aspect, however, is how it maps onto current hardware. In the NISQ era, hardware with only a limited number of qubits is available and quantum error correction cannot be fully exploited. Furthermore, the error rates of the quantum circuitry are still high (on the order of 10⁻³ to 10⁻⁴), which makes deep computations unreliable.

This paper reviews existing approaches and develops new methods or improvements for many prominent approaches to combinatorial optimization on a small quantum computer, by compiling circuits that implement them and optimizing their fault-tolerant realizations. Many of these methods are bottlenecked by calls to the same subroutines, so optimized circuits for those primitives are of interest regardless of which heuristic proves most effective in practice. In essentially all heuristic approaches to quantum optimization there is a primitive that is repeated many times in order to perform the optimization. Rather than asking how many times those primitives must be repeated, the authors focus on the best strategies for realizing them within a fault-tolerant cost model. The goal of the paper is to estimate the performance of an early universal quantum computer on the key steps of combinatorial optimization.
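To make the flavor of such a fault-tolerant cost model concrete, here is a minimal back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption rather than one of the paper's compiled counts; it only shows how per-primitive gate costs multiply out into total runtime.

```python
# Minimal fault-tolerant cost-model sketch. All numbers are illustrative
# assumptions (not the paper's compiled counts), chosen only to show how
# the cost of one repeated primitive dominates the total runtime.

surface_code_cycle = 1e-6      # seconds per code cycle (assumption)
code_distance = 25             # surface-code distance (assumption)
toffolis_per_primitive = 1e4   # Toffoli/T cost of one compiled primitive (assumption)
repetitions = 1e5              # times the heuristic repeats the primitive (assumption)

# In the surface code, non-Clifford gates are executed sequentially via
# magic-state distillation, costing on the order of d code cycles each.
seconds_per_toffoli = code_distance * surface_code_cycle
total_runtime = repetitions * toffolis_per_primitive * seconds_per_toffoli

print(f"~{total_runtime / 86400:.2f} days of wall-clock runtime")
```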

These bottlenecks are compiled for several families of optimization problems, and the authors report for how long, and for what system sizes, one can run these heuristics in the surface code given a range of resource budgets. The results discourage the notion that a quantum optimization heuristic realizing only a quadratic speedup will achieve an advantage over classical algorithms on modest superconducting-qubit surface-code processors without significant improvements in the implementation of the surface code. The essential reason is the substantial constant-factor slowdown between error-corrected quantum computation and classical computation. Based on these results, we will either need quantum optimization algorithms that afford much better than quadratic speedups, or significant improvements in the way we realize error correction.

For example, the authors calculate that implementing problems of size between N = 64 and N = 1024 requires hundreds of thousands of physical qubits when physical gate error rates are on the order of 10⁻⁴, and sometimes over a million when they are on the order of 10⁻³. Even more concerning, the number of updates achievable in a day (given realistic cycle times for the error-correcting codes) is relatively low: on the order of ten thousand updates for the smallest instances of the cheapest cost functions considered. With such overheads, these heuristics would need to yield dramatically better per-step improvements in the objective function than classical optimization heuristics. Therefore, barring significant advances in the implementation of the surface code (e.g., much faster state distillation), quantum optimization algorithms offering only a quadratic speedup are unlikely to produce any quantum advantage on the first few generations of superconducting-qubit surface-code processors.
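The arithmetic behind this pessimism is easy to reproduce. Below is a hedged sketch: t_q is an assumed per-update time chosen to be consistent with the roughly 10⁴ updates per day quoted above, and t_c assumes a classical machine performing about 10⁹ updates per second; neither number is taken from the paper.

```python
# Back-of-the-envelope break-even point for a quadratic speedup.
t_q = 10.0    # seconds per fault-tolerant update (assumption, ~10^4/day)
t_c = 1e-9    # seconds per classical update (assumption, ~GHz rate)

# Quadratic speedup: the quantum heuristic needs sqrt(M) updates where the
# classical one needs M, so break-even requires t_q * sqrt(M) = t_c * M.
M = (t_q / t_c) ** 2

runtime_years = M * t_c / (3600 * 24 * 365)
print(f"break-even after ~{M:.0e} classical updates, "
      f"i.e. ~{runtime_years:.0f} years of runtime for either machine")
```

With these assumed constants the break-even point sits at roughly 10²⁰ classical updates, i.e. thousands of years of runtime, which is the sense in which a bare quadratic speedup fails to pay for the error-correction overhead.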

Decoding quantum errors with subspace expansions

Quantum error correction

Today's state-of-the-art quantum computers are so-called NISQ (noisy intermediate-scale quantum) devices: they have a number of qubits that begins to rival what classical simulation of their output can handle, yet they are noisy and cannot yet achieve fault tolerance. The question is: are there methods that can sufficiently compensate for their noisy nature, enabling the emergence of quantum advantage on these devices? In recent years, many error correction and mitigation schemes have been developed, from Richardson extrapolation techniques that extrapolate results down to 'zero noise', to parity-check measurements and more. Typically, however, those techniques require additional complicated circuitry, ancillary qubits, pulse modifications, or calibration/tuning steps. In this paper, an alternative strategy is proposed, based on the general principle behind a class of methods called quantum subspace expansion (QSE). In this strategy, one performs clever post-processing of classical measurement data, with or without additional measurements, at most simple additional operations in the circuit, and no (scaling) ancillary qubits. The paper generalizes QSE error mitigation to any quantum computation, rather than restricting it to specific problem domains such as chemistry. Another interesting idea presented here is to use NISQ devices to experimentally study small quantum codes for later use in larger-scale, error-corrected machines, i.e., future FTQC (fault-tolerant quantum computing).
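The classical post-processing at the heart of QSE is a generalized eigenvalue problem built from measured matrix elements. The sketch below illustrates only that linear-algebra step, with random Hermitian matrices standing in for the quantities H_ij = ⟨ψ|O_i†HO_j|ψ⟩ and S_ij = ⟨ψ|O_i†O_j|ψ⟩ that would actually be estimated on the device:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
d = 4  # number of expansion operators O_i (toy value)

# H_ij = <psi| O_i^dag H O_j |psi>: Hermitian; random stand-in here.
A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
H = (A + A.conj().T) / 2

# S_ij = <psi| O_i^dag O_j |psi>: Hermitian, positive-definite overlap;
# built to be well-conditioned for this toy example.
B = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
S = B @ B.conj().T + d * np.eye(d)

# Solving H c = E S c projects the noisy state onto the expansion
# subspace, mitigating errors without extra qubits.
energies, coeffs = eigh(H, S)
print("mitigated ground-state estimate:", energies[0])
```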

Improved error threshold for surface codes with biased noise

Quantum error correction

Topological codes, and the surface code in particular, are popular choices for many quantum computing architectures because of their high error thresholds and local stabilizers. In this paper, Tuckett et al. show that a simple modification of the surface code can exhibit a fourfold gain in the error-correction threshold under a noise model in which Pauli Z errors (dephasing) occur more frequently than X or Y errors, which is common in many quantum architectures, including superconducting qubits. For pure dephasing an improved threshold of 43.7% is found (versus 10.9% for the optimal surface code), while 28.2% applies at a noise bias of 10 (a more realistic regime).
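For intuition on what a noise bias of 10 means, here is a small Python sketch of a biased Pauli channel, using the common parameterization η = p_Z / (p_X + p_Y); the function and parameter names are illustrative, not taken from the paper.

```python
import numpy as np

def sample_pauli_errors(n, p, eta, rng):
    """Draw i.i.d. Pauli errors on n qubits.

    p   : total physical error probability per qubit
    eta : bias p_Z / (p_X + p_Y); eta -> infinity is pure dephasing,
          eta = 0.5 recovers depolarizing noise (p_X = p_Y = p_Z).
    """
    p_z = p * eta / (1 + eta)
    p_x = p_y = p / (2 * (1 + eta))
    return rng.choice(["I", "X", "Y", "Z"], size=n,
                      p=[1 - p, p_x, p_y, p_z])

rng = np.random.default_rng(1)
print(sample_pauli_errors(10, p=0.3, eta=10, rng=rng))  # mostly I and Z
```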

ZX calculus as a language for surface code lattice surgery in quantum error correction

Quantum error correction

One of the most popular techniques for error correction is the surface code, with logical two-qubit operations realized via so-called lattice surgery. This popularity is explained, among other things, by its high estimated error-correction threshold of 1% and its relatively simple correction procedure. In this paper, de Beaudrap et al. demonstrate that lattice surgery is a model for the ZX calculus, an abstract graphical language for tensor networks. The ZX calculus therefore provides a ready-made practical 'language' for discussing computations realized using surface codes via lattice surgery.
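As a toy illustration of the correspondence, the ZX generators that lattice-surgery merges and splits are identified with can be written down as small linear maps. The numpy sketch below (basis conventions assumed for illustration, not necessarily the paper's) checks one graphical rewrite rule: splitting a patch and merging it back acts as the identity on the encoded qubit.

```python
import numpy as np

def z_spider(n_in, n_out, phase=0.0):
    """Z spider with n_in inputs and n_out outputs, as a matrix.

    Maps |0...0> -> |0...0> and |1...1> -> e^{i*phase}|1...1>,
    annihilating all other basis states.
    """
    mat = np.zeros((2**n_out, 2**n_in), dtype=complex)
    mat[0, 0] = 1.0
    mat[-1, -1] = np.exp(1j * phase)
    return mat

# Lattice-surgery merge of two patches ~ Z spider with 2 inputs, 1 output;
# split of one patch into two ~ Z spider with 1 input, 2 outputs
# (assumed convention for this sketch).
merge = z_spider(2, 1)
split = z_spider(1, 2)

# Split followed by merge is the identity on the logical qubit -- one of
# the rewrite rules the ZX calculus makes graphical.
print(np.allclose(merge @ split, np.eye(2)))  # True
```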