Displaying items by tag: Error correction

07 January 2021

The cost of universality

The authors of this work provide a comparative study of the overhead of state distillation and code-switching, enabling a deeper understanding of these two approaches to fault-tolerant universal quantum computation (FTQC). FTQC requires a gate set that includes at least one non-Clifford gate, which is typically realized through such protocols. State distillation relies on first producing many noisy encoded copies of a T state (magic state) and then processing them with Clifford operations to output a high-fidelity encoded version of the state. This high-fidelity T state can then be used to implement the non-Clifford T gate. Despite significant recent improvements, state generation and distillation remains a resource-intensive process. A compelling alternative is ‘code-switching’, for example via gauge fixing to a 3D topological code. However, moving to 3D architectures is experimentally difficult, even if it significantly reduces the overhead compared to state distillation.
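As an illustration of why high-fidelity T states are so valuable, the sketch below (plain numpy, not the paper's encoded protocol) shows the standard gate-teleportation trick: consuming one magic state |A> = T|+> with only a CNOT, a measurement and a conditional S correction applies the non-Clifford T gate to a data qubit.

```python
# Minimal numpy sketch of T-gate injection by gate teleportation: a magic state
# |A> = T|+> plus Clifford operations (CNOT, Z measurement, S correction)
# applies T to an arbitrary data qubit.
import numpy as np

T = np.diag([1, np.exp(1j * np.pi / 4)])
S = np.diag([1, 1j])
magic = T @ np.array([1, 1]) / np.sqrt(2)            # |A> = T|+>

rng = np.random.default_rng(7)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # random data qubit
psi /= np.linalg.norm(psi)

# Two-qubit state: data (qubit 0) tensor magic (qubit 1); CNOT data -> magic.
state = np.kron(psi, magic)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1], [0, 0, 1, 0]])
state = CNOT @ state

# Measure the magic-state qubit in the Z basis.
p0 = np.linalg.norm(state[[0, 2]]) ** 2              # amplitudes with qubit 1 = |0>
outcome = 0 if rng.random() < p0 else 1
kept = state[[0, 2]] if outcome == 0 else state[[1, 3]]
kept /= np.linalg.norm(kept)

# Clifford correction: apply S to the data qubit when the outcome is 1.
if outcome == 1:
    kept = S @ kept

# The data qubit now carries T|psi> (up to a global phase).
print(abs(np.vdot(T @ psi, kept)))                   # ~1.0
```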

This work estimates the resources needed to prepare high-fidelity T states encoded in the 2D color code, via either state distillation or code-switching, assuming that both approaches are implemented using noisy quantum-local operations in 3D. In particular, the two approaches were compared by simulating noisy circuits built from single-qubit state preparations, unitaries and measurements, and two-qubit unitaries between nearby qubits.
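For readers unfamiliar with the term, ‘circuit noise’ usually means that every operation in the circuit can fail independently with some probability p. The snippet below sketches this standard model; the exact gate set and noise parametrization used in the paper are not reproduced here.

```python
# Illustrative sketch of a standard circuit-level (circuit-noise) error model:
# each location fails with probability p; single-qubit operations draw a uniform
# Pauli from {X, Y, Z}, two-qubit gates draw one of the 15 non-identity two-qubit
# Paulis, and preparations/measurements are flipped.
import itertools
import random

PAULIS_1Q = ["X", "Y", "Z"]
PAULIS_2Q = [a + b for a, b in itertools.product("IXYZ", repeat=2) if a + b != "II"]

def sample_faults(circuit, p, rng=random):
    """circuit: list of (kind, qubits) tuples, kind in {'prep', '1q', '2q', 'meas'}."""
    faults = []
    for loc, (kind, qubits) in enumerate(circuit):
        if rng.random() >= p:
            continue                                  # this location is fault-free
        if kind == "2q":
            faults.append((loc, rng.choice(PAULIS_2Q), qubits))
        elif kind == "1q":
            faults.append((loc, rng.choice(PAULIS_1Q), qubits))
        else:                                         # 'prep' or 'meas': outcome flip
            faults.append((loc, "flip", qubits))
    return faults

# Example: a syndrome-extraction-like snippet (p is exaggerated for illustration).
circuit = [("prep", [4]), ("2q", [0, 4]), ("2q", [1, 4]), ("meas", [4])]
print(sample_faults(circuit, p=0.05))
```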

It is reported that the circuit-noise threshold achieved for the code-switching implementation in this work is the highest presented to date and is equivalent to the circuit-noise threshold for the state distillation scheme with the 2D color code.

A direct way to switch between the 2D color code and the 3D subsystem color code is explored. The method exploits a particular gauge fixing of the 3D subsystem color code for which the code state admits a local tensor-product structure in the bulk and can therefore be prepared in constant time. The restriction decoder was adapted to the 3D color code with a boundary and produced a high error threshold. However, the failure probability of implementing the T gate with code-switching is found to be higher, and the estimated T-gate threshold lower, than the values calculated for state distillation.

The results suggest that code-switching does not offer substantial savings over state distillation in terms of either space overhead, i.e., the number of physical qubits required, or space-time overhead (the space overhead multiplied by the number of physical time units required) for most circuit-noise error rates. Code-switching holds a lot of promise, but it depends on the clever design of codes with transversal implementations of non-Clifford gates, and such protocols are at the moment inferior to state distillation. Other protocols also rival state distillation, such as the pieceable fault-tolerant implementation of the CCZ gate, which underlines the importance of having a protocol for non-Clifford gate implementation with a relatively small number of qubits and operations.
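For concreteness, the toy calculation below shows how the two figures of merit combine; the qubit and time-unit counts are placeholders, not numbers taken from the paper.

```python
# Hypothetical numbers, only to illustrate the two figures of merit defined above:
# space overhead = physical qubits used, space-time overhead = qubits x time units.
qubits_distillation, rounds_distillation = 5000, 300   # assumed values, not from the paper
qubits_switching, rounds_switching = 4000, 500         # assumed values, not from the paper

for name, q, t in [("distillation", qubits_distillation, rounds_distillation),
                   ("code-switching", qubits_switching, rounds_switching)]:
    print(f"{name}: space = {q} qubits, space-time = {q * t} qubit-rounds")
```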
Published in Blog
In this paper, researchers from Amazon AWS and IQIM present an architecture for a fault-tolerant quantum computer based on hybrid acoustic-electro devices implementing a stabilized cat code with highly biased noise, dominated by dephasing. To combat these sources of noise, they concatenate the cat code with an outer code that focuses mostly on correcting the dephasing errors, based on the repetition code and the surface code. The assumed error model is critical, since it affects the fidelities of all required operations (initialization, measurement, gates, etc.) on which the comparison with previous works is based. Therefore, a detailed error analysis of measurements and gates, including the CNOT and Toffoli gates, is presented according to this realistic noise model.
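The following toy sketch illustrates the idea behind such an outer code (it is not the paper's full construction): once the cat qubit suppresses bit flips, a simple phase-flip repetition code with stabilizers X_i X_{i+1} and a majority-vote decoder already suppresses the remaining dephasing errors rapidly with the code distance.

```python
# Toy Monte Carlo of an outer phase-flip repetition code: i.i.d. Z errors on n
# qubits, syndrome from the X_i X_{i+1} checks, majority-vote-style decoding.
import numpy as np

def logical_z_rate(n, p_z, shots=100_000, rng=None):
    rng = rng or np.random.default_rng(0)
    failures = 0
    for _ in range(shots):
        e = rng.random(n) < p_z                      # i.i.d. Z errors on the n cat qubits
        s = e[:-1] ^ e[1:]                           # syndrome of the X_i X_{i+1} checks
        # Only two error patterns match this syndrome: 'cand' and its complement.
        cand = np.concatenate(([False], np.cumsum(s) % 2)).astype(bool)
        guess = cand if cand.sum() <= n - cand.sum() else ~cand
        failures += not np.array_equal(guess, e)     # residual error = logical Z
    return failures / shots

for n in (3, 5, 7):                                  # repetition-code distance
    print(n, logical_z_rate(n, p_z=0.05))            # logical rate drops with n
```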

Fault-tolerant quantum computing requires a universal set of gates, which can be divided into two categories: gates that belong to the Clifford group and gates that do not. Clifford gates can typically be implemented easily for a variety of codes; non-Clifford gates, however, require sophisticated protocols to create and then purify in order to increase their fidelity, such as magic-state preparation and distillation. A novel magic-state distillation protocol for Toffoli states is introduced here (injected via lattice surgery), which, in combination with the error-correction techniques used, results in a lower overhead compared to previous works. In fact, the factory that generates the magic states is estimated to account for only about 7% of the total resource overhead requirements, with the other 93% coming from the rotated surface code.

In terms of quantum advantage, the authors find that with around 1,000 superconducting circuit components, one could construct a fault-tolerant quantum computer that can run circuits which are intractable for classical supercomputers.

However, when comparing this work to other related works, one should keep in mind that the assumed gate fidelities and the assumed error model can greatly affect the presented results. The error model in this work assumes Z error rates that are far less optimistic than those typically assumed for transmon qubits, and due to the cat-state encoding there is bit-flip noise suppression that can naturally lead to increased performance. Furthermore, transmon-architecture resource estimates are based on a simple depolarizing noise model, whereas the noise model here has been derived from first-principles modeling of the hardware, making the analysis more realistic.

Moreover, the authors claim to require a comparable number of qubits, or up to 3 times fewer, than other superconducting transmon qubit architectures, based on the assumed gate fidelities. Similar runtime figures are also reported compared to other superconducting transmon qubit architectures; however, an important distinction of this protocol is that the magic states are created slightly faster than they can be transported to the main algorithm, whereas in other architectures the main algorithm has to wait for the magic states to be created, which is a bottleneck in the runtime.

Although such a protocol shows promise for fault-tolerant quantum computing, the injection of magic states comes with an additional qubit cost for data access and routing. The choice of routing solution leads to a lower bound on runtime execution, so more careful optimization of routing costs and speed of magic state injection is crucial.
Published in Blog

Qu&Co comments on this publication:

Recently, promising experimental results have been shown for quantum-chemistry calculations using small, noisy quantum processors. As full-scale fault-tolerant error correction is still many years away, near-term quantum computers will have a limited number of qubits, and each qubit will be noisy. Methods that reduce noise and correct errors without doing full error correction on every qubit will help extend the range of interesting problems that can be solved in the near term. In this paper, Otten et al. present a scheme for accounting for (and removing) errors in observables determined from quantum algorithms and apply this scheme to the variational quantum eigensolver algorithm, simulating the calculation of the ground-state energy of equilibrium H2 and LiH in the presence of several noise sources, including amplitude damping, dephasing, thermal noise, and correlated noise. They show that their scheme reduces the needed quality of the qubits by up to two orders of magnitude.
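A minimal example in the same spirit, though not the authors' exact scheme: if the dominant noise channel is known, its effect on an observable can be modeled and inverted in classical post-processing. For single-qubit amplitude damping of strength gamma, the expectation value <Z> is mapped to (1 - gamma)<Z> + gamma, which is trivially inverted.

```python
# Sketch: model how a known noise channel shifts an observable, then undo the
# shift classically.  Amplitude damping maps <Z> to (1 - gamma)<Z> + gamma.
import numpy as np

Z = np.diag([1.0, -1.0])
gamma = 0.05                                         # assumed damping strength
K0 = np.array([[1, 0], [0, np.sqrt(1 - gamma)]])     # Kraus operators of the channel
K1 = np.array([[0, np.sqrt(gamma)], [0, 0]])

rng = np.random.default_rng(1)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

rho_noisy = K0 @ rho @ K0.conj().T + K1 @ rho @ K1.conj().T
z_ideal = np.trace(Z @ rho).real
z_noisy = np.trace(Z @ rho_noisy).real
z_mitigated = (z_noisy - gamma) / (1 - gamma)        # invert the modeled shift
print(z_ideal, z_noisy, z_mitigated)                 # mitigated value matches the ideal one
```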

Published in Blog

Qu&Co comments on this publication:

Currently, the latest state-of-the-art quantum computers are so-called NISQ (noisy intermediate-scale quantum) devices, meaning they have a number of qubits that approaches the limit of what classical simulation of such systems can handle, yet the devices are noisy and no fault tolerance can be achieved yet. The question is: are there methods that can sufficiently compensate for their noisy nature, enabling the emergence of quantum advantage on these devices? In recent years, many error-correction and error-mitigation schemes have been developed: from Richardson extrapolation techniques that extrapolate results down to 'zero noise', to parity-check measurements and more. But typically, those techniques require additional complicated circuitry, ancillary qubits, pulse modifications, or calibration/tuning steps. In this paper, an alternative strategy based on the general principle of a class of methods called quantum subspace expansion (QSE) is proposed. In this strategy, one performs clever post-processing of classical data, with or without additional measurements, using (at most) simple additional operations in the circuit and no (scaling number of) ancillary qubits. This paper generalizes the application of QSE error mitigation to any quantum computation, not restricting itself to problem-specific settings such as chemistry. Another interesting idea presented here is to use NISQ devices to experimentally study small quantum codes for later use in larger-scale quantum computers implementing error-correcting codes, such as future FTQC (fault-tolerant quantum computing) devices.
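A minimal sketch of the generic QSE recipe (not the specific constructions proposed in this paper): measure the matrices H_ij = Tr(M_i^dag H M_j rho) and S_ij = Tr(M_i^dag M_j rho) for a small set of expansion operators M_i, then classically solve the generalized eigenvalue problem H c = E S c and take the lowest eigenvalue as the mitigated energy.

```python
# Generic quantum subspace expansion on a toy problem: a depolarized ground state
# of H = Z is fully corrected by expanding in the single-qubit Pauli basis.
import numpy as np
from scipy.linalg import eigh

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0 + 0j, -1.0])

H = Z                                                # toy Hamiltonian, ground energy -1
ground = np.array([0, 1], dtype=complex)             # |1> is the ground state
rho_ideal = np.outer(ground, ground.conj())
p = 0.2                                              # depolarizing error on the prepared state
rho = (1 - p) * rho_ideal + p * I2 / 2

expansion = [I2, X, Y, Z]                            # expansion operators M_i
n = len(expansion)
Ht = np.zeros((n, n), dtype=complex)
St = np.zeros((n, n), dtype=complex)
for i, Mi in enumerate(expansion):
    for j, Mj in enumerate(expansion):
        Ht[i, j] = np.trace(Mi.conj().T @ H @ Mj @ rho)   # H_ij = Tr(Mi^dag H Mj rho)
        St[i, j] = np.trace(Mi.conj().T @ Mj @ rho)       # S_ij = Tr(Mi^dag Mj rho)

energies = eigh(Ht, St, eigvals_only=True)           # generalized eigenvalue problem
print(np.trace(H @ rho).real)                        # noisy estimate: -0.8
print(energies[0])                                   # QSE-mitigated estimate: ~ -1.0
```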

Published in Blog

Qu&Co comments on this publication:

Topological codes, and the surface code in particular, are popular choices for many quantum computing architectures because of their high error thresholds and local stabilizers. In this paper, Tuckett et al. show that a simple modification of the surface code can exhibit a fourfold gain in the error-correction threshold for a noise model in which Pauli Z errors (dephasing) occur more frequently than X or Y errors, which is common in many quantum architectures, including superconducting qubits. For pure dephasing an improved threshold of 43.7% is found (versus 10.9% for the optimal surface code), while 28.2% is obtained in the more realistic regime of a noise-bias ratio of 10.
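For reference, the bias is commonly parametrized as eta = p_Z / (p_X + p_Y) with p_X = p_Y, so that eta = 0.5 corresponds to standard depolarizing noise and eta going to infinity to pure dephasing. The helper below (an illustrative assumption about that convention, not code from the paper) converts a total error rate and bias into the three Pauli rates.

```python
# Split a total Pauli error rate p_total into (p_X, p_Y, p_Z) for a given bias
# eta = p_Z / (p_X + p_Y), assuming p_X = p_Y.
def biased_pauli_rates(p_total, eta):
    p_z = p_total * eta / (eta + 1)
    p_x = p_y = p_total / (2 * (eta + 1))
    return p_x, p_y, p_z

print(biased_pauli_rates(0.10, 0.5))   # depolarizing split: all three rates equal
print(biased_pauli_rates(0.10, 10))    # the biased regime quoted above: Z dominates
```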

Published in Blog

Qu&Co comments on this publication:

One of the most popular techniques for error correction is the surface code, with logical two-qubit operations realized via so-called lattice surgery. This popularity is explained, among other things, by its high estimated error-correction threshold of 1% and its relatively simple correction procedure. In this paper, De Beaudrap et al. demonstrate that lattice surgery is a model for the ZX calculus, an abstract graphical language for tensor networks. The ZX calculus therefore provides a ready-made practical 'language' for discussing computations realized using surface codes via lattice surgery.
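To make the lattice-surgery picture concrete, the numpy sketch below simulates one standard variant of the measurement pattern it realizes: a logical CNOT from control C to target T built from an ancilla prepared in |+>, a Z_C Z_A merge, an X_A X_T merge, a final Z measurement of the ancilla, and Pauli corrections conditioned on the outcomes. This is a generic textbook construction, not taken from the paper.

```python
# Measurement-based CNOT as realized by lattice surgery (one standard variant),
# checked numerically against a direct CNOT.
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])

def kron(*ops):
    out = np.array([[1.0 + 0j]])
    for op in ops:
        out = np.kron(out, op)
    return out

def measure(state, obs, rng):
    """Projective measurement of a +/-1 observable; returns (outcome, new state)."""
    p_plus = np.vdot(state, (np.eye(len(obs)) + obs) @ state).real / 2
    outcome = +1 if rng.random() < p_plus else -1
    state = (np.eye(len(obs)) + outcome * obs) / 2 @ state
    return outcome, state / np.linalg.norm(state)

rng = np.random.default_rng(3)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)   # control qubit C
psi /= np.linalg.norm(psi)
phi = rng.normal(size=2) + 1j * rng.normal(size=2)   # target qubit T
phi /= np.linalg.norm(phi)
plus = np.array([1, 1]) / np.sqrt(2)                 # ancilla patch A in |+>

state = kron(psi, plus, phi)                         # qubit order: C, A, T
m1, state = measure(state, kron(Z, Z, I2), rng)      # merge along Z_C Z_A
m2, state = measure(state, kron(I2, X, X), rng)      # merge along X_A X_T
m3, state = measure(state, kron(I2, Z, I2), rng)     # measure out the ancilla in Z

if m2 == -1:
    state = kron(Z, I2, I2) @ state                  # Z correction on the control
if m1 * m3 == -1:
    state = kron(I2, I2, X) @ state                  # X correction on the target

# Compare with an ideal CNOT acting on C (control) and T (target), ancilla left in |m3>.
CNOT_CT = kron((I2 + Z) / 2, I2, I2) + kron((I2 - Z) / 2, I2, X)
anc = np.array([1, 0]) if m3 == 1 else np.array([0, 1])
ideal = CNOT_CT @ kron(psi, anc, phi)
print(abs(np.vdot(ideal, state)))                    # ~1.0 (equal up to global phase)
```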

Published in Blog
