Displaying items by tag: Quantum hardware

Recently, a team from the University of Science and Technology of China (USTC) demonstrated an experiment on a photonic quantum computer that outperformed even the fastest classical supercomputer in a computational task. Such experiments target algorithms and hardware platforms that can demonstrate “quantum supremacy”, the regime in which a quantum computer outperforms any classical computer.

A photonic quantum computer harnesses particles of light (photons) and consists of a complex array of optical devices, such as light sources, beam splitters, mirrors and photon detectors, that shuttle photons around. In such a computer, the quantum computation is based on a process called boson sampling, a task deliberately designed to demonstrate quantum supremacy. Boson sampling asks for the distribution of photons at the output of a photonic interferometer. A quantum implementation of boson sampling solves the problem 'by itself', since the distribution of the measured outputs is the desired photon distribution. A classical computer, in contrast, must perform a large computation to find the photon distribution, and this computation grows with the size of the problem because the photons' quantum properties lead to an exponentially increasing number of possible distributions. If operated with large numbers of photons and many channels, the quantum computer produces a distribution of numbers that is too complex for a classical computer to calculate. In the new experiment, up to 76 photons traversed a network of 100 channels, a much larger scale than previously demonstrated, either experimentally or numerically.
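To make the classical side of the task concrete, here is a minimal Python sketch (our own illustration, not the USTC setup) of how boson-sampling probabilities are computed classically: the probability of a collision-free output pattern is |Perm(U_sub)|², the permanent of a submatrix of the interferometer unitary, and computing permanents is exactly what becomes intractable as the photon number grows.

```python
import numpy as np
from itertools import combinations, permutations

def permanent(M):
    # Brute-force permanent with O(n!) terms; only viable for a handful of
    # photons, which is exactly why classical boson sampling is hard at scale
    n = M.shape[0]
    return sum(np.prod([M[i, p[i]] for i in range(n)]) for p in permutations(range(n)))

def random_unitary(m, seed=0):
    # Random m x m unitary (QR decomposition of a complex Gaussian matrix),
    # standing in for the interferometer network
    rng = np.random.default_rng(seed)
    z = rng.normal(size=(m, m)) + 1j * rng.normal(size=(m, m))
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))

m, n = 4, 2                  # toy scale: 4 modes, 2 photons in input modes 0 and 1
U = random_unitary(m)
in_modes = [0, 1]

# Probability of each collision-free output pattern (one photon per output mode):
# |Perm(U_sub)|^2 with rows = output modes, columns = input modes
probs = {}
for out_modes in combinations(range(m), n):
    sub = U[np.ix_(out_modes, in_modes)]
    probs[out_modes] = abs(permanent(sub)) ** 2

print(sum(probs.values()))   # less than 1: bunched (multi-photon) outputs carry the rest
```

At 76 photons the permanents involved are 76×76, far beyond what this brute-force (or any known classical) approach can evaluate at scale.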

This claim of quantum supremacy reinforces what Google presented last year with their superconducting qubit-based quantum computer. The main difference between the two experiments in terms of the result is that the photonics experiment can create many more possible output states: ~10^30 of them compared to ~10^16. Such a large number makes it infeasible to calculate the whole probability distribution over outputs and store it for future generation of samples (something other researchers suggested as a rebuttal against Google's claims, but which certainly cannot hold in this new experiment).
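The quoted state-space sizes follow from a simple counting argument, assuming the photonic outputs are click/no-click patterns over the 100 modes and Google's outputs are bitstrings over 53 measured qubits (our reading of the numbers, not spelled out above):

```python
# Each of the 100 output modes reads "click" or "no click" with threshold
# detectors, and each of Sycamore's 53 qubits reads 0 or 1 on measurement.
photonic_modes = 100
sycamore_qubits = 53

photonic_outputs = 2 ** photonic_modes
sycamore_outputs = 2 ** sycamore_qubits

print(f"photonic: {photonic_outputs:.2e}")   # 1.27e+30, i.e. ~10^30
print(f"sycamore: {sycamore_outputs:.2e}")   # 9.01e+15, i.e. ~10^16
```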

Although researchers are currently looking for ways to obtain similar results with classical computers, these attempts have not yet been successful. The main concern around this quantum experiment is photon loss. It was reported that up to ~70% of the photons get lost on their way through the beam-splitter network, so only ~30% are detected. Typically, that amount of photon loss would be considered fatal for quantum supremacy. Furthermore, the classical simulations used for comparison require fixing the rate of noise and then letting the numbers of photons and modes go to infinity, whereas any real experiment has a fixed number of photons and modes (in USTC's case, ~50 and ~100 respectively).

Achieving quantum supremacy in such experiments does not indicate a definitive, general superiority of quantum computers over classical computers, since these problems are deliberately designed to be hard for classical computers. On the other hand, it would also be an understatement to call this experiment 'only a proof of principle', since boson sampling could have actual practical applications, for example solving specialized problems in quantum chemistry and mathematics.

Currently, most proposals in the literature apply boson sampling to the calculation of vibronic spectra or to finding dense subgraphs, but it is not certain whether these proposals will yield real speedups for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where boson sampling almost certainly does yield exponential speedups).

Future research will focus both on algorithm development, exploiting the particular characteristics of such a specialized quantum device, and on experimental improvements such as reduced photon loss, higher-quality sources and detectors, and larger numbers of modes. The described experiment is a promising indication for this sub-field of quantum computing, and we will keep a close eye on future developments.
Published in Blog
In this paper, researchers from Amazon AWS & IQIM present an architecture for a fault-tolerant quantum computer based on hybrid acoustic-electro devices that implement a stabilized cat code with highly biased noise, dominated by dephasing. To combat these sources of noise, they concatenate the cat code with an outer code that focuses mostly on correcting the dephasing errors, based on the repetition code and the surface code. The assumed error model is critical, since it affects the fidelities of all required operations (initialization, measurement, gates, etc.) on which the comparison to previous works is based. Therefore, a detailed error analysis of measurements and gates, including the CNOT and Toffoli gates, is presented according to this realistic noise model.

Fault-tolerant quantum computing requires a universal set of gates, which can be divided into two categories: gates that belong to the Clifford group and gates that do not. Clifford gates can typically be implemented easily for a variety of codes; non-Clifford gates, however, require sophisticated protocols to create and then purify in order to increase their fidelity, such as magic-state preparation/distillation protocols. A novel magic-state distillation protocol for Toffoli states (injected via lattice surgery) is introduced here, which, in combination with the error-correction techniques used, results in a lower overhead compared to previous works. In fact, it is estimated that the factory that generates the magic states accounts for only approximately 7% of the total resource overhead, with the other 93% coming from the rotated surface code.

In terms of quantum advantage, the authors find that with around 1,000 superconducting circuit components, one could construct a fault-tolerant quantum computer that can run circuits which are intractable for classical supercomputers.

However, when comparing this work to other related works, one should keep in mind that the assumed gate fidelities and the assumed error model can greatly affect the presented results. The error model in this work assumes Z error rates that are far less optimistic than typically assumed for transmon qubits, and the cat-state encoding suppresses bit-flip noise, which can naturally lead to increased performance. Furthermore, transmon architecture resource estimates are based on a simple depolarizing noise model, whereas the noise model here has been derived from first-principles modeling of the hardware, making the analysis more realistic.

Moreover, the authors claim that their architecture requires a comparable number of qubits to, or up to 3 times fewer qubits than, other superconducting transmon qubit architectures, based on the assumed gate fidelities. Similar runtime figures are also reported compared to other superconducting transmon qubit architectures; an important distinction of this protocol, however, is that the magic states are created slightly faster than they can be transported to the main algorithm, whereas in other architectures the main algorithm has to wait for the magic states to be created, which is a bottleneck in the runtime.

Although such a protocol shows promise for fault-tolerant quantum computing, the injection of magic states comes with an additional qubit cost for data access and routing. The choice of routing solution leads to a lower bound on runtime execution, so more careful optimization of routing costs and speed of magic state injection is crucial.

Qu&Co comments on this publication:

Quantum-dot-based spin qubits may offer significant advantages due to their potential for high densities, all-electrical operation, and integration onto an industrial platform. However, in quantum dots, charge noise and nuclear spin noise are dominant sources of decoherence and gate errors. Silicon naturally has few nuclear-spin isotopes, which can be removed through purification, and as a host material it enables single-qubit gate fidelities above 99%. In this paper, Watson et al. demonstrate a programmable two-qubit quantum processor in silicon by performing both the Deutsch-Jozsa and the Grover search algorithms.
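As a reminder of what such a two-qubit processor computes, here is a minimal numpy simulation (our own sketch, independent of the paper's hardware) of Grover's search on two qubits, where a single oracle-plus-diffusion iteration finds the marked item with certainty:

```python
import numpy as np

n = 2                       # two qubits, N = 4 basis states
N = 2 ** n
marked = 3                  # index of the "solution" state |11> (arbitrary choice)

# Uniform superposition over all basis states (Hadamard on each qubit)
state = np.full(N, 1 / np.sqrt(N))

# One Grover iteration: oracle (phase flip on the marked state) + diffusion
oracle = np.eye(N)
oracle[marked, marked] = -1
s = np.full(N, 1 / np.sqrt(N))
diffusion = 2 * np.outer(s, s) - np.eye(N)   # reflection about the mean

state = diffusion @ (oracle @ state)

probs = np.abs(state) ** 2
print(probs)                # the marked state now has probability ~1
```

For N = 4 one iteration suffices, which is part of why two-qubit Grover is a natural first demonstration on new hardware.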

Qu&Co comments on this publication:

So-called holonomic quantum gates based on geometric phases are robust against control errors. Zanardi and Rasetti first proposed adiabatic holonomic quantum computation (AHQC), which faces the unavoidable challenge that the long run-times needed for adiabatic evolution increase the vulnerability to decoherence. Therefore, non-adiabatic HQC schemes with much shorter gate times were proposed and realized in platforms based on trapped ions, NMR, superconducting circuits and nitrogen-vacancy centers in diamond. In this paper, Zhao et al. propose a non-adiabatic HQC scheme based on Rydberg atoms, which combines robustness to control errors, short gate times and long coherence times.

Qu&Co comments on this publication:

In this paper, Puri et al. propose an alternative to the typical quantum annealing architecture: a scalable network of all-to-all connected, two-photon driven Kerr-nonlinear resonators. Each of these resonators encodes an Ising spin in a robust degenerate subspace formed by two coherent states of opposite phase. A fully connected optimization problem is mapped onto local fields driving the resonators, which are themselves connected by local four-body interactions. The authors describe an adiabatic annealing protocol in this system and analyze its performance in the presence of photon loss. Numerical simulations indicate substantial resilience to this noise channel, making it a promising platform for implementing a large-scale quantum Ising machine.
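The mapping such an annealer relies on can be illustrated classically: encode an optimization problem as an Ising energy function over spins s_i = ±1 whose ground state is the answer. A toy Python example (our own instance, not from the paper) uses number partitioning, which is fully connected in its Ising form:

```python
import itertools

# Number partitioning as a fully connected Ising problem:
# H(s) = (sum_i a_i s_i)^2 expands into pairwise couplings J_ij = a_i * a_j,
# so the ground state (H = 0 when a perfect split exists) solves the problem.
numbers = [4, 7, 1, 12, 6, 10]          # hypothetical instance

best_energy, best_spins = min(
    (sum(a * s for a, s in zip(numbers, spins)) ** 2, spins)
    for spins in itertools.product([-1, 1], repeat=len(numbers))
)
print(best_energy, best_spins)          # energy 0: e.g. {12, 7, 1} vs {4, 6, 10}
```

The brute-force search here scales as 2^N, which is the cost the annealing hardware aims to avoid.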

Qu&Co comments on this publication:

Networks of coupled optical parametric oscillators (OPOs) are an alternative physical system for solving Ising-type problems. Theoretical and numerical investigations have shown that, in principle, quantum effects (like entanglement between delay-coupled pulses) can play meaningful roles in such systems. In this paper, McMahon et al. (and, in an earlier paper, Inagaki et al.) show that this type of architecture is relatively scalable and can be used to solve max-cut problems accurately, although in the current prototype devices the quantum features are 'washed out' by high round-trip losses (typically 10 dB), to the point that a purely semi-classical description of the system is sufficient to explain all the observed experimental results. The next step would be to realize this architecture in a system where the quantum nature is not lost.
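For concreteness, max cut asks for a partition of a graph's nodes into two groups that maximizes the number of edges crossing the partition. A brute-force Python sketch on a made-up five-node graph (our own toy instance, far below the problem sizes OPO networks target):

```python
import itertools

edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3), (3, 4)]   # toy graph
n_nodes = 5

def cut_size(assignment):
    # Number of edges whose endpoints fall in different groups
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Exhaustive search over all 2^n two-colorings of the nodes
best = max(itertools.product([0, 1], repeat=n_nodes), key=cut_size)
print(cut_size(best), best)   # max cut of 5 edges (the two triangles forbid 6)
```

Exhaustive search doubles in cost with every added node, which is why heuristic physical solvers such as coherent Ising machines are of interest.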

Qu&Co comments on this publication:

Transition metal dichalcogenide (TMDC) monolayers are atomically thin two-dimensional materials in which electrostatic quantum dots (QDs) can be created. The electrons or holes confined in these QDs have not only a spin degree of freedom but also a valley degree of freedom. This additional degree of freedom can be used to encode a qubit, giving rise to a new field of electronics called valleytronics. In this paper, Pawlowski et al. show how to create a QD in a MoS2 monolayer and how to perform the NOT operation on its valley degree of freedom.

Qu&Co comments on this publication:

Majorana bound states are quasiparticles that obey non-Abelian braiding statistics (meaning they are neither bosons nor fermions). Topological quantum computation uses multiple such quasiparticles to store quantum information, where the non-local encoding provides high fault tolerance (immunity to local perturbations), and unitary gates can be created by braiding. A semiconductor nanowire coupled to a superconductor can be tuned into a topological superconductor with two Majorana zero-modes localized at the wire ends. Tunneling into a Majorana mode shows a robustly quantized zero-bias peak (ZBP) in the differential conductance. In this paper, Zhang et al. are the first to experimentally show the exact theoretically predicted ZBP quantization, which strongly supports the existence of non-Abelian Majorana zero-modes in their system, paving the way for their next challenge: braiding experiments.

Qu&Co comments on this publication:

In this paper, Wang et al. present experimental results showing genuine multipartite entanglement of up to 16 qubits on the ibmqx5 device, a 16-transmon-qubit universal quantum computing device developed by IBM. Prior to these results, entanglement had been reported for up to 10 superconducting qubits.

Qu&Co comments on this publication:

Coupling between superconducting qubits is typically controlled not by changing the qubit-qubit coupling constant, but by suppressing the coupling through detuning of the qubits' transition frequencies. This approach becomes much more difficult with a high number of qubits, due to the ever-more crowded transition-frequency spectrum. In this paper, Casparis et al. demonstrate an alternative coupling scheme in the form of a voltage-controlled quantum bus, with the ability to change the effective qubit-qubit coupling by a factor of 8 between the on- and off-states without causing significant qubit decoherence.

Copyright © Qu & Co BV