Displaying items by tag: Quantum Advantage

Recently, a team from the University of Science and Technology of China (USTC) demonstrated an experiment on a photonic quantum computer that outperformed even the fastest classical supercomputer in a computational task. Experiments of this kind target algorithms and hardware platforms that can provide “quantum supremacy”, the point at which a quantum computer outperforms any classical computer on a well-defined task.

A photonic quantum computer harnesses particles of light (photons) and consists of a complex array of optical devices, such as light sources, beam splitters, mirrors and photon detectors, that shuttle photons around. In such a computer, the quantum computation is based on a process called boson sampling, a task deliberately designed to demonstrate quantum supremacy. Boson sampling asks for the distribution of photons at the output of a photonic interferometer. The quantum device solves the problem 'by itself', since the distribution of the measured output is the desired photon distribution. A classical computer, in contrast, requires a large computation to find the photon distribution, and that computation grows with the size of the problem, since the photons' quantum properties lead to an exponentially increasing number of possible distributions. If operated with large numbers of photons and many channels, the quantum computer produces a distribution that is too complex for a classical computer to calculate. In the new experiment, up to 76 photons traversed a network of 100 channels, a much larger scale than previously demonstrated, both experimentally and numerically.
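To make the classical side of this concrete, here is a minimal sketch (our own illustration, not the USTC implementation) of why computing boson-sampling probabilities is expensive: each collision-free output pattern has probability given by the permanent of a submatrix of the interferometer unitary, and even the best known exact methods for the permanent, such as Ryser's formula, take exponential time in the number of photons.

```python
# Toy boson-sampling probability via the permanent (illustrative sketch only).
# The probability of detecting one photon in each of n chosen output modes is
# |Perm(A)|^2, where A is the n x n submatrix of the interferometer unitary U
# picked out by the occupied input and output modes. The permanent is #P-hard;
# Ryser's formula below already costs O(2^n * n) time.
import itertools
import numpy as np

def permanent(A: np.ndarray) -> complex:
    """Ryser's formula: exponential in the matrix dimension n."""
    n = A.shape[0]
    total = 0.0 + 0.0j
    for subset in itertools.product([0, 1], repeat=n):
        k = sum(subset)
        if k == 0:
            continue  # empty column subset contributes nothing
        cols = [j for j, bit in enumerate(subset) if bit]
        total += (-1) ** k * np.prod(A[:, cols].sum(axis=1))
    return (-1) ** n * total

def output_probability(U: np.ndarray, in_modes, out_modes) -> float:
    """Probability of one photon per listed output mode (collision-free case)."""
    A = U[np.ix_(out_modes, in_modes)]
    return abs(permanent(A)) ** 2

# Toy example: 3 photons injected into modes 0,1,2 of a random 6-mode interferometer.
rng = np.random.default_rng(0)
X = rng.normal(size=(6, 6)) + 1j * rng.normal(size=(6, 6))
U, _ = np.linalg.qr(X)  # approximately Haar-random unitary
print(output_probability(U, [0, 1, 2], [1, 3, 5]))
```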

This claim of quantum supremacy reinforces what Google presented last year with their superconducting-qubit quantum computer. The main difference between the two experiments, in terms of the result, is that the photonics experiment can create many more possible output states: ~10^30 of them compared to ~10^16. Such a large number makes it infeasible to calculate the whole probability distribution over outputs and store it for future generation of samples (something other researchers suggested as a rebuttal against Google's claims, but which certainly cannot hold in this new experiment).
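For a rough feeling of where these numbers come from (our own back-of-the-envelope counting, not taken from either paper): 53 qubits give 2^53 possible output bitstrings, and if we assume each of the 100 photonic output modes ends in a threshold detector that either clicks or does not, there are 2^100 possible click patterns.

```python
# Quick sanity check of the quoted output-space sizes (our own arithmetic).
print(f"Google (53 qubits):  2^53  ~ {2**53:.1e}")   # ~9.0e15, i.e. ~10^16
print(f"USTC (100 modes):    2^100 ~ {2**100:.1e}")  # ~1.3e30, i.e. ~10^30
```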

Although researchers are currently looking for ways to reproduce these results with classical computers, such attempts have not yet been successful. The main concern around this quantum experiment is photon loss. It was reported that up to ~70% of the photons get lost on their way through the beam-splitter network, so only ~30% are detected. Typically, that amount of photon loss would be considered fatal for quantum supremacy. Furthermore, the classical simulations used for comparison require fixing the rate of noise and then letting the number of photons and modes go to infinity, whereas any real experiment has a fixed number of photons and modes (in USTC's case, ~50 and ~100 respectively).
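A toy calculation (our own, not the paper's noise analysis) shows how quickly loss compounds with photon number, which is why the experiment necessarily samples from a lossy distribution rather than the ideal one:

```python
# With overall transmission t per photon, the chance that all n injected
# photons reach a detector is t^n (independent-loss toy model).
t = 0.30  # ~30% overall transmission per photon, as reported
for n in (10, 30, 50):
    print(f"n={n:2d}: P(no photon lost) = {t**n:.2e}")
# n=50 gives ~7e-27: essentially every 50-photon run loses photons.
```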

Achieving quantum supremacy through such experiments does not indicate a definitive, general superiority of quantum computers over classical computers, since these problems are deliberately designed to be hard for classical computers. On the other hand, it would also be an understatement to say this experiment is 'only a proof of principle', since boson sampling could have actual practical applications, for example solving specialized problems in quantum chemistry and mathematics.

Currently, most proposals in the literature apply boson sampling to computing vibronic spectra or finding dense subgraphs, but it is not certain whether these proposals will yield real speedups for tasks of practical interest that involve estimating specific numbers (as opposed to sampling tasks, where boson sampling almost certainly does yield exponential speedups).

Future research will focus both on algorithm development, exploiting the particular characteristics of such a specialized quantum device, and on experimental improvements such as decreased photon loss, higher-quality sources and detectors, and larger numbers of modes. The described experiment is a promising indication for this sub-field of quantum computing, and we will keep a close eye on future developments.
Published in Blog

Qu&Co comments on this publication:

Quantum computational supremacy (QCS) arguments have been put forward to demonstrate that quantum computers will soon outperform classical computers on a variety of algorithms. However, in order to truly prove supremacy, several strict measures need to be taken: an appropriate algorithm must be selected, a quantum device to run the algorithm must be designed, the ability to verify the results of the calculation must be considered, and a complexity-theoretic argument must be provided supporting the claim that a classical computer would be unable to run the algorithm. Quantum circuits running on currently realized quantum computing chips might still be simulable on highly parallelized state-of-the-art classical supercomputers, so one can only make conjectures about QCS at the moment. Moreover, complexity theory typically only shows that classically simulating certain families of quantum circuits requires time scaling worse than any polynomial in the size of the circuit; such asymptotic statements prevent us from calculating exactly how many qubits these circuits must have for their classical simulation to become intractable on modern supercomputers.
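One concrete way to see the scale involved (an illustrative estimate, not a figure from the paper) is the memory required for a naive state-vector simulation, which stores one complex amplitude per basis state. Real circuit simulators can trade memory for time, which is part of why exact qubit thresholds are hard to pin down.

```python
# Memory for a full n-qubit state vector with one complex128 (16-byte)
# amplitude per basis state -- the naive "Schrodinger-style" simulation cost.
for n in (30, 40, 50):
    gib = (2 ** n) * 16 / 2**30
    print(f"{n} qubits: {gib:,.0f} GiB")
# 30 qubits fit on a laptop (16 GiB); 50 qubits need ~16 PiB.
```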

In this paper, three refined fine-grained conjectures about quantum supremacy are provided, from which it is calculated that 208 qubits and 500 gates for Instantaneous Quantum Polynomial-Time (IQP) circuits, 420 qubits and 500 constraints for Quantum Approximate Optimization Algorithm (QAOA) circuits, and 98 photons and 500 optical elements for boson sampling circuits are sufficient. Although the noise in current quantum devices cannot be fully modelled, a lower bound on the runtime of any multiplicative-error classical simulation of all three algorithms is provided.

This paper provides a concrete estimate of the number of qubits required for three algorithms that have gained a lot of attention during the NISQ era. While the original work stems from 2018, the number of qubits required has been recalculated in the newest version of the paper, which gives a good indication of how the fidelity of quantum chips has improved over the last two years, of the latest understanding in complexity theory, and of the ongoing evolution of the classical competition.

Published in Blog

Qu&Co comments on this publication:

Recently, some contributors to a paper describing a quantum-supremacy experiment inadvertently posted an older version of the paper online, which was quickly picked up by the popular press, resulting in a flurry of (in many cases) unfounded claims about the progress of quantum computing. We believe it is important for people interested in this topic to inform themselves by reading a balanced opinion from an expert in this field. We therefore kindly refer to Scott Aaronson's excellent blog post on this matter.

Published in Blog

Qu&Co comments on this publication:

In this paper, Neill et al. (Google/UCSB) present experimental results for their 9-qubit transmon (gmon) device and illustrate that these experiments form a blueprint for demonstrating quantum supremacy on their next-generation (50-qubit) system. By individually tuning the qubit parameters, they are able to generate thousands of unique Hamiltonian evolutions and probe the output probabilities. The measured probabilities obey a universal distribution, consistent with uniformly sampling the full Hilbert space. As the number of qubits in the algorithm is varied, the system continues to explore the exponentially growing number of states. They also compare the measurement results with the expected behavior and show that the algorithm can be implemented with high fidelity.
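The universal distribution referred to is the Porter-Thomas (exponential) shape that output probabilities of Haar-random states follow; a few lines of code (our own illustration, not the authors' analysis) reproduce it numerically:

```python
# For a Haar-random n-qubit state, the rescaled output probabilities N*p
# follow the Porter-Thomas density exp(-N*p), i.e. an Exp(1) distribution.
import numpy as np

rng = np.random.default_rng(1)
n = 12
N = 2 ** n
psi = rng.normal(size=N) + 1j * rng.normal(size=N)
psi /= np.linalg.norm(psi)   # Haar-random state vector
p = np.abs(psi) ** 2         # measurement probabilities
x = N * p
# Mean of N*p is 1 and its variance is ~1, as expected for Exp(1):
print(f"mean={x.mean():.3f}  var={x.var():.3f}")
```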

Published in Blog

Qu&Co comments on this publication:

In this article, Boixo et al. (Google/UCSB) present their proposed quantum supremacy experiment. Their proposal uses the task of sampling from the output distributions of (pseudo-)random quantum circuits, which, classically, requires a direct numerical simulation of the circuit, with computational cost exponential in the number of qubits. They estimate that in such an experiment, quantum supremacy can be achieved in the near term with approximately 50 superconducting qubits. Furthermore, they introduce cross entropy as a practical test of quantum supremacy.
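As a rough sketch of the cross-entropy idea (our simplified rendering, not the paper's exact estimator): score observed bitstrings by the negative log of their ideal probabilities, computed classically for the same circuit, and compare against a noise baseline.

```python
# Simplified cross-entropy test. Samples from the correct output distribution
# score characteristically lower than uniformly random bitstrings.
import numpy as np

rng = np.random.default_rng(2)
n = 10
N = 2 ** n
amps = rng.normal(size=N) + 1j * rng.normal(size=N)
probs = np.abs(amps) ** 2
probs /= probs.sum()                       # stand-in for the ideal circuit output

quantum_samples = rng.choice(N, size=2000, p=probs)  # device sampling correctly
uniform_samples = rng.integers(0, N, size=2000)      # e.g. fully depolarized device

def cross_entropy(samples):
    return np.mean(-np.log(probs[samples]))

print(f"ideal sampler : {cross_entropy(quantum_samples):.3f}")
print(f"uniform noise : {cross_entropy(uniform_samples):.3f}")
# For Porter-Thomas statistics the ideal sampler tends to ln(N) + gamma - 1,
# while uniform samples score about ln(N) + gamma.
```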

Published in Blog
