Displaying items by tag: Quantum hardware

Fri, 05 Mar 2021 12:00

Nanophotonic Quantum Computing

In recent years, there have been a number of promising breakthroughs in engineering quantum processors, for example in ion-trap and superconducting systems. These platforms now include programmable machines that deliver the automation, stability, and repeatability needed to implement quantum algorithms. Such machines can be remotely accessed and loaded with algorithms written in high-level programming languages by users with no in-depth knowledge of the low-level hardware details of the apparatus. These capabilities have rapidly accelerated research targeting application development for near-term quantum computers.

One major limitation of present-day systems is the high level of noise affecting the qubits. Noise severely restricts the efficient application of quantum algorithms, even when those algorithms are in principle compatible with large-scale implementations. Some algorithms are efficiently encoded in binary modes called qubits, while others are more naturally expressed in a model in which each independent quantum system is described by a state in an infinite-dimensional Hilbert space. Typical applications of the latter kind include bosonic error-correction codes and Gaussian boson sampling. Photonic hardware has great potential for exploring the large-scale physical implementation of such quantum algorithms. An ideal system should be dynamically programmable and readily scalable to hundreds of modes and photons. It should also give access to a class of quantum circuits that becomes exceedingly hard to simulate efficiently on classical hardware as the system size increases. At present, no system is known that achieves all of this simultaneously. On one hand, a large photonic cluster state has been demonstrated, with the caveat of being limited to all-Gaussian states, gates, and measurements. On the other, single-photon-based experiments on integrated platforms suffer from non-deterministic state preparation and gate implementation, which hinders their scalability.

The authors of this paper offer a full-stack, hardware-software co-designed solution based on a programmable nanophotonic chip that combines the capabilities of an ideal system in a single scalable and unified machine. The work involves both experimental research and theoretical modeling of the proposed hardware in order to deliver the required capabilities: programmability, a high sampling rate, and photon-number resolution. The programmable chip operates at room temperature and is interfaced with an automated control system for executing many-photon quantum circuits. The procedure involves initial state preparation, gate-sequence implementation, and readout, followed by verification of the non-classicality of the device’s output. The photonic chip generates squeezed states and is coupled to a custom modulated pump laser source with an active locking system for the on-chip squeezer resonators. Digital-to-analog converters are used for tuning the asymmetric Mach-Zehnder interferometer filters and for programming the four-mode interferometer. This is coupled with a real-time data-acquisition system for detector readout. Finally, a master controller (a conventional server computer) runs custom-developed control software that coordinates the operation of all the hardware. Thanks to strong squeezing and high sampling rates, multi-photon detection events are observed with photon numbers and rates exceeding those of previous quantum-optical demonstrations.

The authors used the platform to carry out proof-of-principle implementations of Gaussian boson sampling (GBS), the resolution of molecular vibronic spectra, and graph similarity problems, all of which use samples from the device to infer a property of the object central to the application. For GBS, the samples provide information about the nonclassical probability distribution produced by the device. The vibronic-spectra algorithm uses outputs from the device to obtain molecular properties, while for graph similarity the samples reveal information about graph properties. In all demonstrations, the device is programmed remotely using the Strawberry Fields Python library. The authors also developed a more detailed theoretical model of the device involving two Schmidt modes per squeezer, non-uniform loss before the unitary transformation, and excess noise. Such noise modelling is still relatively rare in the nanophotonic quantum computing community, and these additions are potentially valuable for understanding the algorithmic performance of such systems compared with very different hardware implementations.
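To give a flavour of how such a device is programmed, the snippet below is a minimal Gaussian-boson-sampling sketch written against the Strawberry Fields library mentioned above, run on the local Gaussian simulator rather than the remote hardware; the squeezing values, mode pairing, and random interferometer matrix are illustrative placeholders, not the settings used in the paper.

```python
# Minimal GBS sketch in Strawberry Fields (local simulation, illustrative parameters).
import strawberryfields as sf
from strawberryfields import ops
from strawberryfields.utils import random_interferometer

prog = sf.Program(4)                     # four optical modes
U = random_interferometer(4)             # placeholder unitary; an application would
                                         # program a problem-specific matrix here

with prog.context as q:
    # two-mode squeezed vacuum, standing in for the on-chip squeezer outputs
    ops.S2gate(1.0) | (q[0], q[2])
    ops.S2gate(1.0) | (q[1], q[3])
    ops.Interferometer(U) | q            # programmable interferometer
    ops.MeasureFock() | q                # photon-number-resolving readout

eng = sf.Engine("gaussian")              # Gaussian-state simulator backend
result = eng.run(prog, shots=1)
print(result.samples)                    # one sample of photon counts per mode
```

In the actual demonstrations, essentially the same kind of program is compiled for and submitted to the remote chip instead of a simulator backend.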

The proposed device marks a significant advance in scaling such nanophotonic chips to a larger number of modes. One of the greatest challenges in scaling to a system of this size is maintaining acceptably low losses in the interferometer. With precise chip-fabrication tools and new designs for integrated beam splitters and phase shifters, one can achieve an order-of-magnitude improvement in the loss per interferometer layer. Furthermore, the inclusion of tunable single-mode (degenerate) squeezing and displacement would be a significant upgrade, permitting the generation of arbitrary Gaussian states. Such scaling and upgrades constitute the next steps for near-term photonic quantum information processing demonstrations. Combined with rapid advancements in photonic chip fabrication, such demonstrations coincide with new optimism towards photonics as a platform for advancing the frontier of quantum computation.
Published in Blog
Sun, 07 Feb 2021 12:00

Picosecond Quantum Computing

Recently there have been increased efforts to build large-scale quantum computers for solving certain types of hard computational problems. These efforts are mainly motivated by the prospect of enabling quantum algorithms with a quadratic, polynomial, or potentially exponential speedup. When the size of the problem is sufficiently large, this scaling advantage implies that a quantum computer will outperform its classical counterpart, independently of the time it takes to execute a single gate. However, for any real-world application, not only the scaling but also the total computation time is important, hence the realization of faster gate operations becomes a necessity: shorter gates reduce the total runtime and thereby the accumulated effect of decoherence on the fidelity of the computation.

In the work we highlight today, the authors discuss the realization of a universal set of ultrafast single- and two-qubit operations with superconducting quantum circuits, and investigate the physical and technical limitations to achieving faster gates. The work establishes a fundamental bound on the minimal gate time which, over a large parameter range, depends only on the qubit nonlinearity and the bandwidth of the control pulse and is independent of the qubit design. The numerical results suggest that for highly anharmonic flux qubits and commercially available control electronics, elementary single- and two-qubit operations can be implemented in about 100 picoseconds with residual gate errors below 10⁻⁴. Under the same conditions, the authors estimate that the complete execution of a compressed version of Shor’s algorithm for factoring the number 15 would take about one nanosecond on such a hypothetical device.

The numerical results indicate a lower bound on the gate time for both single-qubit and two-qubit gates, which holds without the often-assumed three-level approximation over the whole range of qubit parameters explored in this work. For very fast gates, in the range of a hundred picoseconds, additional limitations arise from the finite qubit oscillation time.
The authors also address the implementation of larger quantum circuits composed of many ultrafast gates. A full multi-level simulation of a basic three-qubit circuit consisting of eleven elementary single- and two-qubit gates is performed, taking into account the finite qubit rotation time, which introduces a natural cycle time according to which gates must be clocked. For realistic qubit nonlinearities and control bandwidths, the simulated execution times for the whole circuit are about 1-2 ns, roughly two orders of magnitude faster than what is achievable in most superconducting quantum computing experiments today. The results demonstrate that significant improvements in this direction are still possible.
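As a crude back-of-the-envelope check of these numbers (assuming fully sequential gates, a 5 GHz qubit, and the ~100 ps gate time quoted above, all illustrative rather than the paper's simulated parameters), clocking eleven elementary gates to the qubit cycle gives a total time of the same order as the quoted 1-2 ns; gate parallelism across the three qubits would push the figure down further.

```python
import math

# Illustrative timing estimate only; the paper performs a full multi-level simulation.
qubit_freq_ghz = 5.0                        # assumed qubit frequency
cycle_time_ns = 1.0 / qubit_freq_ghz        # natural clock period, 0.2 ns
gate_time_ns = 0.1                          # ~100 ps elementary gate
n_gates = 11                                # elementary gates in the circuit

cycles_per_gate = math.ceil(gate_time_ns / cycle_time_ns)   # gates clocked to cycle boundaries
total_time_ns = n_gates * cycles_per_gate * cycle_time_ns   # assumes no gate parallelism
print(f"estimated circuit time: {total_time_ns:.1f} ns")    # -> 2.2 ns
```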

In their analysis, the authors restrict the gate times to no less than about 50 picoseconds, which already requires larger absolute nonlinearities and control bandwidths than contemporary state-of-the-art experiments. Although such parameters are highly non-standard for current superconducting-qubit experiments, they are still within physical and technological bounds. For the implementation of even faster gates, additional physical constraints come into play and the applicability of the usual effective circuit model must be re-evaluated. For example, at some point the energy of the third circuit level becomes comparable to twice the superconducting gap of aluminum; any components of the control field above this frequency will excite quasiparticles and strongly degrade the qubit coherence. However, in other materials the superconducting gap can be substantially higher, which suggests that, at least in principle, gate times in the range of 1-10 picoseconds could become accessible in the future.

These results demonstrate that, compared to state-of-the-art implementations with transmon qubits, a hundredfold increase in the speed of gate operations with superconducting circuits is still feasible. Despite its long-term relevance, the implementation of quantum gates in the picosecond regime remains largely unexplored, and the ultimate limit on the speed of superconducting quantum processors is still an open research question, with the hope of pushing towards ever faster gates. In such a scenario, decoherence becomes a less limiting factor, since gates take less time to apply. Furthermore, algorithms that require a large circuit depth become implementable, allowing researchers to solve more complex problems. Finally, processes like quantum error correction and the decoding of errors become much easier to implement, taking us beyond the NISQ era.
Published in Blog
Recently, a team from the University of Science and Technology of China (USTC) demonstrated an experiment on a photonic quantum computer that outperformed even the fastest classical supercomputer on a computational task. Such experiments target algorithms and hardware platforms that can provide “quantum supremacy”, which occurs when a quantum computer outperforms any classical computer on a given task.

A photonic quantum computer harnesses particles of light (photons) and consists of a complex array of optical devices, such as light sources, beam splitters, mirrors, and photon detectors, that shuttle photons around. In such a computer, the quantum computation is based on a process called boson sampling, a task deliberately designed to demonstrate quantum supremacy. Boson sampling asks for the distribution of photons at the output of a photonic interferometer. In the quantum-device implementation of boson sampling, the problem is solved 'by itself', since the distribution of the measured outputs is the desired photon distribution. On a classical computer, a large computation is required to find the photon distribution, and this computation grows rapidly with the size of the problem, since the photons' quantum properties lead to an exponentially increasing number of possible outcomes. If operated with large numbers of photons and many channels, the quantum computer produces a distribution that is too complex for a classical computer to calculate. In the new experiment, up to 76 photons traversed a network of 100 channels, a much larger scale than previously demonstrated, both experimentally and numerically.
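To see where the classical cost comes from: in the original Fock-state formulation of boson sampling, the probability of each output pattern is proportional to the squared modulus of the permanent of a submatrix of the interferometer unitary, a quantity with no known efficient classical algorithm (the USTC experiment uses a Gaussian variant whose probabilities involve a related matrix function). The brute-force sketch below, with a randomly chosen unitary and photon pattern, is purely illustrative.

```python
# Brute-force permanent of a submatrix, as appears in Fock-state boson sampling
# probabilities; factorial scaling makes this approach hopeless for large photon numbers.
import itertools
import numpy as np

def permanent(A):
    n = A.shape[0]
    return sum(
        np.prod([A[i, sigma[i]] for i in range(n)])
        for sigma in itertools.permutations(range(n))
    )

U = np.linalg.qr(np.random.randn(5, 5) + 1j * np.random.randn(5, 5))[0]  # random 5x5 unitary
A = U[:3, :3]   # submatrix selected by an (illustrative) 3-photon input/output pattern
print(abs(permanent(A)) ** 2)   # unnormalized probability weight of that pattern
```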

This claim of quantum supremacy reinforces what Google presented last year with their superconducting-qubit quantum computer. The main difference between the two experiments, in terms of the result, is that the photonics experiment can create many more possible output states: ~10³⁰ of them compared to ~10¹⁶. Such a large number makes it infeasible to calculate the whole probability distribution over outputs and store it for future generation of samples (something other researchers suggested as a rebuttal against Google’s claims, but which certainly cannot hold for this new experiment).
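The quoted output-space sizes can be recovered with a one-line count, assuming threshold (click/no-click) detectors on the 100 photonic channels and binary readout of Google's 53 qubits:

```python
# Rough size of the space of possible measurement outcomes in the two experiments.
from math import log10

modes = 100                        # photonic output channels with click/no-click detectors
qubits = 53                        # qubits in Google's Sycamore experiment

print(f"photonic: ~10^{log10(2 ** modes):.0f} outcomes")    # ~10^30
print(f"sycamore: ~10^{log10(2 ** qubits):.0f} outcomes")   # ~10^16
```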

Although researchers are currently looking for ways to reproduce these results with classical computers, such attempts have not yet been successful. The main concern around this quantum experiment is photon loss. It was reported that up to ~70% of the photons are lost on their way through the beam-splitter network, so only ~30% are detected. Typically, that amount of photon loss would be considered fatal for quantum supremacy. Furthermore, the classical simulations used for comparison require fixing the rate of noise and then letting the numbers of photons and modes go to infinity, whereas any real experiment has a fixed number of photons and modes (in USTC's case, ~50 and ~100, respectively).

Achieving quantum supremacy through this kind of experiment does not establish the definitive, general superiority of quantum computers over classical computers, since such problems are deliberately designed to be hard for classical computers. On the other hand, it would also be an understatement to say that this experiment is 'only a proof of principle', since boson sampling could have actual practical applications, for example solving specialized problems in quantum chemistry and mathematics.

Currently, most proposals in the literature apply boson sampling to vibronic spectra or finding dense subgraphs, but it is not certain whether these proposals will yield real speedups for a task of practical interest that involves estimating specific numbers (as opposed to sampling tasks, where boson sampling almost certainly does yield exponential speedups).

Future research will focus both on algorithm development, exploiting the particular characteristics of such a specialized quantum device, and on experimental improvements such as decreased photon loss, higher-quality sources and detectors, and larger numbers of modes. The described experiment is a promising indication for this sub-field of quantum computing, and we will keep a close eye on future developments.
Published in Blog
In this paper, researchers from Amazon AWS and IQIM present an architecture for a fault-tolerant quantum computer based on hybrid acoustic-electro devices implementing a stabilized cat code with highly biased noise, dominated by dephasing. To combat these sources of noise, they concatenate the cat code with an outer code that focuses mostly on correcting the dephasing errors, based on the repetition code and the surface code. The assumed error model is critical, since it affects the fidelities of all required operations (initialization, measurement, gates, etc.) on which the comparison to previous works is based. Therefore, a detailed error analysis of measurements and gates, including the CNOT and Toffoli gates, is presented according to this realistic noise model.
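The intuition behind concatenating with a repetition code under biased noise can be seen with a toy Monte Carlo estimate: if bit flips are strongly suppressed by the cat encoding, the outer code mainly needs to majority-vote away the dominant phase-flip (Z) errors. The sketch below is purely illustrative (independent errors, no circuit-level faults) and is not the paper's analysis.

```python
# Toy estimate of the logical phase-flip rate of a distance-d repetition code
# under independent Z errors of probability p_z on each constituent qubit.
import random

def logical_error_rate(d, p_z, trials=100_000):
    failures = 0
    for _ in range(trials):
        flips = sum(random.random() < p_z for _ in range(d))
        if flips > d // 2:          # majority vote fails
            failures += 1
    return failures / trials

for d in (3, 5, 7):
    print(d, logical_error_rate(d, p_z=0.01))   # logical rate drops rapidly with distance
```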

Fault-tolerant quantum computing requires a universal set of gates, which can be divided into two categories, namely gates that belong to the Clifford group and gates that do not. Clifford gates can typically be implemented easily for a variety of codes; non-Clifford gates, however, require sophisticated protocols to create and then purify in order to increase their fidelity, such as magic-state preparation and distillation. A novel magic-state distillation protocol for Toffoli states (injected via lattice surgery) is introduced here, which, in combination with the error-correction techniques used, results in a lower overhead compared to previous works. In fact, it is estimated that the factory generating the magic states accounts for only about 7% of the total resource overhead, with the other 93% coming from the rotated surface code.

In terms of quantum advantage, the authors find that with around 1,000 superconducting circuit components, one could construct a fault-tolerant quantum computer that can run circuits which are intractable for classical supercomputers.

However, when comparing this work to other related works, one should keep in mind that the assumed gate fidelities and error model can greatly affect the presented results. The error model in this work assumes Z error rates that are far less optimistic than those typically assumed for transmon qubits, while the cat-state encoding suppresses bit-flip noise, which naturally leads to increased performance. Furthermore, transmon-architecture resource estimates are usually based on a simple depolarizing noise model, whereas this noise model has been derived from first-principles modeling of the hardware, making the analysis more realistic.

Moreover, the authors claim to require comparable or up to 3 times fewer qubits than other superconducting transmon-qubit architectures, based on the assumed gate fidelities. Similar runtime figures are reported compared to other superconducting transmon-qubit architectures; however, an important distinction of this protocol is that the magic states are created slightly faster than they can be transported to the main algorithm, whereas in other architectures the main algorithm has to wait for the magic states to be created, which is a bottleneck in the runtime.

Although such a protocol shows promise for fault-tolerant quantum computing, the injection of magic states comes with an additional qubit cost for data access and routing. The choice of routing solution leads to a lower bound on runtime execution, so more careful optimization of routing costs and of the speed of magic-state injection is crucial.
Published in Blog
Mon, 14 Aug 2017 12:00

Quantum-dot based spin-qubit processor

Qu&Co comments on this publication:

Quantum-dot-based spin qubits may offer significant advantages due to their potential for high densities, all-electrical operation, and integration onto an industrial platform. However, in quantum dots, charge noise and nuclear-spin noise are dominant sources of decoherence and gate errors. Silicon naturally has few nuclear-spin isotopes, and these can be further removed through isotopic purification. As a host material, silicon enables single-qubit gate fidelities above 99%. In this paper, Watson et al. demonstrate a programmable two-qubit quantum processor in silicon by performing both the Deutsch-Jozsa and the Grover search algorithms.

Published in Blog

Qu&Co comments on this publication:

So-called holonomic quantum gates, based on geometric phases, are robust against control errors. Zanardi and Rasetti first proposed adiabatic holonomic quantum computation (AHQC), which faces the unavoidable challenge that the long run time needed for adiabatic evolution increases vulnerability to decoherence. Therefore, non-adiabatic HQC schemes with much shorter gate times were proposed and realized on platforms based on trapped ions, NMR, superconducting circuits, and nitrogen-vacancy centers in diamond. In this paper, Zhao et al. propose a non-adiabatic HQC scheme based on Rydberg atoms, which combines robustness to control errors, short gate times, and long coherence times.

Published in Blog

Qu&Co comments on this publication:

In this paper, Puri et al. propose an alternative to the typical quantum annealing architecture: a scalable network of all-to-all connected, two-photon-driven Kerr-nonlinear resonators. Each of these resonators encodes an Ising spin in a robust degenerate subspace formed by two coherent states of opposite phase. A fully connected optimization problem is mapped onto local fields driving the resonators, which are themselves coupled by local four-body interactions. The authors describe an adiabatic annealing protocol in this system and analyze its performance in the presence of photon loss. Numerical simulations indicate substantial resilience to this noise channel, making it a promising platform for implementing a large-scale quantum Ising machine.

Published in Blog
Fri, 04 Nov 2016 16:45

Scalable optical Ising machine

Qu&Co comments on this publication:

Networks of coupled optical parametric oscillators (OPOs) are an alternative physical system for solving Ising-type problems. Theoretical and numerical investigations have shown that, in principle, quantum effects (like entanglement between delay-coupled pulses) can play a meaningful role in such systems. In this paper, McMahon et al. (and an earlier paper by Inagaki et al.) show that this type of architecture is relatively scalable and can be used to solve max-cut problems accurately, although in the current prototype devices the quantum features are 'washed out' by high round-trip losses (typically 10 dB), to the point that a purely semi-classical description of the system is sufficient to explain all the observed experimental results. The next step would be to realize this architecture in a system where the quantum nature is not lost.
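For context, the max-cut problems mentioned above map directly onto the Ising form such machines minimize: assigning a spin of ±1 to each vertex, the cut size is the number of edges whose endpoints carry opposite spins, so minimizing the Ising energy with antiferromagnetic couplings on the graph edges maximizes the cut. A brute-force sketch on a toy graph (illustrative only, not how an OPO network computes):

```python
# MaxCut <-> Ising correspondence on a toy 4-vertex graph, solved by brute force.
import itertools

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]

def cut_size(spins):
    # an edge is cut when its endpoints have opposite spins
    return sum((1 - spins[i] * spins[j]) // 2 for i, j in edges)

best = max(itertools.product([-1, 1], repeat=4), key=cut_size)
print(best, cut_size(best))   # -> (-1, 1, -1, 1) 4
```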

Published in Blog
Tue, 30 Jan 2018 16:45

Valleytronic qubits in TMDC material

Qu&Co comments on this publication:

Transition-metal dichalcogenide (TMDC) monolayers are atomically thin two-dimensional materials in which electrostatic quantum dots (QDs) can be created. The electrons or holes confined in these QDs have not only a spin degree of freedom but also a valley degree of freedom. This additional degree of freedom can be used to encode a qubit, creating a new field of electronics called valleytronics. In this paper, Pawlowski et al. show how to create a QD in a MoS2 monolayer and how to perform a NOT operation on its valley degree of freedom.

Published in Blog
Sun, 29 Oct 2017 16:45

Quantized Majorana conductance

Qu&Co comments on this publication:

Majorana bound states are quasi-particles that obey non-Abelian braiding statistics (meaning they are neither bosons nor fermions). Topological quantum computation uses multiple such quasiparticles to store quantum information, where the non-local encoding provides high fault tolerance (immunity to local perturbations). Unitary gates can be implemented by braiding. A semiconductor nanowire coupled to a superconductor can be tuned into a topological superconductor with two Majorana zero-modes localized at the wire ends. Tunneling into a Majorana mode shows a robustly quantized zero-bias peak (ZBP) in the differential conductance. In this paper, Zhang et al. are the first to experimentally show the exact theoretically predicted ZBP quantization, which strongly supports the existence of non-Abelian Majorana zero-modes in their system, paving the way for their next challenge: braiding experiments.

Published in Blog