The physical realization of quantum computers has advanced to a stage where present-day NISQ processors feature tens of qubits. Since these devices have different strengths and weaknesses depending on their quality and architecture, a comparative analysis against well-defined benchmarks is highly valuable. To date, various structured tasks have been proposed to measure the performance of quantum computers. Typical examples include counting the physical qubits (the building blocks of digital quantum circuits) in a system, measuring how efficiently (in qubits, gates, time, etc.) a device prepares absolutely maximally entangled states, volumetric benchmarking, and mirror randomized benchmarking.

One of the first widely popularized performance metrics, introduced by IBM, is "quantum volume": a single-number metric that quantifies how well a quantum system can execute a sizeable random circuit (with depth equal to its width in qubits) with reasonable fidelity. It enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run on such a system. A more recent metric, introduced by Atos, is the Q-score, which counts the number of variables in a max-cut problem that a device can optimize.
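As a rough illustration of how such a single-number score is assembled, here is a sketch of the pass/fail logic only: the pass threshold of 2/3 on the heavy-output probability follows the quantum-volume convention, but the data and the statistical acceptance test are simplified, and the numbers are made up.

```python
def quantum_volume(heavy_output_probs):
    """heavy_output_probs: dict mapping circuit width n (= depth) to the
    measured heavy-output probability for n x n random circuits.
    QV = 2**n for the largest n whose probability exceeds the 2/3
    threshold, with all smaller widths passing as well."""
    qv = 1
    for n in sorted(heavy_output_probs):
        if heavy_output_probs[n] > 2 / 3:
            qv = 2 ** n
        else:
            break
    return qv

# Hypothetical measurements: widths 2-5 pass, width 6 fails.
probs = {2: 0.85, 3: 0.81, 4: 0.76, 5: 0.70, 6: 0.61}
print(quantum_volume(probs))  # -> 32
```

In the real protocol each width's probability comes with a confidence interval over many random circuits; only the thresholding idea is shown here.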

Along the same lines, the authors of this work propose a quantum benchmark suite that serves as a comparison tool for current and upcoming quantum computing platforms from the perspective of an end user. The objective is to analyze the performance of available devices by providing meaningful benchmark scores for a series of different tests. The chosen benchmarks use numerical metrics, including uncertainties, that characterize different noise aspects and allow direct comparison of performance gains between devices. The authors present six visual benchmarks with structured circuits: Bell Test, complex transformations of the Riemann sphere, Line Drawing, Quantum Matrix Inversion, Platonic and Fractals. All of these probe different aspects of the quantum hardware, such as gate fidelity, readout noise, and the ability of the compilers to take full advantage of the underlying device topology, in a more holistic approach than the metrics introduced so far. In this way, the authors hope to offer more information than a single one-dimensional meta-parameter, still at a quick glance at a visual representation.

These benchmarks were tested on currently available quantum devices from Google, IBM, and Rigetti using several SDKs and APIs (Qiskit / IBMQ for IBM, Forest / QCS and Amazon Braket for Rigetti, Amazon Braket for IonQ, and Cirq for Google).

Each device receives a numerical score for each of the implemented tests, which can be used to cross-evaluate performance. Additionally, the performance of various NISQ devices is analyzed through a series of test scenarios and the proposed metrics. The overall analysis suggests that the proposed benchmarks can be readily implemented on 2-qubit devices with circuit depths below 10, as well as on currently available small-scale quantum devices. These benchmarks are envisioned to be applied to the larger and more complex devices that will become available in the future, so the scalability of the metrics is also investigated.

The scores obtained from the experiments are then compared to the ideal score estimated from a finite number of measurements. One should keep in mind that these measurements also include statistical errors due to measurement noise, which cannot be eliminated completely. Nevertheless, the error margins presented in this work are shown to have the expected deviation from the ideal score: the actual experimental error margins agree with the error estimates observed in simulated experiments. The authors also find that their scores correlate well with IBM's Quantum Volume score, although individual cases still vary.

Another crucial factor to analyze is the fluctuation observed when experiments are repeated over a period of time. This points to a change in device performance over time, which in turn affects the estimated scores and introduces a time variance; estimating this variance precisely requires more experimentation. In future work it would be advantageous to explore these temporal inhomogeneities in addition to encompassing statistical uncertainty in the error margins. Such benchmarks could then provide a holistic, time-aware evaluation when comparing different quantum devices.

One potentially major aspect of a quantum performance metric is how widespread its use is. A great metric that nobody uses has low usefulness; a metric used by everyone but of low significance is equally unhelpful. We hope the community can converge on something comparable, fair, and standardized, but it may take some years before that happens in this rapidly evolving field.
Sun, 25 Apr 2021 12:00

Quantum Architecture Learning

There has been a large body of work investigating potential advantages of quantum algorithms over their classical counterparts, with the ultimate proof being an experimental verification of a "quantum advantage". Some of the classical problems being targeted include factorization of large integers and unstructured database search. While advances in both experimental hardware and software are driving the field slowly but steadily towards quantum supremacy, more effort is still required on both fronts. Some of the most promising algorithms for near-term quantum advantage form the class of variational quantum algorithms (VQAs). VQAs have been applied in many scientific domains, including molecular dynamics studies and quantum optimization problems. VQAs are also studied for various quantum machine learning (QML) applications such as regression, classification, generative modeling, deep reinforcement learning, sequence modeling, speech recognition, metric and embedding learning, transfer learning, and federated learning.

In addition to VQAs serving as quantum implementations of classical machine learning paradigms, VQAs may conversely benefit from machine learning themselves, one of the most popular paradigms being Reinforcement Learning (RL). RL has been used to assist with several problems in quantum information processing, such as decoding errors, quantum feedback, and adaptive code design. While such schemes have so far been envisioned with a classical computer as an RL co-processor, implementing "quantum" RL on quantum computers has been shown to make the decision-making process for RL agents quadratically faster than on classical hardware.

In this work, the authors present a new quantum architecture search framework in which an RL agent, trained with deep reinforcement learning (DRL), interacts with a quantum computer or quantum simulator. The objective is to investigate whether an RL agent can be trained to search for a quantum circuit architecture that generates a desired quantum state.

The proposed framework consists of two major components: a quantum computer or quantum simulator, and an RL agent hosted on a classical computer that interacts with it. At each time step, the RL agent chooses an action from a set of possible quantum operations (one- and two-qubit gates), thereby updating the quantum circuit. After each update, the quantum simulator executes the new circuit and calculates its fidelity to the given target state. This fidelity then determines the reward sent back to the agent: positive if the fidelity reaches a pre-defined threshold, negative otherwise.
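The environment loop described above can be sketched as follows, with a plain NumPy statevector standing in for the quantum backend. The gate set, target state, and reward values here are illustrative choices of ours, not the paper's exact configuration.

```python
import numpy as np

# Small illustrative action set: Hadamards and a CNOT on two qubits
# (qubit 0 is the most significant bit in this ordering).
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])
GATES = {
    "h0": np.kron(H, np.eye(2)),   # Hadamard on qubit 0
    "h1": np.kron(np.eye(2), H),   # Hadamard on qubit 1
    "cx01": CNOT,                  # CNOT, qubit 0 controls qubit 1
}
TARGET = np.array([1, 0, 0, 1]) / np.sqrt(2)  # Bell state as target

def step(state, action, threshold=0.99):
    """Apply the chosen gate, then score the circuit by its fidelity
    to the target; the reward turns positive once the threshold is hit."""
    state = GATES[action] @ state
    fidelity = abs(TARGET.conj() @ state) ** 2
    done = fidelity >= threshold
    reward = 1.0 if done else -0.01
    return state, reward, done

state = np.array([1, 0, 0, 0], dtype=complex)  # start in |00>
for action in ["h0", "cx01"]:                  # agent's chosen actions
    state, reward, done = step(state, action)
print(done)  # a two-gate circuit already prepares the Bell state
```

In the paper the observation returned to the agent is a set of single-qubit Pauli expectation values rather than the raw statevector, and the policy (A2C/PPO) selects the actions; this sketch only shows the circuit-update/fidelity-reward cycle.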

The authors use the set of single-qubit Pauli measurements as the "states" (observations) which the environment returns to the RL agent. The agent is then iteratively updated based on this information, and the procedure continues until it reaches either the desired threshold or the maximum allowed number of steps. RL algorithms such as A2C and PPO were used to optimize the agent. The results demonstrate that, given the same neural network architecture, PPO performs significantly better than A2C in terms of convergence speed and stability, in both noise-free and noisy environments. The result is shown to be consistent in the 2-qubit case.

In this work, the simulation of quantum circuits in both noise-free and noisy environments is implemented via IBM's Qiskit software. The efficiency of such an approach for large quantum circuits is low, as the classical simulation complexity scales exponentially with the number of qubits. Theoretically, a quantum circuit can approximate any quantum state (up to an error tolerance) using a finite number of gates, given a universal set of one- and two-qubit gates. Hence, in principle, the RL approach is valid for arbitrarily large qubit numbers, but is extremely hard to simulate classically. Verifying this kind of experiment on real quantum computers would also require thousands of training episodes. One can expect significant development in the future as quantum computing resources become more accessible. Finally, another interesting direction will be to investigate the quantum architecture search problem with different target quantum states and different noise configurations.
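To make the exponential-scaling point concrete, here is a back-of-the-envelope memory count for dense statevector simulation, assuming the usual 16 bytes per complex128 amplitude:

```python
# Memory needed to hold a dense n-qubit statevector classically:
# 2**n complex amplitudes at 16 bytes (complex128) each.
def statevector_bytes(n_qubits):
    return 16 * 2 ** n_qubits

print(statevector_bytes(10))  # 16384 bytes -> trivial
print(statevector_bytes(30))  # ~17 GB -> workstation limit
print(statevector_bytes(50))  # ~18 PB -> far beyond classical memory
```

Noisy (density-matrix) simulation squares the state dimension, so the wall is hit even sooner.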
Sun, 18 Apr 2021 12:00

Training NISQ QNNs

Relatively recently, there has been growing interest in combining quantum computational methods and machine learning: on one hand, traditional machine learning tools are used to improve aspects of quantum computing; on the other, quantum algorithms are designed to enhance parts of machine learning pipelines. While provable quantum speedups have been identified for some specific ML tasks, executing them in practice requires fault-tolerant quantum computers (FTQC), which are not yet available. A growing body of work therefore explores quantum machine learning models implemented as parameterized quantum circuits, whose parameters are variationally optimized via a hybrid quantum-classical feedback loop. Such variational algorithms are expected to fare better on near-term, noisy intermediate-scale quantum (NISQ) computers. Amongst these architectures, quantum neural networks (QNNs) are some of the most prominent, used for example to learn unitaries, perform classification tasks, solve differential equations, and decrease the level of noise in quantum data.

As a side note, in principle any architecture that combines, at a high level, the concepts of quantum computing and artificial neural networks can be called a quantum neural network. In early developments, QNNs were designed by directly translating each component of a classical neural network into a suitable quantum counterpart. However, this is not always feasible: the most notable example is the non-linear activation function common in classical NNs, whereas ordinary quantum unitary dynamics is linear. To introduce non-linearities, measurement, controlled decoherence, or circuit feedback is required; with such processes one may construct a 'quantum perceptron'. Other architectures considered to fall within the QNN category include quantum Boltzmann machines and variationally parametrized circuits such as the Quantum Approximate Optimization Algorithm (QAOA). More recently, kernel methods and nonlinear quantum feature maps are seen as interesting alternatives to perceptron-style QNNs. Whether any of these architectures should or should not be called a QNN is semantically interesting, but in the end what matters is whether they solve ML tasks well.

Despite their many advantages, QNN architectures still face many limitations on NISQ devices. One commonly encountered limitation is the presence of barren plateaus when using gradient-based training methods: the optimization landscape becomes flat during training, preventing the algorithm from finding a path towards the energy minimum. In addition, the high noise levels in deeper quantum circuits limit the accuracy with which costs and gradients can be computed.

In this work, the authors present a comparative analysis of two QNN architectures, namely the Dissipative Quantum Neural Network (DQNN), whose building block (a 'perceptron') is a completely positive map, and the QAOA, both implemented on IBM's NISQ devices via Qiskit. The objective is to evaluate the performance of both methods on tasks such as learning an unknown unitary operator.

In the case of DQNN, perceptron maps act on layers of different qubits, whereas QAOA defines them as a sequence of operations on the same qubits. The networks are implemented using 6 and 4 qubits for DQNN and QAOA respectively, including initialization and measurement. Training was executed in a hybrid manner: at each epoch, the cost was evaluated on the quantum device and then used to update the parameters classically. In the ideal (noise-free) case, the training cost should increase monotonically for the chosen parameters. In this work, DQNN is shown to reach higher validation costs than QAOA. Another contrasting observation is that the validation cost increases with the number of training pairs for DQNN, while QAOA's validation cost is approximately uniformly distributed around its mean. The results show that both networks are capable of generalizing the available information despite the high noise levels; however, the generalization capability of DQNN is more reliable than that of QAOA.

The authors further evaluate and compare the noise tolerance of the two methods. Of the two primary sources of noise, readout noise influences both networks in a similar manner. In the presence of gate noise, however, DQNN retains a higher identity cost, resulting in higher training and validation costs compared to QAOA. This implies that DQNN is comparatively less susceptible to gate noise.

Overall, the work demonstrates that, although both architectures have high noise tolerance, DQNN has more potential in terms of reliability, accuracy, and lower susceptibility to noise than QAOA when implemented on current NISQ devices. Improving the performance of DQNN is strongly tied to improvements in quantum hardware. As hardware becomes more reliable in the near future, with lower noise levels and a reduced qubit count thanks to resettable qubits, DQNNs with multiple layers can be used. Such a DQNN could potentially explore problems involving higher-dimensional unitaries and non-unitary maps.
Quantum circuit complexity is one of the most crucial quantities to consider in quantum computation. It captures the minimal number of steps, and hence the time, needed to implement a given unitary. One can likewise associate quantum circuit complexity with the complexity of preparing a given quantum state from a fiducial initial state. For instance, a quantum state generated by chaotic Hamiltonian evolution is highly complex if the quantum circuit preparing it requires a long time on a quantum computer. When determining the overall complexity of a quantum circuit, an essential factor to account for is the cost of the circuit, taking its design into consideration.

While the above factors contribute to analyzing the complexity of quantum circuits, computing them quantitatively is non-trivial. Although there are (classical) algorithmic procedures that can find a decomposition of a unitary into a quantum circuit of Clifford and T gates, in run-time exponential in circuit size, computing the complexity still requires optimizing over these decompositions. During such decompositions, gate cancellations occur, whereby the impact of a gate is partially compensated by the application of a subsequent, similar gate. One natural question is whether the entanglement created by quantum gates has any bearing on circuit complexity, or on the cost of a unitary.

The author analyzes this relationship in the regime where both the entanglement of a state and the cost of a unitary take small values, based on how the entangling power of individual quantum gates adds up. The work provides a simple lower bound for the cost of a quantum circuit that is tight for small values of the cost. The idea behind the bound comes from the entanglement capabilities of quantum gates: gates that are close to the identity in operator norm have little capability to create entanglement from product states, and their contribution of entanglement to a given entangled state is shown to be correspondingly small. The bound implies that, assuming linear growth of entanglement entropies with time, the cost also increases linearly.
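Schematically, the chaining argument can be written as follows. This is our own shorthand, not the paper's exact statement: $e(U_i)$ denotes the entangling power of gate $U_i$ and $c$ an architecture-dependent constant.

```latex
% A circuit U = U_N \cdots U_1 acting on a product state can raise a
% bipartite entanglement entropy S by at most the summed entangling
% powers of its gates:
S\bigl(U\,|\psi_{\mathrm{prod}}\rangle\bigr)
  \;\le\; \sum_{i=1}^{N} e(U_i)
  \;\le\; c \cdot \mathrm{cost}(U)
\quad\Longrightarrow\quad
\mathrm{cost}(U) \;\ge\; \frac{1}{c}\,
  S\bigl(U\,|\psi_{\mathrm{prod}}\rangle\bigr).
% Hence, entanglement entropies growing linearly in time force the
% circuit cost to grow at least linearly as well.
```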

Furthermore, for Gaussian continuous-variable systems there is a small incremental entangling bound, as well as a harmonic analogue of the above relationship between entanglement and quantum circuit cost. The bound applies both to spin systems and to Gaussian bosonic continuous-variable settings, which are important in approximating non-interacting bosonic theories. A noteworthy observation is that when a quantum many-body system undergoes non-equilibrium dynamics leading to a linear increase in entanglement entropy, the quantum state complexity also increases.

An important consequence of the presented bounds is that one can derive the circuit depth required to produce a given entanglement pattern in a desired final state, for either pure or mixed states. The bounds can also help in assessing the computational performance of analog quantum simulators and in comparing them directly to their digital counterparts. One can argue that, for a precisely defined computational effort, analog and digital quantum simulators achieve similar results. These simple bounds thus provide a useful and versatile tool for various studies of such comparisons and relations.
Wed, 31 Mar 2021 12:00

Modular photonic quantum computing

Rapid progress has been made in recent years in the development of fault-tolerant quantum computing (FTQC) across theoretical foundations, architectural design, and experimental implementations. Most proposed architectures are based on an array of static qubits, where relevant large-scale computation with, for example, superconducting qubits is expected to require vast numbers of physical qubits, occupying a lot of space and control machinery. Directly translating that paradigm to photonic FTQC architectures implies that photons serve as the 'static qubits' on which gates and measurements are performed. However, the long sequences required by FTQC protocols become difficult to process, as photons are short-lived, easily lost, and destroyed upon measurement. This makes the conventional FTQC description unsuitable for photonic quantum computing.

Fusion-based quantum computing (FBQC) is an alternative to standard photonic FTQC architectures that overcomes these limitations. In FBQC, quantum information is not stored in a static array of qubits but periodically teleported from previously generated resource states to freshly generated photons. Hence, even when measured photons are destroyed, their quantum information is preserved and teleported onward. In this work, the authors present a modular and scalable architecture for FBQC which can provide the computational power of thousands of physical qubits. The unit module of the architecture consists of a single resource-state generator (RSG), a few fusion devices, and macroscopic fiber delays with low transmission loss, connected via waveguides and switches. Networks of such modules execute operations by adding thousands of physical qubits to the computational Hilbert space. The authors argue that, pragmatically, "a static qubit-based device and a dynamic RSG-based device (can be considered) equally powerful, if they can execute the same quantum computation in the same amount of time". A single RSG is shown to be much more 'powerful' than a single physical qubit.

The qubits produced by RSGs are encoded as photonic qubits and are combined using a stabilizer code such as a Shor code. The photonic qubits are then routed by waveguides to n-delays, which delay (store) photons for n RSG cycles, thereby acting as a fixed-time quantum memory. This photonic memory increases the number of simultaneously existing resource states available in the total computation space. Fusion devices perform entangling fusion measurements on pairs of photons that enter them. Finally, switches reroute incoming photonic qubits to one of multiple outgoing waveguides; switch settings can be adjusted in every RSG cycle, thereby selecting the operations to be performed.

In contrast to circuits in circuit-based quantum computation (CBQC), photonic FBQC uses fusion graphs to describe the execution of a specific computation. The authors review the structure of simple cubic fusion graphs built from 6-ring graph states, i.e., six-qubit ring-shaped cluster states, as resource states. Each resource state is fused with six other resource states, allowing one fusion per constituent qubit. Another direction being explored is interleaving, in which the same RSG successively produces different fusion-graph resource states. Exploiting different arrangements of RSGs and longer delay lines leads to larger fusion graphs. Furthermore, it is demonstrated that interleaving modules with n-delays increase the number of available qubits by a factor of n, but inevitably decrease the speed of logical operations by the same factor. To counter this, the authors recommend increasing the number of interleaving modules and investigating different arrangements.
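The qubit-count versus logical-speed trade-off of interleaving can be captured by a toy resource count. The function and all numbers below are illustrative, not the paper's figures:

```python
# Toy interleaving resource count: an n-cycle delay line multiplies the
# number of simultaneously available photonic qubits by n, but slows
# logical operations by the same factor.
def interleave(rsg_count, qubits_per_rsg, n_delay, base_rate_hz):
    qubits = rsg_count * qubits_per_rsg * n_delay
    logical_rate_hz = base_rate_hz / n_delay
    return qubits, logical_rate_hz

# One module producing 6-qubit resource states, a 1000-cycle delay:
print(interleave(1, 6, 1000, 1e6))  # -> (6000, 1000.0)
```

This is why the authors suggest adding more interleaving modules rather than only lengthening delays: modules add qubits without the speed penalty.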

These photonic FBQC architectures are not only modular and highly scalable but also cost-efficient, since they reduce the cost of logical operations. Combining this with the interleaving approach further improves feasibility: instead of million-qubit arrays of physical qubits, arrays of disconnected few-qubit devices can be turned into a large-scale quantum computer, provided their qubits are photonic. Such a hybrid architecture repeatedly generates identical few-qubit resource states from matter-based qubits, connecting them into a large-scale fault-tolerant quantum computer. Moreover, it also handles the classical processing associated with error correction and provides high-capacity memory. As quantum technology evolves, larger numbers of high-quality qubits will become available, allowing a transition from small-scale FTQC devices to fully scalable ones. These early FTQC devices are expected to be similar in design to current NISQ devices, albeit much more powerful. Utilizing such approaches in photonic FBQC, along with developments in highly efficient photonic hardware, can make the transition to large-scale fault-tolerant quantum computers a reality in the near future.
With the advent of more powerful classical computational resources, machine learning and artificial intelligence research has seen a resurgence in popularity, and massive progress has been made in recent years in developing useful algorithms for practical applications. Meanwhile, quantum computing research has advanced to a stage where quantum supremacy has been demonstrated experimentally, and algorithmic advantages in, for instance, machine learning have been theoretically proven. One particularly interesting machine learning paradigm is Reinforcement Learning (RL), where agents directly interact with an environment and learn through feedback. In recent years, RL has been used with significant success to assist with several problems in quantum information processing, such as decoding of errors, quantum feedback, and adaptive code design. Conversely, implementing 'quantum' RL using quantum computers has been shown to make the decision-making process for RL agents quadratically faster than on classical hardware.

In most protocols so far, the interaction between the agent and the environment occurs entirely via classical communication. However, theory suggests the possibility of an additional quantum speedup if this interaction can instead take place over a quantum channel. In this work, the authors propose a hybrid RL protocol that enables both quantum and classical information transfer between agent and environment. The main objective is to evaluate the impact of this hybrid model on the agent's learning time relative to RL schemes based solely on classical communication. The work uses a fully programmable nanophotonic processor interfaced with photons for the experimental implementation of the protocol. The setup implements an active feedback mechanism combining quantum amplitude amplification with a classical control mechanism that updates the learning policy.

The setup consists of a single-photon source pumped by laser light, generating pairs of single photons. One photon of each pair is sent to the quantum processor to perform the computation, while the other is sent to a single-photon detector for heralding. Highly efficient detectors with short dead times provide fast feedback. Detection events at the processor output and at the heralding detector are recorded with a time-tagging module (TTM) as coincidence events. The agent and the environment are assigned different areas of the processor, performing the steps of a Grover-like amplitude amplification. The agent is further equipped with a classical control mechanism that updates its learning policy.

Any Grover-like algorithm faces a drop in the amplified success probability beyond its optimal number of iterations. Each agent reaches this optimal point at a different epoch, so one can identify the success probability up to which a quantum strategy is beneficial for all agents over the classical one. The learning time is the average number of interactions until the agent accomplishes a specific task. The setup allows agents to choose the most favorable strategy, switching from quantum to classical as soon as the latter becomes more advantageous. This combined strategy is shown to outperform the purely classical scenario.
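The peak-then-drop behavior behind the switch-over logic can be illustrated with the textbook amplitude-amplification formula: with per-trial success probability p and amplitude sin θ = √p, k rounds give success probability sin²((2k+1)θ). The value of p below is a made-up number; the experimental setup is of course more involved.

```python
import math

def amplified_prob(p_classical, k):
    """Success probability after k rounds of amplitude amplification,
    starting from classical per-trial success probability p_classical."""
    theta = math.asin(math.sqrt(p_classical))
    return math.sin((2 * k + 1) * theta) ** 2

p = 0.01                            # illustrative classical probability
probs = [amplified_prob(p, k) for k in range(12)]
best_k = max(range(12), key=lambda k: probs[k])
print(best_k, round(probs[best_k], 3))  # -> 7 0.995
```

Past k = 7, extra amplification rounds lower the success probability again, which is exactly when the agent should fall back to the classical strategy.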

Such a hybrid model represents a potentially important advantage over previously implemented protocols that are purely quantum or purely classical. The authors put forward photonic architectures in particular as among the most suitable candidates for implementing these learning algorithms, offering compactness, full tunability, and low-loss communication, which enables active feedback for RL algorithms even over long distances. The theoretical protocol, however, is general and shown to be applicable to any quantum computational platform. The results also demonstrate the feasibility of integrating quantum-mechanical RL speed-ups into future complex quantum networks.

Finally, as integrated optics advances towards the fabrication of increasingly large devices, this demonstration could be extended to more complex quantum circuits that process high-dimensional states. This raises hopes for superior performance in increasingly complex learning devices. AI and RL will undoubtedly play an important role in future large-scale quantum communication networks, including a potential quantum internet.
Quantum computers possess a unique potential to outperform their classical counterparts with algorithms based on real-time evolution. Given the intrinsically quantum-mechanical relation between the time and energy domains, increasing attention is being paid to quantum algorithms that use a time-dependent perspective to solve time-independent problems. In this setting, simulating the time-dependent Schrödinger equation is a natural framework to implement.

Presently, there are plenty of quantum algorithms based on solving the time-independent Schrödinger equation to determine Hamiltonian eigenvalues and eigenstates. The majority of these algorithms are classically limited by the exponential scaling of Hilbert space with system size, and require considerable quantum resources to run. So far, methods such as approximate imaginary time evolution and Krylov diagonalization are more widely used in classical simulation of static phenomena than real-time evolution, as the latter has computational limitations, and there are practical obstacles to approaching the ground state during the evolution. However, the states generated through real-time evolution can provide a basis from which ground and excited states can be extracted. In some cases, this method may be faster than other quantum methods that use time evolution as a subroutine for goals other than dynamical behaviour, such as using QPE for spectral analysis.

In this work, the authors propose and analyze variational quantum phase estimation (VQPE), a method for computing ground and excited states using states generated by real-time evolution. The work includes theoretical derivations applying the method to strongly correlated Hamiltonians. The VQPE method comes with a set of equations specifying conditions on the time evolution, with a simple geometrical interpretation. These conditions decouple the eigenstates out of the set of time-evolved expansion states and connect the method to the classical filter diagonalization algorithm. Furthermore, the authors introduce a unitary formulation of VQPE that allows a direct comparison to iterative phase estimation. In this formulation, the number of matrix elements that need to be measured scales linearly, instead of quadratically, with the number of expansion states, thereby reducing the number of quantum measurements.
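The core linear-algebra step can be emulated classically in a few lines. This is our own toy example, not the paper's implementation: time-evolve an easy reference state to build a non-orthogonal expansion basis, assemble the overlap matrix S and the Hamiltonian matrix in that basis, regularize S, and solve the generalized eigenvalue problem. A 2-qubit transverse-field Ising Hamiltonian stands in for the target problem.

```python
import numpy as np

# 2-qubit transverse-field Ising Hamiltonian as a toy target.
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I2 = np.eye(2)
H = -np.kron(Z, Z) - 0.5 * (np.kron(X, I2) + np.kron(I2, X))

dt, n_states = 0.4, 6
psi0 = np.ones(4) / 2.0            # easy-to-prepare reference state

w, V = np.linalg.eigh(H)
def evolve(psi, t):
    """Exact e^{-iHt}|psi> via the eigendecomposition of H."""
    return V @ (np.exp(-1j * w * t) * (V.conj().T @ psi))

# Real-time-evolved, non-orthogonal expansion basis.
basis = [evolve(psi0, k * dt) for k in range(n_states)]

# Overlap and Hamiltonian matrices in that basis.
S = np.array([[bi.conj() @ bj for bj in basis] for bi in basis])
Hm = np.array([[bi.conj() @ (H @ bj) for bj in basis] for bi in basis])

# Regularize S by discarding near-null directions (the role the paper's
# noise regularization plays), then solve H c = E S c.
s_vals, s_vecs = np.linalg.eigh(S)
keep = s_vals > 1e-10
P = s_vecs[:, keep] / np.sqrt(s_vals[keep])
E = np.linalg.eigvalsh(P.conj().T @ Hm @ P)

print(E.min())                     # ≈ -sqrt(2), the exact ground energy
```

On hardware, the entries of S and Hm would be measured (e.g. via Hadamard tests) rather than computed, and the unitary formulation reduces how many of them are needed; the classical post-processing is the generalized eigensolve shown here.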

The authors also analyze the effects of noise on the convergence properties, showing that simple regularization techniques suffice to mitigate them. The VQPE approach was demonstrated on a wide range of molecules of different complexities (simulating the algorithm classically), as well as on the transverse-field Ising model on IBM's quantum simulator and hardware. For several weakly and moderately correlated molecules, as well as the strongly correlated transition-metal dimer Cr2, chemical accuracy for ground-state energies is attained in fewer than ~50 real-time timesteps. This compares favorably with the ~10^6 timesteps required by state-of-the-art classical methods, using orders of magnitude fewer variational parameters.

The results establish VQPE as a natural and efficient quantum algorithm for ground- and excited-state calculations of general many-body systems. On one hand, QPE uses its deeper circuits to achieve Heisenberg-limited energy resolution, which is more efficient for reaching high accuracy in overall run time. On the other hand, for the same number of time steps per circuit, VQPE achieves higher accuracy than idealized QPE. One can conclude that VQPE offers near-term advantages, while QPE's benefits are longer term; this makes VQPE the better candidate for near-term hardware with shorter circuit depths and few qubits available as ancillae. By choosing an optimal time step that generates a linearly independent expansion state with each new time evolution, the variational ansatz can be made compact, which sets a lower bound on the time step size and also minimizes the total simulation time required on NISQ hardware. This compactness, together with its NISQ compatibility, makes VQPE one of the most promising approaches for quantum simulations of many-body systems beyond the reach of classical computation; and since real-time evolution is natural to implement on quantum hardware, it holds immense promise for NISQ implementation.
The “analog quantum simulation” paradigm of quantum computing aims to develop simpler models of a complex quantum system that reproduce the physical attributes of the system in the operational domain of interest, such as its spectrum or phase diagram. The main idea is to simulate a rather complex target Hamiltonian H using a simpler Hamiltonian H’ that can be more easily implemented on practical analog quantum-computational hardware. One advantage of analog quantum simulation is its expected reduced requirements for quantum error correction and precise controls; hence, it is considered an important practical direction in the era of NISQ technology.

The concept of universality when seeking analog simulators is based on the existence of a Hamiltonian H’ in the family that can be used to simulate any local Hamiltonian H. General universal models such as spin-lattice model Hamiltonians can be inefficient to simulate directly, as they may require interaction energies that scale exponentially with system size; this exponential scaling arises in particular when the target Hamiltonian has higher-dimensional, long-range, or even all-to-all interactions. In this work, the authors provide an efficient construction of strongly universal families in which the required interaction energy and all other resources of the 2D simulator scale polynomially rather than exponentially in the size of the target Hamiltonian and the precision parameters, independently of the target’s connectivity. The construction converts the target Hamiltonian into a quantum phase estimation circuit embedded in 1D, which is then mapped back to a low-degree simulating Hamiltonian using the Feynman-Kitaev circuit-to-Hamiltonian construction.
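The mapping back from a circuit to a Hamiltonian relies on the Feynman-Kitaev construction; its textbook form is shown below (the paper employs a variant adapted to the 1D embedding, so this is a sketch of the idea rather than the exact construction used):

```latex
% Feynman-Kitaev Hamiltonian for a circuit U_T \cdots U_1 acting on |\psi_{\mathrm{in}}\rangle
H_{\mathrm{FK}} = H_{\mathrm{in}} + H_{\mathrm{clock}} + H_{\mathrm{prop}}, \qquad
H_{\mathrm{prop}} = \frac{1}{2}\sum_{t=1}^{T}\Bigl(
    \mathbb{1}\otimes\bigl(|t\rangle\langle t| + |t{-}1\rangle\langle t{-}1|\bigr)
  \;-\; U_t\otimes|t\rangle\langle t{-}1|
  \;-\; U_t^{\dagger}\otimes|t{-}1\rangle\langle t|\Bigr)
```

Its ground state is the history state |Ψ⟩ ∝ Σ_{t=0}^{T} (U_t⋯U_1|ψ_in⟩) ⊗ |t⟩, which encodes the entire circuit execution in a static, low-degree Hamiltonian.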

The authors extend this method to simulate any target Hamiltonian with a 1D or 2D Hamiltonian using some of the existing techniques in the literature. Combinations of techniques such as the quantum phase estimation algorithm and circuit-to-Hamiltonian transformations were used in a non-perturbative way, which allows them to overcome the exponential overhead common to previous constructions. The results show that only polynomial overheads in both particle number and interaction energy are sufficient to simulate any local Hamiltonian with arbitrary connectivity by universal Hamiltonians embedded in 1D or 2D.

This work establishes the possibility of efficient universal analog quantum simulation using simple 1D or 2D systems, which we know can be built in practice with good control. The constructions known so far have been far from optimal: for example, existing hardware offers a limited set of interaction types, so simulating general interactions requires building them from that limited set together with ancilla qubits placed in more than one dimension. Polynomial-sized encoding and decoding circuits can be used to simulate 1D analog Hamiltonians, which can be explored further towards achieving strong universality. In this work, it is shown that strongly universal analog quantum simulation is possible: any target Hamiltonian can be simulated efficiently by 1D and 2D universal systems using polynomially many qubits and polynomial interaction energy, which the authors show is tight, since it is impossible to lower the interaction energy to a constant. However, the encoding circuits induce non-local correlations that can affect desirable properties of analog Hamiltonian simulations, such as preservation of the locality of observables, and raise considerations of noise. As an alternative approach, translation-invariance can be relaxed by letting the Hamiltonian interactions have more free parameters to encode the target Hamiltonian.

One interesting takeaway from this research is that analog quantum simulation is actually relevant for many more systems than previously thought, and digital gate-based quantum simulation may not always be the best way to go in the described cases. Further experimental realizations of analog quantum simulators are required to develop methods to simulate all physical systems and tackle classically intractable problems in a practical and efficient way.
Fri, 05 Mar 2021 12:00

Nanophotonic Quantum Computing

In recent years, there have been a number of promising breakthroughs in engineering quantum processors, for example in ion-trap systems and superconducting systems. Physical platforms include programmable machines that can deliver automation, stability, and repeatability to implement quantum algorithms. These machines can now be remotely accessed and loaded with algorithms written in high-level programming languages by users having no in-depth knowledge of the low-level quantum hardware details of the apparatus. These capabilities have rapidly accelerated research that targets application development for near-term quantum computers.

One major limitation of present-day systems is the high level of noise affecting the qubits. Noise severely restricts the efficient application of quantum algorithms, even when the algorithms are in principle compatible with large-scale implementations. Some of these algorithms are efficiently encoded in binary modes called qubits, while others are more efficiently expressed in a model in which each independent quantum system is described by a state in an infinite-dimensional Hilbert space; typical applications of the latter include bosonic error correction codes and Gaussian boson sampling. Photonic hardware possesses great potential for exploring the large-scale physical implementation of such quantum algorithms. An ideal system should be dynamically programmable and readily scalable to hundreds of modes and photons. It should also give access to a class of quantum circuits that become exceedingly hard to simulate efficiently on classical hardware as the system size increases. Presently, no system is known that achieves all of this simultaneously. On one hand, a large photonic cluster state has been demonstrated, with the caveat of being limited to all-Gaussian states, gates, and measurements. On the other, single-photon-based experiments on integrated platforms suffer from non-deterministic state preparation and gate implementation, which hinders their scalability.

The authors of this paper offer a full-stack solution consisting of hardware-software co-design based on a programmable nanophotonic chip that combines the capabilities of an ideal system in a single scalable and unified machine. The work involves both experimental research and theoretical modeling of the proposed hardware in order to deliver the required capabilities: programmability, high sampling rate, and photon-number resolution. The programmable chip operates at room temperature and is interfaced with an automated control system for executing many-photon quantum circuits. The procedure involves initial state preparation, gate sequence implementation, and readout, followed by verification of the non-classicality of the device’s output. The photonic chip generates squeezed states and is coupled with a custom modulated pump laser source with an active locking system for the on-chip squeezer resonators. Digital-to-analog converters are further used for tuning the asymmetric Mach-Zehnder interferometer filters and programming the four-mode interferometer. This is coupled with a real-time data acquisition system for detector readout. Finally, a master controller (a conventional server computer) runs custom-developed control software coordinating the operation of all the hardware. By means of strong squeezing and high sampling rates, multi-photon detection events are observed with photon numbers and rates exceeding previous quantum-optical demonstrations.
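The Gaussian part of such a chip (squeezers followed by a programmable interferometer) can be modelled classically by propagating a covariance matrix. The sketch below uses illustrative squeezing values and a random four-mode unitary in place of the programmed one (convention: vacuum covariance I/2, xxpp ordering):

```python
import numpy as np

rng = np.random.default_rng(7)

# Four single-mode squeezed vacua (illustrative squeezing parameters, not the chip's)
r = np.array([0.8, 0.8, 0.6, 0.6])
# xxpp-ordered covariance matrix; vacuum corresponds to I/2
V = 0.5 * np.diag(np.concatenate([np.exp(-2 * r), np.exp(2 * r)]))

# Haar-random 4-mode interferometer as a stand-in for the programmable unitary
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)

# Symplectic matrix of a passive linear-optical unitary in xxpp ordering
S = np.block([[U.real, -U.imag], [U.imag, U.real]])
V_out = S @ V @ S.T

# Mean photon number per mode: n_i = (V_xx[i,i] + V_pp[i,i] - 1) / 2
n_in = np.sinh(r) ** 2
n_out = 0.5 * (np.diag(V_out)[:4] + np.diag(V_out)[4:] - 1)
print(n_out.sum(), n_in.sum())  # a passive interferometer conserves total mean photons
```

Sampling the photon-number distribution of the output state (the GBS task itself) is the part believed to be classically hard; only the Gaussian moments propagated here remain efficiently computable.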

The authors used the platform to carry out proof-of-principle implementations of Gaussian boson sampling (GBS), resolving molecular vibronic spectra, and solving graph similarity problems, all of which use samples from the device to infer a property of an object central to the application. For GBS, the samples provide information about the nonclassical probability distribution produced by the device. The vibronic spectra algorithm uses outputs from the device to obtain molecular properties, while for graph similarity the samples reveal information on graph properties. In all demonstrations, the device is programmed remotely using the Strawberry Fields Python library. The authors also develop a more detailed theoretical model of the device involving two Schmidt modes per squeezer, non-uniform loss before the unitary transformation, and excess noise. Such noise modelling is still relatively rare in the nanophotonic quantum computing community, and these additions are potentially valuable for understanding the algorithmic performance of such systems compared to very different hardware implementations.

The proposed device marks a significant advance in scaling such nanophotonic chips to a larger number of modes. One of the greatest challenges in scaling to a system of this size is maintaining acceptably low losses in the interferometer. With precise chip-fabrication tools and new designs for integrated beam splitters and phase shifters, one can achieve an order-of-magnitude improvement in the loss per layer of the interferometer. Furthermore, the inclusion of tunable single-mode (degenerate) squeezing and displacement would add a significant upgrade, permitting the generation of arbitrary Gaussian states. Such scaling and upgrades constitute the next steps for near-term photonic quantum information processing demonstrations. Combined with rapid advancements in photonic chip fabrication, such demonstrations coincide with new optimism towards photonics as a platform for advancing the frontier of quantum computation.
In recent years, Noisy Intermediate-Scale Quantum (NISQ) systems have been broadly studied, with a particular focus on investigating how near-term devices could outperform classical computers for practical applications. A major roadblock to obtaining a relevant quantum advantage is the inevitable presence of noise in these systems. Therefore, a major focus of NISQ research is the exploration of noise in currently available and realistic devices and how the effects of such noise can be mitigated. A growing body of work in this direction proposes various error correcting and error mitigating protocols with the objective of limiting this unwanted noise and possibly achieving error suppression. As NISQ devices cannot support full error correction, analyzing the noise and finding ways to suppress it will increase the chances of obtaining tangible benefits from NISQ computation. In this edition of Active Quantum Research Areas, we cover several recent and promising papers in this direction.

While techniques like dynamical decoupling have the potential to partially suppress quantum errors, their effectiveness is still limited by errors that occur at unstructured times during a circuit. Furthermore, other commonly encountered noise mechanisms, such as cross-talk and imperfectly calibrated control pulses, can also decrease circuit execution fidelity. Recent work by [1] discusses an error mitigation strategy named ‘quantum measurement emulation’ (QME), a feed-forward control technique for mitigating coherent errors. The technique employs stochastically applied single-qubit gates to ‘emulate’ quantum measurement along the appropriate axis, while simultaneously making this process less sensitive to coherent errors. Moreover, it uses the stabilizer code formalism to enable error suppression, leading to the improved circuit execution fidelity observed in this work. Since QME does not require the computation of correction gates, as needed in randomized compiling, it can only protect against errors that rotate the qubit out of the logical codespace. The technique also appears effective against coherent errors occurring during twirling gates. For arbitrarily generated circuits, QME can outperform simple dynamical decoupling schemes by addressing discrete coherent errors. Moreover, it does not require costly measurements and feedback, making it cost-effective as well.
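The core mechanism of QME can be illustrated in a few lines: stochastically applying a stabilizer gate (here Z, with probability 1/2) averages to the same channel as an unrecorded Z measurement, converting the coherence built up by a small over-rotation into an incoherent error rather than letting it accumulate. This toy single-qubit model is ours, not the circuit-level protocol of [1]:

```python
import numpy as np

Z = np.diag([1.0, -1.0])

def emulate_z_measurement(rho):
    """Average over stochastically applying I or Z with p = 1/2:
    equivalent to an unrecorded Z measurement, i.e. full dephasing."""
    return 0.5 * (rho + Z @ rho @ Z.conj().T)

# State corrupted by a small coherent X-rotation (over-rotation error)
eps = 0.1
Rx = np.array([[np.cos(eps / 2), -1j * np.sin(eps / 2)],
               [-1j * np.sin(eps / 2), np.cos(eps / 2)]])
rho = Rx @ np.diag([1.0, 0.0]) @ Rx.conj().T

rho_qme = emulate_z_measurement(rho)
print(np.abs(rho[0, 1]), np.abs(rho_qme[0, 1]))  # off-diagonal coherence is removed
```

The populations (diagonal of rho) are untouched; only the coherent part of the error, which would otherwise add up quadratically over repeated gates, is projected away.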

Apart from passively mitigating errors, another approach is active error suppression. Among recently introduced methods, virtual distillation is capable of exponentially suppressing errors by preparing multiple noisy copies of a state and virtually distilling a more purified version. Although this technique requires additional (ancilla) qubits, qubit efficiency can be achieved by resetting and reusing qubits. One such method, proposed by [2] and named Resource-Efficient Quantum Error Suppression Technique (REQUEST), is an alternative to virtual distillation methods. For N-qubit states, the total qubit requirement of REQUEST is 2N + 1 for any number of copies, instead of the MN + 1 qubits required by past approaches to use M copies. The optimal number of copies is then estimated using near-Clifford circuits, by comparing results mitigated with different values of M to exact quantities. It was observed that error suppression increases with the optimal number of copies, perhaps exponentially. This suggests that the method can be relevant for larger devices where sufficient qubits and connectivity are available. However, one drawback of the method is the increase in the overall depth of the quantum circuit required to achieve the reduction in qubit resources, so further research on this trade-off would be interesting.
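The qubit accounting behind REQUEST is simple to state explicitly (the helper names below are ours, for illustration):

```python
# Qubit counts for virtually distilling M copies of an N-qubit state
def qubits_virtual_distillation(n, m):
    return m * n + 1        # all M copies held simultaneously, plus one ancilla

def qubits_request(n, m):
    return 2 * n + 1        # constant in M: extra copies come from qubit reset/reuse

for m in (2, 4, 8):
    print(m, qubits_virtual_distillation(10, m), qubits_request(10, m))
```

The trade-off noted above appears here implicitly: the copies removed from the qubit count reappear as additional circuit depth spent on reset and re-preparation.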

Another recent work concerning error suppression is presented in [3]. This work proposes a technique to exponentially suppress bit- or phase-flip errors with repetitive error correction. The authors implement 1D repetition codes embedded in a 2D grid of superconducting qubits. The technique requires the errors to be local, and the performance needs to be maintained over many rounds of error correction - two major outstanding experimental challenges. The results demonstrate a reduction in the logical error per round of the repetition code by more than 100× when increasing the number of qubits from 5 to 21. This exponential suppression of bit- or phase-flip errors is shown to be stable over 50 rounds of error correction. A stable fraction of detection events was also observed throughout the 50 rounds for the 21-qubit system, which is important for demonstrating the value of error correction. The authors also perform error detection using a small 2D surface code. Both the experimentally implemented 1D and 2D codes agree with numerical simulations based on a simple depolarizing error model, which supports the view that superconducting qubits may be on a viable path towards fault-tolerant quantum computing. It would be interesting to compare the performance on other types of hardware as well.
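The exponential suppression reported in [3] can be illustrated with the simplest classical model of a distance-n repetition code: independent bit flips followed by majority vote. This ignores circuit-level noise and repeated syndrome rounds, but shows the characteristic decay with code size:

```python
from math import comb

def logical_error_rate(p, n):
    """Majority-vote failure probability of an n-bit repetition code
    under independent bit flips with probability p (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range((n + 1) // 2, n + 1))

p = 0.05  # illustrative physical error rate
for n in (1, 3, 5, 11, 21):
    print(n, logical_error_rate(p, n))
# Below threshold (p < 1/2) the logical rate falls roughly as (p/(1-p))^((n+1)/2)
```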

One of the potential benefits and long-term goals of error correction is attaining scalable quantum computing. However, under error correction, logical error rates only decrease with system size when physical errors are sufficiently uncorrelated. One limiting factor for scalability is the creation of leakage states: non-computational states created by the excitation of unused higher energy levels of the qubits during computation. Particularly for superconducting transmon qubits, this leakage mechanism opens a path to errors that are correlated in space and time. To overcome this, the authors of [4] propose a reset protocol that returns a qubit to the ground state from all relevant higher-level states. It employs a multi-level reset gate using an adiabatic swap operation between the qubit and the readout resonator, combined with a fast return. The authors claim a fidelity of over 99% for qubits starting in any of the first three excited states, while the gate error is accurately predicted by an intuitive semi-classical model. The experiments used only existing hardware for normal operation and readout, and since no strong microwave drives that might induce crosstalk are involved, the reset protocol is implementable on large-scale systems. Its performance is tested with the bit-flip stabilizer code, investigating the accumulation and dynamics of leakage during error correction. The study reveals that applying reset reduces the magnitude of correlations, leading to lower rates of logical errors and improved scaling and stability of error suppression as the number of qubits is increased. Optimizing gates and readout for minimal leakage is therefore a necessary strategy, and the correlated nature of leakage errors makes reset protocols critical for quantum error correction.

Error correction and error mitigation are both valid paradigms that will be required on the road to useful quantum computing. Current NISQ devices, however, cannot support full error correction for circuits deep and wide enough to be useful; therefore, more attention has been given to error mitigation strategies that attempt to suppress any type of noise as much as possible. At the moment, research is focused on the reduction of noise in gate, initialization, and measurement operations, in order to have more reliable information about the state of the qubits during computation. Noise processes like leakage to non-computational states and crosstalk between neighbouring qubits are deemed extremely important, which has led to the proposal of active reset and other qubit-control techniques. Experiments with small devices consisting of up to 20 qubits have been performed in order to: a) show the advantages of error correction in combating leakage and crosstalk in the setting of repeated stabilizer measurements, and b) show the advantages of error mitigation through techniques like reset and reuse of qubits and the conversion of coherent errors into incoherent ones. Nevertheless, it is clear that achieving exponential noise suppression in systems of practically relevant size is far from straightforward and will require advanced error correction and error mitigation techniques, even though experiments with small systems indicate that the aforementioned techniques can provide high levels of suppression. As mentioned in [3], experimental results on both 1D and 2D codes show evidence of being within striking distance of noise suppression (as defined by the surface code threshold).
It will be particularly interesting to see in future research whether a fixed number of physical qubits is best used for error correction alone or combined with error mitigation techniques.

References:
[1] Greene et al., “Error mitigation via stabilizer measurement emulation”, arXiv:2102.05767 (Feb. 2021)
[2] P. Czarnik et al., “Qubit-efficient exponential suppression of errors”, arXiv:2102.06056 (Feb. 2021)
[3] Z. Chen et al., “Exponential suppression of bit or phase flip errors with repetitive error correction”, arXiv:2102.06132 (Feb. 2021)
[4] M. McEwen et al., “Removing leakage-induced correlated errors in superconducting quantum error correction”, arXiv:2102.06131 (Feb. 2021)
Optimizing quantum algorithms on near-term noisy intermediate-scale quantum (NISQ) devices is an essential requirement for demonstrating quantum advantage over existing classical computing. The capabilities of these devices are constrained by high noise levels and limited error mitigation. Combinatorial optimization on quantum processors is one promising application route despite these constraints. Among the existing approaches to optimization, the most notable are the Quantum Approximate Optimization Algorithm (QAOA) and variational quantum algorithms, especially for eigenvalue problems with high complexity.

The authors in this work propose an iterative “Layer VQE (L-VQE)” approach, inspired by the well-known Variational Quantum Eigensolver (VQE). The work conducts numerical studies of circuits with up to 40 qubits and 352 parameters (a classically hard regime), using the matrix product state (MPS) representation to perform large-scale simulations of the quantum circuits. The performance of L-VQE is assessed using a noisy simulator of a trapped-ion quantum computer.

It has been proven in the literature that for a graph with n vertices, solving the k-communities modularity maximization problem requires kn qubits when the problem is encoded with the well-known Ising-model Hamiltonian. The authors of this paper propose a novel formulation that requires only n log(k) qubits. They further compare the performance of L-VQE with QAOA, which is widely considered a strong candidate for quantum advantage in applications on NISQ computers. However, the many-body terms in the Hamiltonian make the problem harder to implement in the QAOA setting. The numerical results suggest that QAOA achieves comparatively lower approximation ratios and requires significantly deeper circuits. This gap can be closed by the L-VQE approach on NISQ devices.
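The qubit saving of the proposed formulation over the standard Ising encoding is easy to make concrete (the helper names are ours, for illustration):

```python
from math import ceil, log2

def qubits_one_hot(n, k):
    """Standard Ising-model (one-hot) encoding: k qubits per vertex."""
    return k * n

def qubits_binary(n, k):
    """Binary-style encoding of the community label: ceil(log2 k) qubits per vertex."""
    return n * ceil(log2(k))

for k in (2, 4, 8, 16):
    print(k, qubits_one_hot(100, k), qubits_binary(100, k))
```

For a 100-vertex graph with 16 communities this is 400 qubits instead of 1600; the price is the many-body Hamiltonian terms mentioned above, which the binary labels introduce.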

The proposed L-VQE algorithm starts from a relatively simple and shallow hardware-efficient ansatz with a small number of parameterized gates and then adds layers to the ansatz systematically. This differs from most VQE approaches, in which the ansatz is fixed upfront. The work claims that this approach makes the ansatz more expressive while reducing the optimization overhead. Furthermore, the numerical results suggest that adding layers to the ansatz increases the probability of finding the ground state, or a state sufficiently close to it. The approach is simple enough to be generalized to different quantum architectures. It is numerically shown that standard VQE is more likely to fail in the presence of sampling noise, whereas L-VQE is more robust under sampling noise.
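A minimal sketch of the layer-wise idea, on a toy 2-qubit Hamiltonian of our own choosing (not the paper's problem instance): each appended layer is parameterized so that all-zero angles give the identity, so each re-optimization starts from the previous optimum and the energy can only improve:

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative 2-qubit problem Hamiltonian
Z = np.diag([1.0, -1.0]); X = np.array([[0.0, 1.0], [1.0, 0.0]]); I2 = np.eye(2)
H = np.kron(Z, Z) + 0.5 * np.kron(X, I2) + 0.5 * np.kron(I2, X)

def ry(t):
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

def rzz(t):  # two-qubit ZZ rotation; identity at t = 0
    return np.diag(np.exp(-0.5j * t * np.array([1.0, -1.0, -1.0, 1.0])))

def ansatz_state(params):
    """Layered ansatz: each layer is RY x RY followed by a ZZ rotation,
    so a layer with all-zero angles is exactly the identity."""
    psi = np.zeros(4, dtype=complex); psi[0] = 1.0
    for t0, t1, tz in params.reshape(-1, 3):
        psi = rzz(tz) @ np.kron(ry(t0), ry(t1)) @ psi
    return psi

def energy(params):
    psi = ansatz_state(params)
    return float((psi.conj() @ H @ psi).real)

# L-VQE loop: optimize, then append an identity-initialized layer and reoptimize
params, energies = np.zeros(3), []
for layer in range(3):
    params = minimize(energy, params, method="BFGS").x
    energies.append(energy(params))
    print(layer + 1, energies[-1])
    params = np.concatenate([params, np.zeros(3)])  # new layer starts as identity

print("exact ground energy:", np.linalg.eigvalsh(H)[0])
```

The identity-initialized warm start is what distinguishes this from restarting a deeper fixed ansatz from scratch: the energy sequence is non-increasing by construction.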

Lastly, further numerical studies in this work indicate that an ansatz with entanglement performs better than one without, which is intuitive given that entanglement is an important resource for quantum computing; even when the target optimal state is a product state, the optimization may be guided better by an entangling ansatz. These results provide insight into the introduction of additional entangling parameters in VQE for classical problems. Entanglement is proposed to break down barriers in the optimization landscape, making it more convex and therefore more amenable to simple local outer-loop optimizers. This contrasts with previous results in which no beneficial effect of entanglement was observed, a difference that underlines the importance of the parameterization choice and the overall VQE procedure design for the success of such methods.
In the Noisy Intermediate-Scale Quantum (NISQ) era, in the absence of large-scale quantum error correction, the number of gates that can be applied while maintaining computational coherence is strongly limited by hardware noise and decoherence. In an attempt to alleviate some of these detrimental effects, current generations of quantum algorithms often rely on a hybrid classical-quantum approach. Such approaches consider a trial quantum state (ansatz state) with a tractable number of parameters and relatively short circuit depth. These parameters are then optimized so as to approximate a target state as accurately as possible. In most such applications to date, the target state was variationally optimized to represent the lowest-energy eigenstate (ground state) of some quantum Hamiltonian.

However, one can also envision simulating unitary time evolution (or ‘dynamics’) with such variational algorithms. The authors of today’s paper first reference the time-dependent variational algorithm (TDVA), which encodes the state into a variational circuit and iteratively updates the parameters by solving the corresponding equation of motion. A significant drawback of that existing algorithm is that its cost scales quadratically in the total number of variational parameters.

To tackle this problem, the authors introduce a novel hybrid algorithm to simulate the real-time evolution of quantum systems using parameterized quantum circuits. Their method, called “projected Variational Quantum Dynamics” (p-VQD), realizes an iterative, global projection of the exact time evolution onto the parameterized manifold. In the small time-step limit, the approach is equivalent to McLachlan’s variational principle and uses it to optimize all parameters at once. The algorithm overcomes the drawbacks of existing approaches, as it is both global (it optimizes all parameters at once) and efficient (it scales linearly with the number of parameters). Moreover, it does not require auxiliary (ancilla) qubits, and the depth of the circuit remains constant throughout the simulation. The authors use circuit differentiation to compute gradients analytically and use them for gradient-descent optimization.
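The projection step can be sketched on a single qubit, where the ansatz Rx(θ)|0⟩ happens to be exact for H = X: at each step the exactly evolved state is projected back onto the variational manifold by maximizing the overlap. The toy model, parameters, and classical statevector optimization below are ours; on hardware the overlap would be estimated with a circuit:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize_scalar

X = np.array([[0.0, 1.0], [1.0, 0.0]])
H = X                                   # toy Hamiltonian (illustrative)
dt = 0.05
U_dt = expm(-1j * H * dt)               # exact short-time propagator

def ansatz(theta):                      # Rx(theta)|0>
    return np.array([np.cos(theta / 2), -1j * np.sin(theta / 2)])

def infidelity(theta_new, theta_old):
    target = U_dt @ ansatz(theta_old)   # exactly evolve the current variational state
    return 1.0 - np.abs(np.vdot(ansatz(theta_new), target)) ** 2

# p-VQD loop: project the exact evolution back onto the variational manifold
theta, thetas = 0.0, [0.0]
for _ in range(40):
    res = minimize_scalar(lambda t: infidelity(t, theta),
                          bounds=(theta - 0.5, theta + 0.5), method="bounded")
    theta = res.x
    thetas.append(theta)

# For H = X the ansatz is exact: theta(t) = 2t, hence <Z>(t) = cos(2t)
t_final = 40 * dt
print(np.cos(thetas[-1]), np.cos(2 * t_final))
```

With a multi-parameter ansatz the per-step optimization runs over all parameters simultaneously, which is the “global” aspect the authors emphasize.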

This global approach potentially extends the scope of existing efficient variational methods, that instead typically rely on the iterative optimization of a restricted subset of variational parameters. The authors claim that this approach can be particularly advantageous over existing global optimization algorithms based on the time-dependent variational principle.

They have shown that the algorithm is asymptotically more hardware-efficient than the standard variational algorithm while retaining higher accuracy. Currently, a drawback of this method is that the circuit constructed on the quantum device is approximately twice as deep as the ansatz used to represent the system. However, by suitably controlling the number of two-qubit gates in the chosen ansatz, the authors note that p-VQD is already implementable for simulating small quantum systems on available devices.

One possible application of the approach used in this work is to study the dynamical properties of two-dimensional interacting systems which is a notoriously difficult problem for classical computation. Similar to all other variational algorithms, the choice of the right parametrization is fundamental for the algorithm to succeed. In this sense, having an efficient quantum algorithm to perform variational time evolution is essential to compare to classical results obtained with variational states based on tensor or neural networks.

Copyright © Qu & Co