A field of quantum information that has recently gained a lot of attention is Quantum Machine Learning (QML). One reason QML is considered an exciting direction is that quantum information processing promises scaling advantages over classical methods, so that common machine learning tasks may be carried out more efficiently on a quantum computer. In early works, the primary focus of research in QML was on speeding up linear algebra subroutines, although it was later proved that their runtime scales polynomially in the inverse of the allowed error, and practical use-cases are limited by the large pre-factor overhead of fault-tolerant quantum computing (FTQC) requirements. An alternative approach is to use a quantum device to define and implement the function class, while the optimization is performed on a classical computer. This can be achieved by employing Quantum Neural Networks (QNNs), i.e. parameterized quantum circuits; however, the optimization of QNNs is challenging because of the non-convex nature of the optimization landscape and because of gradient-based optimization inefficiencies known as barren plateaus. A related approach that allows for convex optimization is the quantum kernel method, which uses a predefined way of encoding the data into the quantum system and defines the kernel as the inner product of two quantum states.
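
As a minimal illustration of that last point (a sketch with a toy feature map, not the specific embedding used in the paper), one can encode a scalar feature into a single-qubit state and define the kernel as the squared overlap of two encoded states; on hardware, each entry of the resulting Gram matrix would be estimated from a finite number of measurements rather than computed exactly.

```python
import numpy as np

def feature_state(x: float) -> np.ndarray:
    """Toy single-qubit feature map: |psi(x)> = RY(x)|0> (illustrative choice)."""
    return np.array([np.cos(x / 2), np.sin(x / 2)])

def quantum_kernel(x1: float, x2: float) -> float:
    """Kernel value as the squared overlap |<psi(x1)|psi(x2)>|^2."""
    return np.abs(np.vdot(feature_state(x1), feature_state(x2))) ** 2

# Gram matrix for a small dataset; a real device would estimate each entry
# from repeated measurements (e.g. via a SWAP or inversion test).
X = np.linspace(0, np.pi, 5)
K = np.array([[quantum_kernel(a, b) for b in X] for a in X])
print(np.round(K, 3))
```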

It is generally accepted that no single algorithm can efficiently solve many different types of problems, even if it can efficiently solve some of them. Likewise, a standard inductive bias in ML will struggle to learn discontinuous functions, as it primarily prefers continuous ones. In order for QML models to outperform classical ML models, an inductive bias is required that cannot be encoded (efficiently) on a classical machine.
The authors in this work provide a theoretical analysis of the inductive bias of quantum machine learning models based on the spectral properties of quantum kernels, along with experimental verification. The objective is to explore how quantum advantages relate to the classical concept of inductive bias. The function class considered in this work consists of functions in a reproducing kernel Hilbert space (RKHS) whose kernel is evaluated by a quantum computation.
Interestingly, these RKHS are classically tractable in some regimes even for high-dimensional or infinite-dimensional cases, which implies that the ability to evaluate such kernels on a quantum device does not by itself guarantee a quantum advantage. Rather, a stricter requirement needs to be formulated.

For kernel methods, the qualitative concept of inductive bias can be formalized by analyzing the spectrum of the kernel and relating it to the target function. The results in this work show that the generalization of quantum kernel methods fails as soon as the data embedding into the quantum Hilbert space becomes too expressive. By projecting the quantum kernel, it is possible to construct inductive biases that are hard to create classically; however, the fluctuations of the reduced density matrix around its mean are shown to vanish exponentially in the number of qubits. Since the value of the kernel is estimated from measurements, exponentially many measurements are needed to resolve these exponentially small variations. Thus, quantum kernels face the same limitation as other QNN architectures: as the size of the quantum system (number of qubits) grows, the vanishing-gradient problem (barren plateaus) reappears.
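
The spectral statement can be made concrete by diagonalizing the kernel's Gram matrix: a nearly flat eigenvalue spectrum means the kernel favors no particular functions (weak inductive bias and poor generalization), while a rapidly decaying spectrum concentrates the model on a few functions. The NumPy sketch below contrasts the two situations with synthetic data; it is illustrative only and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def spectrum(K: np.ndarray) -> np.ndarray:
    """Eigenvalues of a kernel Gram matrix, normalized to sum to one."""
    evals = np.linalg.eigvalsh(K)[::-1]
    return evals / evals.sum()

n = 50
# 'Expressive' embedding: states that are nearly random in a large Hilbert space,
# so the Gram matrix is close to the identity and its spectrum is nearly flat.
states = rng.normal(size=(n, 2**8)) + 1j * rng.normal(size=(n, 2**8))
states /= np.linalg.norm(states, axis=1, keepdims=True)
K_flat = np.abs(states @ states.conj().T) ** 2

# 'Biased' kernel: a smooth RBF kernel on 1D data, whose spectrum decays fast.
x = np.linspace(0, 1, n)
K_biased = np.exp(-(x[:, None] - x[None, :]) ** 2 / 0.1)

print("flat spectrum head:  ", np.round(spectrum(K_flat)[:5], 3))
print("biased spectrum head:", np.round(spectrum(K_biased)[:5], 3))
```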

In contrast to other methods, it is the spectral properties of the RKHS, rather than its dimensionality, that determine the feasibility of learning. To enable learning, it is required to consider models with a stronger inductive bias. The results indicate that an exponential advantage can be achieved in a particular example where one knows that the data comes from a single-qubit observable and constrains the RKHS accordingly.
The results suggest that one can only achieve a quantum advantage if the data-generating process is known, cannot be encoded easily by classical methods, and this information can be used to bias the quantum model.

This work provides guidance for obtaining quantum advantage on a classical dataset with the quantum kernel architecture. Unfortunately, for large quantum systems, quantum kernels cannot avoid the requirement of exponentially many measurements to evaluate the value of the kernel. Therefore, unless one knows how the data generation occurs, no advantage over classical algorithms can be obtained. Quantum computers with error correction can potentially assist in defining kernels with a strong bias that do not require exponentially many measurements. However, even for fully coherent quantum computers, it remains a challenge to efficiently encode a strong inductive bias about a classical dataset. It is speculated that the efficiency of quantum kernels can be improved when working with quantum data instead of classical data. However, regardless of the nature of the data, it is still unclear whether utilizing QNN architectures can improve supervised learning on classical datasets.
In recent years, there have been significant developments in the field of quantum algorithms, especially in applications involving quantum chemistry and representations of molecular Hamiltonians. However, existing methods for simulating quantum chemistry (especially those leveraging simple basis functions like plane waves) do not remain feasible when scaled towards the continuum limit. This is because the majority of quantum computing algorithms for chemistry are based on second quantized simulations, where the required number of qubits scales linearly with the number of spin-orbital basis functions.

To overcome this limitation, first quantized simulations of chemistry have been proposed, where the idea is to track which basis state each particle is in, instead of storing the occupancy of each basis state. This is advantageous because the number of qubits needed to represent the state scales only logarithmically with the number of basis functions. Such simulations are also well suited to cases where the entanglement between the electronic and nuclear subsystems is non-negligible.
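
A back-of-the-envelope comparison of the two encodings (ignoring ancilla registers and constant factors; the particle and basis-set numbers below are hypothetical):

```python
import math

def second_quantized_qubits(num_basis: int) -> int:
    # One qubit per (spin-)orbital basis function.
    return num_basis

def first_quantized_qubits(num_particles: int, num_basis: int) -> int:
    # Each particle register stores a basis-function index in binary.
    return num_particles * math.ceil(math.log2(num_basis))

eta, N = 100, 10**6  # hypothetical electron count and plane-wave count
print(second_quantized_qubits(N))      # 1,000,000 qubits
print(first_quantized_qubits(eta, N))  # 100 * 20 = 2,000 qubits
```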

In this work, the authors analyze the finite resources required to implement two first quantized quantum algorithms for chemistry: block encodings for the qubitization and interaction-picture frameworks. The objective is to compile and optimize these algorithms in a plane wave basis within an error-correctable gate set, as well as to develop new techniques that reduce the complexity. For qubitization of the Hamiltonian, control registers are used to select the momenta as well as the individual bits to multiply, which significantly decreases the multiplication cost. The results show that the qubitization algorithm requires much less surface-code spacetime volume for simulating millions of plane waves than the best second quantized algorithms require for simulating hundreds of Gaussian orbitals. In the interaction-picture case, the cost of computing the total kinetic energy is reduced by computing the sum of squares of momenta at the beginning.

The work shows that the total state-preparation cost is reduced by a factor of three when one assumes that state preparation is only needed for the kinetic energy operator, rather than using the standard amplitude amplification for the total energy. The number of bits needed for the discretization of the time intervals is also reduced by a factor of two, following prior well-established methods. The main complexity is therefore concentrated in selecting the momenta registers. Unlike previous approaches, only the dynamic degrees of freedom of the electrons that participate in reactivity are employed by the algorithm, hence reducing the encoding complexity. This is particularly advantageous for large numbers of electrons in the qubitization case, and despite the better asymptotic scaling of the interaction-picture-based method, the qubitization-based method proves to be more practical. The numerical experiments also reveal that these approaches require significantly fewer resources to reach comparable accuracy than second quantized methods. Another interesting finding is that the first quantized approaches developed here may give lower Toffoli complexities than previous work for realistic simulations of both material Hamiltonians and non-periodic molecules, suggesting better fault-tolerant viability than second quantized methods.

This work provides the first explicit circuits and constant factors for any first quantized quantum simulation algorithm, which is a promising direction for simulating realistic materials Hamiltonians within quantum error correction. Perhaps more impressively, the authors have also improved the constant factors: the results demonstrate a reduction in circuit complexity of roughly a thousandfold compared to naive implementations for modest-sized systems. The work also provides insights into the resources required to simulate various molecules and materials and gives a first impression of a first quantization-based algorithm for preparing eigenstates for phase estimation with the required overlap with the desired eigenstate. This suggests many potential advantages over second quantization-based algorithms, as is the case in the continuum limit.

In addition to the progress made by the current work, many potential further improvements to the approach have already been proposed by the authors, such as modifying the algorithm to encode pseudopotentials specific to the problem, relating it to bonds and chemical reactivity. Furthermore, using a non-orthogonal reciprocal lattice and efficient convergence in the thermodynamic limit might be a good direction for future research. The presented algorithms could also be adapted to more natively suit systems of reduced periodicity, even though the underlying representation is based on plane waves. The authors acknowledge that relatively little is known about state preparation in first quantization as compared to second quantization. Future work may investigate whether straightforward translations from the latter body of work to the present setting will be sufficiently efficient. Although this work focuses on preparing eigenstates for energy estimation, it can also be extended to the simulation of (non-)Born-Oppenheimer dynamics.

The paper offers interesting insights and significant progress in designing first-quantization quantum chemistry simulations on fault-tolerant hardware. Several strong advantages over second-quantization methods are highlighted, and it will be interesting to see how this field evolves further.
Studying quantum many-body systems involves predicting the properties of systems containing many quantum particles, based on the principles of quantum mechanics. In general, the many-body problem describes a large category of physical problems, with fundamental examples found in quantum chemistry, condensed matter physics, and materials science. In order to simulate many-body systems, one needs to be able to efficiently prepare and evolve certain entangled multipartite states.

One of the most appealing families of multipartite states is that of Tensor Network States (TNS), a class of variational wave functions in which the wave function is encoded as a contraction of a network of individual tensors. They are typically used to efficiently approximate the ground states of local Hamiltonians. The best-known class of TNS is that of Matrix Product States (MPS), which corresponds to a one-dimensional geometry of TNS; higher-dimensional generalizations of these states are known as Projected Entangled Pair States (PEPS). All of these states are characterized by their bond dimension and are ground states of local, frustration-free parent Hamiltonians. This implies that, in the absence of degeneracy, one can check the successful preparation of the state by merely measuring a set of local observables.
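
As a minimal illustration of what "a tensor contraction of a network of individual tensors" means in the MPS case (a generic sketch, unrelated to the specific state family introduced in the paper), each site carries a tensor of shape (bond, physical, bond), and a computational-basis amplitude is obtained by multiplying the corresponding matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
n_sites, d, chi = 6, 2, 4  # sites, physical dimension (qubit), bond dimension

# Random MPS tensors A[i] with shape (chi_left, d, chi_right); open boundaries.
dims = [1] + [chi] * (n_sites - 1) + [1]
tensors = [rng.normal(size=(dims[i], d, dims[i + 1])) for i in range(n_sites)]

def amplitude(bitstring):
    """Contract the MPS network for one computational-basis amplitude."""
    mat = np.eye(1)
    for A, s in zip(tensors, bitstring):
        mat = mat @ A[:, s, :]
    return mat[0, 0]

# Full state vector (exponential in n_sites, feasible only for tiny examples).
psi = np.array([amplitude([(k >> i) & 1 for i in range(n_sites)])
                for k in range(d ** n_sites)])
psi /= np.linalg.norm(psi)
print(psi.shape)  # (64,)
```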

There exist several natural connections between Tensor Network States and quantum computational techniques. In a recent work, related to the paper we treat today, quantum algorithms to generate a wide range of PEPS have already been introduced. However, those algorithms are based on the adiabatic method, which requires a slow adiabatic transition (limited by the size of the minimal spectral gap of the target Hamiltonian) with a computational time scaling exponentially with the number of qubits.
In this work, the authors consider a family of states on arbitrary lattices in order to generate a wide range of PEPS. The objective is to express the computation of the gap as a semidefinite programming problem and to efficiently compute lower bounds on the gap.

The authors present a class of states and their corresponding parent Hamiltonian. These states depend on two positive parameters and, by construction, they can be efficiently connected to a product state, as shown in this work. Also, a particular adiabatic quantum algorithm (previously proposed by the same group at the Max Planck Institute of Quantum Optics) is extended to continuous time in order to increase the compatibility with analog quantum computers. This generalization ensures that states with a positive lower bound on the gap can be prepared in a time that scales logarithmically, compared to existing methods, which scale polynomially. For such families of states, it is possible to predict the expectation values of many different observables, beyond those appearing as terms of the parent Hamiltonian. The lower bounds on the gap of the parent Hamiltonian can be found by solving a semidefinite program.

Furthermore, the authors propose verification protocols and discuss their complexity. The three-step verification protocol starts with the verifier sending the prover instructions for preparing the state. The second step consists of R rounds where, in each round, the verifier sends the prover a set of observables; in return, the prover prepares the state, measures the observables, and reports the outcomes. Lastly, the verifier checks the preparation of the state by performing certain tests on the accumulated measurement outcomes.
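
Schematically, the exchange looks as follows. This is a toy single-qubit stand-in written only to show the structure of the loop; the target state, observables, tolerance, and function names are illustrative and are not the paper's protocol for PEPS.

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-in target: a single-qubit state with known expectation values of X, Y, Z.
target_expectations = np.array([0.0, 0.0, 1.0])   # |0> gives <X>=<Y>=0, <Z>=1

def honest_prover(num_shots: int) -> np.ndarray:
    """Simulate an honest prover: report shot-noise-limited estimates of <X>,<Y>,<Z>."""
    return target_expectations + rng.normal(scale=1 / np.sqrt(num_shots), size=3)

def verifier(report_rounds: int, num_shots: int, tolerance: float) -> bool:
    """Step 2: collect R rounds of reported outcomes. Step 3: check them."""
    reports = np.array([honest_prover(num_shots) for _ in range(report_rounds)])
    estimates = reports.mean(axis=0)
    return bool(np.all(np.abs(estimates - target_expectations) < tolerance))

print(verifier(report_rounds=20, num_shots=1000, tolerance=0.05))
```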

However, certain conditions limit the settings in which the protocol achieves a successful verification. One possible classical 'cheating' strategy would be to find a way to classically sample from the correct distribution, although it is theoretically impossible for the prover to classically sample from the quantum distribution. Nor can one find a distribution that yields the correct values for all the expectation values, which makes reproducing them impossible. These considerations imply that the proposed protocol can be used for verification when both prover and verifier have bounded (polynomial) storage space. Moreover, the verifier uses exponential time and the verification procedure can take an exponential number of rounds. Lastly, exponential resources are needed to reproduce expectation values with exponential accuracy. All of these limitations suggest that there does not exist a viable cheating strategy that can undermine the verification implemented by the proposed protocol.

Regardless of these limitations, the suggested protocol provides great insight into the compatibility with analog quantum computation, which is a step beyond prior works. Furthermore, the authors are able to efficiently compute lower bounds on the adiabatic gap by translating the problem into a semidefinite program, which is a novel perspective. Finally, they prove that states with a positive lower bound on the gap can be prepared in logarithmically scaling time, in contrast to existing methods that scale polynomially, which is important for future experimentation.
Wed, 12 May 2021 12:00

Unification of Computation

Both classical and quantum computation involve a basic set of building blocks from which algorithms can be implemented to process information. In classical computation, the basic blocks are typically Boolean circuit components, while in quantum computation, quantum circuits are realized by unitary operations on one or multiple quantum systems, for example two-state systems (qubits). A significant difference between quantum and classical algorithms is that the former perform unitary transformations, which are invertible, in contrast to classical Boolean functions, which are mostly irreversible. Because of that distinction, many researchers have proposed methods to make such Boolean functions reversible. This can be achieved by first embedding the desired function into a reversible Boolean circuit and then constructing a quantum circuit realizing this invertible transform. A popular example of this construction is the well-known Shor's algorithm. Nevertheless, reversibility is not a requirement for the unification of quantum and classical algorithms. Indeed, quantum algorithms for Hamiltonian simulation and quantum search (Grover's algorithm) have already demonstrated that quantum speedups can be achieved without reversible Boolean functions.
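
The reversible embedding referred to above is the standard textbook construction (not specific to this paper): an irreversible Boolean function f is lifted to a unitary acting on an input and an output register,

```latex
U_f\,|x\rangle|y\rangle \;=\; |x\rangle\,|y \oplus f(x)\rangle ,
\qquad f:\{0,1\}^n \to \{0,1\}^m .
```

Since U_f merely permutes computational basis states, it is its own inverse, and applying it to superpositions of inputs is exactly what, for instance, the modular-exponentiation step of Shor's algorithm does.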

The latest consensus on unification lies in the realization of irreversible, non-linear functions, given the fact that the dynamical behavior of a subsystem of a quantum system can be non-unitary. Recently, a number of frameworks have been proposed to carry out such quantum subsystem transforms. One such approach is Quantum Signal Processing (QSP), which interleaves two kinds of single-qubit rotations and generalizes ideas from composite pulse sequences. Another promising approach is the Quantum Singular Value Transformation (QSVT), which is a generalization of QSP. In QSVT, a non-unitary linear operator (governing a subsystem that can be in a mixed state) is embedded in a larger unitary operator, and a polynomial transform is efficiently applied to its singular values, thereby providing one lens through which one can analyze the source of quantum advantage.
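
A minimal numerical sketch of the QSP idea in one common convention (assumed here; conventions differ across papers): a "signal" unitary encoding a scalar a is interleaved with Z-rotations parameterized by phases, and the resulting ⟨0|U|0⟩ matrix element is a polynomial in a. With all phases set to zero it reduces to the Chebyshev polynomial T_d(a):

```python
import numpy as np

def signal_operator(a: float) -> np.ndarray:
    """Signal unitary W(a) = exp(i * arccos(a) * X), encoding the scalar a."""
    s = np.sqrt(1 - a**2)
    return np.array([[a, 1j * s], [1j * s, a]])

def qsp_unitary(a: float, phases: np.ndarray) -> np.ndarray:
    """Interleave Z-rotations exp(i*phi*Z) with the signal operator W(a)."""
    def z_rot(phi):
        return np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    U = z_rot(phases[0])
    for phi in phases[1:]:
        U = U @ signal_operator(a) @ z_rot(phi)
    return U

# With all phases set to zero, <0|U|0> reproduces the Chebyshev polynomial T_d(a).
d = 5
a_values = np.linspace(-1, 1, 7)
qsp_poly = [qsp_unitary(a, np.zeros(d + 1))[0, 0].real for a in a_values]
chebyshev = np.cos(d * np.arccos(a_values))
print(np.allclose(qsp_poly, chebyshev))  # True
```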

In this work, the authors present an analysis of these modern approaches to quantum search, factoring, and simulation, focusing on unifying these three central quantum algorithms in a review/overview style. The objective is to develop a framework based on QSP, establish quantum eigenvalue transformations, and eventually apply QSVT to a range of problems such as search, phase estimation, the eigenvalue threshold problem, amplitude amplification, matrix inversion, and Hamiltonian simulation.

The examples in this work show that multi-qubit problems can be simplified by identifying qubit-like subsystems, which can then be treated using QSP. Using concepts like qubitization, the theorems of QSVT can be generalized and applied to amplitude amplification and search by performing a polynomial transform of one specific amplitude. This polynomial transform can in fact be performed over an entire vector space, not just a one-dimensional matrix element, and hence enables quantum eigenvalue transforms. Furthermore, it is shown that QSP sequences can be used to polynomially transform all the singular values of a matrix that has been encoded into a block of a unitary matrix. The polynomial runtime shown for these algorithms suggests that QRAM-based QSVT can attain a significant polynomial speedup, but achieving exponential speedup over classical algorithms is not always possible. Since QRAM-based architectures require resources that grow exponentially with the number of qubits, while more efficient architectures are an active area of research, it is wise to explore alternative ways of block-encoding matrices.
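
One simple way to see what "encoding a matrix into a block of a unitary" means is the following standard dilation, sketched numerically for a generic matrix of spectral norm at most one (an illustrative construction; the QRAM-based and other block-encodings discussed above are more elaborate):

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)

def block_encode(A: np.ndarray) -> np.ndarray:
    """Embed a matrix with ||A|| <= 1 as the top-left block of a unitary."""
    n = A.shape[0]
    top_right = sqrtm(np.eye(n) - A @ A.conj().T)
    bottom_left = sqrtm(np.eye(n) - A.conj().T @ A)
    return np.block([[A, top_right], [bottom_left, -A.conj().T]])

A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
A /= 2 * np.linalg.norm(A, 2)          # rescale so the spectral norm is 1/2
U = block_encode(A)

print(np.allclose(U @ U.conj().T, np.eye(8), atol=1e-8))  # U is unitary
print(np.allclose(U[:4, :4], A))                          # A sits in its top-left block
```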

Such demonstrations provide significant insight into how most known quantum algorithms can be constructed by simply adjusting the parameters of QSVT, thus positioning QSVT as a route towards a grand unification of quantum algorithms. So far, QSVT has shown great promise in generalizing QSP and applying a polynomial transformation to the singular values of a linear operator. Apart from the problems mentioned earlier, further applications of QSVT have been realized in the areas of quantum machine learning, quantum walks, fractional query implementation, and Gibbs state preparation. However, popular quantum algorithms such as variational algorithms, like the variational quantum eigensolver and the quantum approximate optimization algorithm, have not yet been constructed from QSVT-based subroutines. In the near future, it would be interesting to explore whether the scope of QSVT can encompass these hybrid quantum algorithms. Lastly, an inherent goal of this work is to apply QSVT to create novel algorithms or extend previously known algorithms to novel noise settings. Thus, utilizing a flexible framework like QSVT can potentially bring us closer to the unification of quantum and classical algorithms.
The physical realization of quantum computers has advanced to a stage where present-day quantum processors are NISQ devices featuring tens of qubits. Since these devices have different benefits and drawbacks, depending on the device quality and architecture, it is highly advantageous to perform a comparative analysis evaluating their performance against defined benchmarks. To date, various structured tasks have been proposed in order to measure the performance of quantum computers. Typical examples include counting the physical qubits (building blocks of digital quantum circuits) implemented in the quantum system, measuring the efficiency in terms of resources (qubits, gates, time, etc.) of preparing absolutely maximally entangled states, volumetric benchmarking, and mirror randomized benchmarking.

One of the first popularized performance metrics (introduced by IBM) is "quantum volume", a single-value metric that quantifies how well a quantum device can execute a sizeable random circuit (with circuit depth equal to the number of qubits used) with reasonable fidelity. It enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run on such a system. Another recent metric, introduced by Atos, is called the Q-score, which counts the number of variables in a max-cut problem that a device can optimize.
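
In a nutshell (simplified here; the full protocol also requires statistical confidence intervals), square random circuits of increasing width n (equal to depth) are run, a width "passes" if its heavy-output probability exceeds 2/3, and log2 of the quantum volume is the largest width up to which all widths pass. The probabilities below are hypothetical:

```python
def quantum_volume(heavy_output_probs, threshold=2 / 3):
    """log2(QV) is the largest width n whose square circuits beat the heavy-output threshold."""
    passing = [n for n, p in sorted(heavy_output_probs.items()) if p > threshold]
    largest = 0
    for n in passing:              # require all smaller widths to pass as well
        if n == largest + 1:
            largest = n
        else:
            break
    return 2 ** largest

# Hypothetical measured heavy-output probabilities per circuit width.
measured = {1: 0.85, 2: 0.80, 3: 0.74, 4: 0.69, 5: 0.64}
print(quantum_volume(measured))  # 16, i.e. log2(QV) = 4
```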

Along the same lines, the authors in this work propose a quantum benchmark suite which serves as a comparison tool for currently available and upcoming quantum computing platforms from the perspective of an end user. The objective is to analyze the performance of the available devices by providing meaningful benchmark scores for a series of different tests. The chosen benchmarks use numerical metrics, including uncertainties, which can characterize different noise aspects and allow direct comparison of the performance gains between devices. The authors present six visual benchmarks with structured circuits: Bell Test, complex transformations of the Riemann sphere, Line Drawing, Quantum Matrix Inversion, Platonic, and Fractals. All these benchmarks test different aspects of the quantum hardware, such as gate fidelity, readout noise, and the ability of the compilers to take full advantage of the underlying device topology, in a more holistic approach than the metrics introduced so far. In this way, the authors hope to offer more information than a single one-dimensional meta-parameter, while still providing a quick glance at a visual representation.

Testing of these benchmarks was performed on currently available quantum devices from Google, IBM and Rigetti using several SDKs and APIs (Qiskit / IBMQ for IBM, Forest / QCS and Amazon Braket for Rigetti, Amazon Braket for IonQ, and cirq for Google).

All the devices receive a numerical score for each of the implemented tests, which can be used to cross-evaluate performance. Additionally, the performance of various NISQ devices is analyzed through a series of test scenarios and the proposed metrics. The overall analysis suggests that the proposed benchmarks can be readily implemented on 2-qubit devices with circuit depths below 10, as well as on currently available small-scale quantum devices. These benchmarks are envisioned to be applied to the larger and more complex devices that will become available in the future; the scalability of such metrics is therefore also investigated.

The scores obtained from the experimental comparisons are then compared to the ideal score estimated from a finite number of measurements. However, one should keep in mind that these measurements also include statistical errors due to measurement noise, which cannot be eliminated completely. Nevertheless, the error margins presented in this work are shown to have the "expected deviation" from the ideal score. This implies that the actual experimental error margins are in agreement with the error estimates observed in simulated experiments. The authors also find their scores to correlate well with IBM's Quantum Volume score, although individual cases still vary.

Another crucial factor to analyze are the fluctuations observed when experimental data are collected over a period of time. These imply a change in device performance over time, which in turn affects the estimated scores and introduces a time variance. However, the exact estimation of this variance requires more experimentation. It would be advantageous in future experimentation to explore these temporal inhomogeneities in addition to encompassing statistical uncertainty in the error margins. Such benchmarks could then provide a holistic evaluation, including the time factor, when comparing different quantum devices.

One potentially major aspect of a quantum performance metric is how widespread its use is. One may have a great metric, but if nobody uses it, its usefulness is low. If a metric is used by everyone but is itself of low significance, the usefulness is equally low. We hope the community can converge on something comparable, fair, and standardized, but it may take some years before that happens in this rapidly evolving field.
Sun, 25 Apr 2021 12:00

Quantum Architecture Learning

There has been a large body of work investigating potential advantages of quantum computational algorithms over their classical counterparts, with the ultimate proof being an experimental verification of a "quantum advantage". Some of the classical problems being targeted include factorization of large integers and unstructured database search. While advances in both experimental hardware and software are driving the field slowly but steadily towards quantum supremacy, more effort is still required on both fronts. Some of the most promising algorithms for potential near-term quantum advantages belong to the class of variational quantum algorithms (VQAs). VQAs have been applied to many scientific domains, including molecular dynamical studies and quantum optimization problems. VQAs are also studied for various quantum machine learning (QML) applications such as regression, classification, generative modeling, deep reinforcement learning, sequence modeling, speech recognition, metric and embedding learning, transfer learning, and federated learning.

In addition to VQAs being applied as quantum implementations of classical machine learning paradigms, VQAs may conversely also benefit from various machine learning paradigms themselves, one of the most popular being Reinforcement Learning (RL). RL has been utilized to assist in several problems in quantum information processing, such as decoding errors, quantum feedback, and adaptive code design. While so far such schemes are envisioned with a classical computer as an RL co-processor, implementing "quantum" RL using quantum computers has been shown to make the decision-making process for RL agents quadratically faster than on classical hardware.

In this work, the authors present a new quantum architecture search framework in which an RL agent, powered by deep reinforcement learning (DRL), interacts with a quantum computer or quantum simulator. The objective is to investigate the potential of training an RL agent to search for a quantum circuit architecture that generates a desired quantum state.

The proposed framework consists of two major components: a quantum computer or quantum simulator, and an RL agent hosted on a classical computer that interacts with it. In each time step, the RL agent chooses an action from the possible set of actions, consisting of different quantum operations (one- and two-qubit gates), thereby updating the quantum circuit. After each update, the quantum simulator executes the new circuit and calculates the fidelity to the given target state. The fidelity of the quantum circuit is then evaluated to determine the reward sent back to the agent: positive if the fidelity reaches a pre-defined threshold, negative otherwise.

The authors use the set of single-qubit Pauli measurements as the "states" or observations which the environment returns to the RL agent. The RL agent is then iteratively updated based on this information. The procedure continues until the agent reaches either the desired threshold or the maximum allowed number of steps. In this work, RL algorithms like A2C and PPO were used to optimize the agent. The results demonstrate that, given the same neural network architecture, PPO performs significantly better than A2C in terms of convergence speed and stability in both noise-free and noisy environments. This result is shown to be consistent with the 2-qubit case.
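
The interaction loop described above can be sketched as follows for a single-qubit toy problem. Everything here — target state, gate set, thresholds, and the random policy standing in for A2C/PPO — is illustrative and is not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(4)

# Single-qubit toy version of the environment described above (illustrative only).
GATES = {
    "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "X": np.array([[0, 1], [1, 0]]),
    "T": np.diag([1, np.exp(1j * np.pi / 4)]),
}
PAULIS = [GATES["X"], np.array([[0, -1j], [1j, 0]]), np.diag([1, -1])]
TARGET = np.array([1, 1]) / np.sqrt(2)            # |+> as the target state

def observation(state):
    """Pauli expectation values serve as the RL 'state' returned to the agent."""
    return np.array([np.real(state.conj() @ P @ state) for P in PAULIS])

def episode(max_steps=10, threshold=0.99):
    state = np.array([1.0, 0.0], dtype=complex)   # start from |0>
    for step in range(1, max_steps + 1):
        action = rng.choice(list(GATES))          # random policy stands in for A2C/PPO
        state = GATES[action] @ state             # append the chosen gate to the circuit
        fidelity = np.abs(TARGET.conj() @ state) ** 2
        reward = +1 if fidelity >= threshold else -0.01
        obs = observation(state)
        # A learning agent would update its policy from (obs, reward); omitted here.
        if fidelity >= threshold:
            return step, fidelity
    return max_steps, fidelity

print(episode())
```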

In this work, the simulation of quantum circuits in both noise-free and noisy environments is implemented via IBM's Qiskit software. The efficiency of such an approach for large quantum circuits is quite low, as the complexity of classical simulation scales exponentially with the number of qubits. Theoretically, a quantum circuit may approximate any quantum state (up to an error tolerance) using a finite number of gates, given a universal set of one- and two-qubit quantum gates. Hence, in principle, the RL approach is valid for arbitrarily many qubits but is extremely hard to simulate classically. It would also require thousands of training episodes to verify this kind of experiment on real quantum computers. One can expect significant development in the future when quantum computing resources become more accessible. Finally, another interesting direction will be to investigate the quantum architecture search problem with different target quantum states and different noise configurations.
Sun, 18 Apr 2021 12:00

Training NISQ QNNs

Relatively recently, an increasing interest can be observed in exploring the combination of quantum computational methods and machine learning: on one hand, traditional machine learning tools are used for improving aspects of quantum computational efforts, while on the other hand quantum computational algorithms are designed to enhance parts of machine learning strategies. While provable quantum speedups have been identified for some specific ML tasks, fault-tolerant quantum computers (FTQC), which are not yet available, are required to execute them in practice. A growing body of work is now exploring quantum machine learning models implemented as parameterized quantum circuits, whose parameters are variationally optimized via a classical-quantum hybrid feedback loop. Such variational algorithms are expected to fare better even on near-term, noisy intermediate-scale quantum (NISQ) computers. Amongst these architectures, quantum neural networks (QNNs) are among the most prominent and are used, for example, to learn unitaries, perform classification tasks, solve differential equations, and decrease the level of noise in quantum data.

As a side note, we would like to point out that, in principle, any architecture that combines at a high level the concepts of quantum computing and artificial neural networks can be identified as a quantum neural network. In early developments, QNNs were designed by directly translating each component of a classical neural network to a suitable quantum counterpart. However, this is not always directly feasible, the most notable example being the non-linear activation function common in classical NNs, whereas regular quantum unitary dynamics is linear. In order to introduce non-linearities, measurement, controlled decoherence, or circuitry feedback is required. With such processes one may construct a 'quantum perceptron'. Other architectures considered to fall within the QNN category include quantum Boltzmann machines and variationally parametrized circuits like the Quantum Approximate Optimization Algorithm (QAOA). More recently, kernel methods and nonlinear quantum feature maps are seen as interesting alternatives to the perceptron-style QNNs. Whether any of these example architectures should or should not be called a QNN is semantically interesting, but at the end of the day it is more important whether they can solve ML tasks well.

Despite their many advantages, QNN architectures still face many limitations on NISQ devices. One commonly encountered limitation is the presence of barren plateaus when using gradient-based training methods, which prevent the algorithm from finding a path towards the energy minimum because the landscape becomes flat during training. Also, the high noise levels in higher-depth quantum circuits limit the accuracy with which costs and gradients can be computed.

In this work, the authors present a comparative analysis of two QNN architectures, namely the Dissipative Quantum Neural Network (DQNN), whose building block (a 'perceptron') is a completely positive map, and the QAOA algorithm, both implemented on IBM's NISQ devices via Qiskit. The objective is to evaluate the performance of both methods on certain tasks, such as learning an unknown unitary operator.

In the case of DQNN, perceptron maps act on layers of different qubits, whereas QAOA defines them as a sequence of operations on the same qubits. These networks are implemented using 6 and 4 qubits for DQNN and QAOA respectively, including initialization and measurement. The training of the networks was executed in a hybrid manner: at each epoch, the cost was evaluated by the quantum execution and then used to update the parameters classically. In an ideal (noise-free) case, the training cost should always be monotonically increasing for the chosen parameters. In this work, DQNN is shown to reach higher validation costs than QAOA. Another contrasting observation is that the validation costs increase with the number of training pairs in the case of DQNN, while QAOA's validation cost is approximately uniformly distributed around its mean. The results show that both networks are capable of generalizing the available information despite the high noise levels. However, the generalization capability of DQNN is more reliable than that of QAOA.
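
The reason the cost is maximized rather than minimized is that it is a fidelity-type quantity; in the notation commonly used for DQNNs (assumed here, the paper's exact conventions may differ), the training cost over N training pairs (|φ_x^in⟩, |φ_x^out⟩) reads

```latex
C \;=\; \frac{1}{N}\sum_{x=1}^{N} \langle \phi_x^{\mathrm{out}} |\, \rho_x^{\mathrm{out}} \,| \phi_x^{\mathrm{out}} \rangle ,
```

where ρ_x^out is the network's output state for input |φ_x^in⟩; C = 1 corresponds to a perfect reproduction of all training pairs.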

The authors further evaluate and compare the noise tolerance of both methods. Of the two primary sources of noise, readout noise influences both networks in a similar manner. In the presence of gate noise, however, DQNN is observed to retain a higher identity cost, resulting in higher training and validation costs than QAOA (recall that higher cost is better here). This implies that DQNN is less susceptible to gate noise.

Overall, the work demonstrates that, although both architectures have high noise tolerance, DQNN has more potential in terms of reliability, accuracy, and lower susceptibility to noise sources than QAOA when implemented on current NISQ devices. Improving the performance of DQNN strongly correlates with the improvement of quantum hardware. As quantum hardware becomes more reliable in the near future, by lowering noise levels and reducing the required number of qubits thanks to resettable qubits, DQNNs with multiple layers can be used. Such a DQNN can potentially explore problems involving higher-dimensional unitaries and non-unitary maps.
Quantum circuit complexity is one of the most crucial factors to take into consideration in quantum computation. It captures the minimal number of steps, and hence the time, needed to implement a given unitary. One can likewise associate quantum circuit complexity with the complexity involved in preparing a given quantum state from a fiducial initial state. For instance, a quantum state generated by quantum chaotic Hamiltonian evolution is highly complex if the quantum circuit preparing it requires a long time on a quantum computer. When determining the overall complexity of a quantum circuit, an essential factor that needs to be accounted for is the cost of the circuit, taking the circuit design into consideration.

While the above factors contribute to analyzing the complexity of quantum circuits, it is not trivial to calculate them quantitatively. Although there are (classical) algorithmic procedures that can find a decomposition of a unitary into a quantum circuit of Clifford and T gates, which can be done in run-time exponential in the circuit size, computing the complexity still requires optimization over these decompositions. In such decompositions, cancellations of gates occur, in which the impact of a gate is partially compensated by the application of a following, similar gate. One potential correlation to explore is whether the entanglement created by quantum gates has any impact on the circuit complexity or on the cost of a unitary.

The author in this work analyzes such a relationship in the regime where both the entanglement of a state and the cost of a unitary take small values, based on how the entangling power of quantum gates adds up. The work provides a simple lower bound for the cost of a quantum circuit that is tight for small values of the cost. The idea behind these bounds comes from the entanglement capabilities of the quantum gates: quantum gates that are close to the identity in operator norm have little capability to create entanglement from product states, and it is further argued that their contribution to the entanglement of an already entangled state is also small. The bound presented in this work implies that, assuming linear growth of entanglement entropies with time, the cost also increases linearly.

Furthermore, for Gaussian continuous-variable systems, there is a small incremental entanglement bound as well as a harmonic equivalent of the above relationship between entanglement and quantum circuit cost. This bound can also be applied both to spin systems and Gaussian bosonic continuous-variable settings, which are important in approximating non-interacting bosonic theories. A noteworthy observation is that when a quantum many-body system undergoes non-equilibrium dynamics leading to a linear increase in the entanglement entropy, the quantum state complexity also increases.

An important result from the presented bounds is that one can derive the required depth of a quantum circuit that can produce a given entanglement pattern in a desired final state, for either pure or mixed states. The presented bounds can also help in assessing the computational performance of analog quantum simulators and their direct comparison with their digital counterparts. One can argue that, for a precisely defined computational effort, both analog and digital quantum simulators can achieve similar results. These simple bounds can provide a useful and versatile tool in various studies of such comparisons and relations.
Wed, 31 Mar 2021 12:00

Modular photonic quantum computing

Rapid progress has been made in recent years in the development of fault-tolerant quantum computing (FTQC) - across theoretical foundations, architectural design and experimental implementations. Most of the proposed architectures are based on an array of static qubits, where relevant large-scale computation with, for example, superconducting qubits is expected to require vast numbers of physical qubits, taking up a lot of space and control machinery. Directly translating that paradigm to photonic FTQC architectures implies that photons serve as the 'static qubits' on which gates and measurements are implemented. However, the long sequences of operations required by FTQC protocols become difficult to implement, as photons are short-lived, easily lost, and destroyed after measurement. This makes the conventional FTQC description unsuitable for photonic quantum computing.

Fusion-based quantum computing (FBQC) is an alternative to standard photon-based FTQC architectures that can overcome such limitations. In FBQC, quantum information is not stored in a static array of qubits but is periodically teleported from previously generated resource states to newly generated photons. Hence, even when the measured photons are destroyed, their quantum information is preserved and teleported onward. In this work, the authors present a modular and scalable architecture for FBQC which can provide the computational power of thousands of physical qubits. The unit module of the architecture consists of a single resource-state generator (RSG), a few fusion devices, and macroscopic fiber delays with low transmission loss rates, connected via waveguides and switches. Networks of such modules execute operations by adding thousands of physical qubits to the computational Hilbert space. The authors argue that, pragmatically, "a static qubit-based device and a dynamic RSG-based device (can be considered) equally powerful, if they can execute the same quantum computation in the same amount of time". A single RSG is shown to be much more 'powerful' than a single physical qubit.

The qubits produced by RSGs are encoded as photonic qubits and are combined using a stabilizer code such as a Shor code. The photonic qubits are then transported by waveguides to n-delays, which delay (store) photons for n RSG cycles, thereby acting as a fixed-time quantum memory. This photonic memory is used to increase the number of simultaneously existing resource states available in the total computation space. Fusion devices further perform entangling fusion measurements on pairs of photons that enter the device. Finally, switches reroute the incoming photonic qubits to one of multiple outgoing waveguides. Switch settings can be adjusted in every RSG cycle, thereby determining the operations to be performed.

In contrast to circuits in circuit-based quantum computation (CBQC), photonic FBQC uses fusion graphs to describe the execution of a specific computation. The authors review the structure of simple cubic fusion graphs using 6-ring graph states (six-qubit ring-shaped cluster states) as resource states. Each resource state is fused with six other resource states, allowing one fusion per constituent qubit of the resource state. Another direction being explored is interleaving, which involves allocating the same RSG to successively produce different fusion-graph resource states. Exploiting different arrangements of RSGs and using longer delay lines can lead to larger fusion graphs. Furthermore, it is demonstrated that interleaving modules with n-delays can increase the number of available qubits by a factor of n, but inevitably decrease the speed of logical operations by the same factor, as illustrated below. To avoid that, the authors recommend increasing the number of interleaving modules and investigating different arrangements.
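
A toy calculation of that trade-off, with purely hypothetical numbers (the function and figures below are illustrative, not taken from the paper):

```python
def interleaved_resources(num_rsgs: int, rsg_rate_hz: float, delay_cycles: int):
    """Trade-off sketch: an n-cycle delay multiplies the number of simultaneously
    live resource states by n, but slows the logical clock by the same factor."""
    live_resource_states = num_rsgs * delay_cycles
    logical_cycle_rate = rsg_rate_hz / delay_cycles
    return live_resource_states, logical_cycle_rate

# Hypothetical numbers: 100 RSGs at 1 GHz with a delay holding 10,000 cycles of photons.
print(interleaved_resources(num_rsgs=100, rsg_rate_hz=1e9, delay_cycles=10_000))
# -> (1,000,000 live resource states, 100 kHz logical cycle rate)
```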

These photonics-based FBQC architectures are not only modular and highly scalable but also cost-efficient, since they reduce the cost of logical operations. Combining this with the interleaving approach further improves feasibility: instead of million-qubit arrays of physical qubits, arrays of disconnected few-qubit devices can be turned into a large-scale quantum computer, provided that their qubits are photonic qubits. Such a hybrid architecture repeatedly generates identical few-qubit resource states from matter-based qubits and connects them into a large-scale fault-tolerant quantum computer. Moreover, it also handles the classical processing associated with error correction and provides high-capacity memory. As quantum technology evolves, larger numbers of high-quality qubits will become available, allowing a transition from small-scale FTQC devices to fully scalable devices. These early FTQC devices are expected to be similar in design to current NISQ devices, albeit much more powerful. Utilizing such approaches in photonic FBQC, along with developments in highly efficient photonic hardware, can make the transition to large-scale fault-tolerant quantum computers a reality in the near future.
With the advent of more powerful classical computers, machine learning and artificial intelligence research has seen a resurgence in popularity, and massive progress has been made in recent years in developing useful algorithms for practical applications. Meanwhile, quantum computing research has advanced to a stage where quantum supremacy has been shown experimentally, and algorithmic advantages in, for instance, machine learning have been proven theoretically. One particularly interesting machine learning paradigm is Reinforcement Learning (RL), where agents directly interact with an environment and learn from feedback. In recent years, RL has been utilized with significant success to assist in several problems in quantum information processing, such as decoding of errors, quantum feedback, and adaptive code design. Turned around, implementing 'quantum' RL using quantum computers has been shown to make the decision-making process for RL agents quadratically faster than on classical hardware.

In most RL protocols so far, the interaction between the agent and the environment has been designed to occur entirely via classical communication. However, there is a theoretically suggested possibility of gaining an additional quantum speedup if this interaction can be carried over a quantum channel. In this work, the authors propose a hybrid RL protocol that enables both quantum and classical information transfer between the agent and the environment. The main objective is to evaluate the impact of this hybrid model on the agent's learning time with respect to RL schemes based solely on classical communication. The work uses a fully programmable nanophotonic processor interfaced with photons for the experimental implementation of the protocol. The setup implements an active feedback mechanism combining quantum amplitude amplification with a classical control mechanism that updates the agent's learning policy.

The setup consists of a single-photon source pumped by laser light, generating pairs of single photons. One of these photons is sent to a quantum processor to perform a particular computation, while the other is sent to a single-photon detector for heralding. Highly efficient detectors with short dead times enable fast feedback. Detection events at the processor output and at the heralding detector are recorded and registered as coincidence events with a time-tagging module (TTM). The agent and the environment are assigned different areas of the processor, performing their respective steps of the Grover-like amplitude amplification. The agent is further equipped with a classical control mechanism that updates its learning policy.

Any typical Grover-like algorithm faces a drop in the amplified success probability after passing the optimal number of iterations. Each agent reaches this optimal point at a different epoch, so one can identify the probability up to which it is beneficial for all agents to use a quantum strategy over the classical one. The learning time is the average number of interactions until the agent accomplishes a specific task. The setup allows the agents to choose the most favorable strategy by switching from quantum to classical as soon as the latter becomes more advantageous. Such a combined strategy is shown to outperform the purely classical scenario.
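
The existence of that switching point follows from the textbook behaviour of amplitude amplification: if the rewarded action initially occurs with probability p, then after k amplification rounds it is found with probability sin²((2k+1) arcsin√p), which first rises and then falls. The value of p below is hypothetical:

```python
import numpy as np

def amplified_probability(p: float, k: int) -> float:
    """Success probability after k rounds of amplitude amplification (textbook formula)."""
    theta = np.arcsin(np.sqrt(p))
    return np.sin((2 * k + 1) * theta) ** 2

p = 0.05                      # hypothetical initial probability of the rewarded action
for k in range(6):
    print(f"k={k}: amplified probability = {amplified_probability(p, k):.3f}")
# The probability rises, peaks near k ~ pi/(4*sqrt(p)), then drops again --
# the point at which switching back to the classical strategy becomes favourable.
```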

Such a hybrid model represents a potentially interesting advantage over previously implemented protocols which are purely quantum or classical. Photonic architectures in particular are put forward by the authors to be one of the most suitable candidates for implementing these types of learning algorithms, by providing advantages of compactness, full tunability and low-loss communication which easily implements active feedback mechanisms for RL algorithms even over long distances. However, the theoretical implementation of such protocols is general and shown to be applicable to any quantum computational platform. Their results also demonstrate the feasibility of integrating quantum mechanical RL speed-ups in future complex quantum networks.

Finally, through the advancement of integrated optics towards the fabrication of increasingly large devices, such demonstrations could be extended to more complex quantum circuits allowing for the processing of high-dimensional states. This raises hopes for achieving superior performance in increasingly complex learning devices. Undoubtedly, AI and RL will play an important role in future large-scale quantum communication networks, including a potential quantum internet.
Quantum computers possess unique potential to outperform their classical counterparts with algorithms based on real-time evolution. Given the intrinsically quantum-mechanical relation between the time and energy domains, there is growing interest in quantum algorithms that use a time-dependent perspective to solve time-independent problems. Hence, simulating the time-dependent Schrödinger equation is a natural framework to implement.

Presently, there are plenty of quantum algorithms based on solving the time-independent Schrödinger equation to determine Hamiltonian eigenvalues and eigenstates. Classically, the majority of these approaches are limited by the exponential scaling of the Hilbert space with the system size, while on quantum hardware they require considerable resources to run. So far, methods such as approximate imaginary time evolution and Krylov diagonalization are more widely used in classical simulations of static phenomena than real-time evolution, as the latter has computational limitations; there are also practical limitations in getting closer to the ground state during the evolution. However, the states generated through real-time evolution can provide a basis from which one can extract ground and excited states. In some cases, this approach may be faster than other quantum methods that use time evolution as a subroutine for goals other than dynamical behaviour, such as using QPE for spectral analysis.

The authors in this work propose and analyze variational quantum phase estimation (VQPE), a method for computing ground and excited states from a basis of states generated by real-time evolution. The work consists of theoretical derivations using this method to solve strongly correlated Hamiltonians. The proposed VQPE method comes with a set of equations that specifies conditions on the time evolution, with a simple geometrical interpretation. These conditions decouple the eigenstates out of the set of time-evolved expansion states and connect the method to the classical filter diagonalization algorithm. Furthermore, the authors introduce a unitary formulation of VQPE that allows for a direct comparison with iterative phase estimation. In this formulation, the number of matrix elements that need to be measured scales linearly instead of quadratically with the number of expansion states, thereby reducing the number of quantum measurements.
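
The linear-algebra core of the method can be emulated classically as follows (an illustrative sketch with a random Hermitian stand-in Hamiltonian; on hardware the overlap and Hamiltonian matrix elements would be estimated from measurements rather than computed directly, and simple regularization of the overlap matrix helps against noise):

```python
import numpy as np
from scipy.linalg import eigh, expm

rng = np.random.default_rng(5)

# Random Hermitian "Hamiltonian" on 4 qubits as a stand-in for a molecular H.
dim = 2**4
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
H = (M + M.conj().T) / 2

# Reference state and real-time-evolved expansion states |phi_k> = exp(-i k dt H)|phi_0>.
phi0 = rng.normal(size=dim) + 1j * rng.normal(size=dim)
phi0 /= np.linalg.norm(phi0)
dt, n_states = 0.2, 6
U_dt = expm(-1j * dt * H)
basis = [phi0]
for _ in range(n_states - 1):
    basis.append(U_dt @ basis[-1])
B = np.array(basis).T                          # columns are the expansion states

# Overlap and Hamiltonian matrices (estimated from measurements on hardware).
S = B.conj().T @ B
Hmat = B.conj().T @ H @ B

# Generalized eigenvalue problem H c = E S c; the lowest root approximates the ground state.
energies = eigh(Hmat, S, eigvals_only=True)
print(energies[0], np.linalg.eigvalsh(H)[0])   # VQPE-style estimate vs exact ground energy
```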

The authors also provide an analysis of the effects of noise on the convergence properties, showing that simple regularization techniques suffice to mitigate them. The VQPE approach was demonstrated on a wide range of molecules of different complexities by simulating the algorithm classically, as well as on the transverse-field Ising model using IBM's quantum simulator and hardware. For several weakly and moderately correlated molecules, as well as the strongly correlated transition-metal dimer Cr2, chemical accuracy for ground-state energies is attained in fewer than ~50 real-time time steps. This is far fewer than the ~10^6 time steps required by state-of-the-art classical methods, with orders of magnitude fewer variational parameters.

The results establish VQPE as a natural and efficient quantum algorithm for ground- and excited-state calculations of general many-body systems. On the one hand, QPE uses its deeper circuits to achieve Heisenberg-limited energy resolution, which is more efficient in overall run time when very high accuracy is required. On the other hand, for the same number of time steps per circuit, VQPE achieves higher accuracy than idealized QPE. It can be concluded that VQPE has near-term advantages compared to the long-term benefits of QPE. This makes it a better candidate for near-term hardware, with shorter circuit depths and fewer available qubits used as ancillae. By choosing an optimal time step, such that each new time evolution generates a linearly independent expansion state, the variational ansatz can be made compact; this requirement sets a lower bound on the time-step size. It also minimizes the total simulation time, as required by NISQ hardware. This compactness, together with their NISQ compatibility, makes VQPE approaches some of the most promising routes to quantum simulations of many-body systems beyond the reach of classical computation. Since real-time evolution is natural to implement on quantum hardware, this approach holds immense promise for NISQ implementation.
The "analog quantum simulation" paradigm of quantum computing aims to develop simpler models of a complex quantum system while reproducing all the physical attributes of the system in the operational domain of interest, such as its spectrum or phase diagram. The main idea is to simulate a rather complex target Hamiltonian H using a simpler Hamiltonian H' that can be more easily implemented on practical analog quantum-computational hardware. One of the advantages of analog quantum simulation is that it is expected to require less quantum error correction and less precise control. Hence, it is considered an important practical direction in the era of NISQ technology.

The concept of universality for analog simulators is based on the existence of a Hamiltonian H' in the family that can be used to simulate any local Hamiltonian H. General universal models, such as spin-lattice Hamiltonians, can be inefficient to use directly, as they may for example require an interaction energy that scales exponentially with the system size. This exponential scaling holds when the original Hamiltonian has higher-dimensional, long-range, or even all-to-all interactions. In this work, the authors provide an efficient construction of strongly universal families in which the required interaction energy and all other resources in the 2D simulator scale polynomially rather than exponentially. The scaling is in the size of the target Hamiltonian and the precision parameters, and is independent of the target's connectivity. The construction converts the target Hamiltonian into a quantum phase estimation circuit embedded in 1D, which is then mapped back to a low-degree simulating Hamiltonian using the Feynman-Kitaev circuit-to-Hamiltonian construction.
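
For reference, the textbook form of the circuit-to-Hamiltonian construction mentioned here (the generic template only; the paper's variant additionally handles the 1D/2D embedding): for a circuit U_T⋯U_1, a clock register is added and the propagation term

```latex
H_{\mathrm{prop}} \;=\; \sum_{t=1}^{T} \tfrac{1}{2}\Big( |t\rangle\langle t| \otimes \mathbb{1} \;+\; |t{-}1\rangle\langle t{-}1| \otimes \mathbb{1} \;-\; |t\rangle\langle t{-}1| \otimes U_t \;-\; |t{-}1\rangle\langle t| \otimes U_t^{\dagger} \Big)
```

is combined with input and output penalty terms; the ground state is the "history state" superposing all intermediate steps of the computation.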

The authors extend this method to simulate any target Hamiltonian with a 1D or 2D Hamiltonian, using some of the existing techniques in the literature. Combinations of techniques such as the quantum phase estimation algorithm and circuit-to-Hamiltonian transformations are used in a non-perturbative way, which allows the exponential overhead common to previous constructions to be overcome. The results show that only polynomial overheads in both particle number and interaction energy are needed to simulate any local Hamiltonian with arbitrary connectivity using universal Hamiltonians embedded in 1D or 2D.

This work establishes the possibility of efficient universal analog quantum simulation using simple 1D or 2D systems, which we know can be built in practice with good control. The constructions known so far have been far from optimal. For example, existing hardware offers a limited set of interaction types, so in order to realize general interactions, these need to be simulated using the available interaction type together with ancilla qubits placed in more than one dimension. Polynomial-sized encoding and decoding circuits can be used to simulate 1D analog Hamiltonians, which can be explored further towards achieving strong universality. In this work, it is shown that strongly universal analog quantum simulation is possible: any target Hamiltonian can be efficiently simulated by 1D or 2D universal systems using polynomially many qubits and polynomial interaction energies, which the authors show is tight since it is impossible to lower the interaction energy to a constant. However, the encoding circuits induce non-local correlations that can affect desirable properties of analog Hamiltonian simulation, such as preservation of the locality of observables, as well as considerations of noise. As an alternative approach, translation invariance can be relaxed by letting the Hamiltonian interactions have more free parameters to encode the target Hamiltonian.

One interesting takeaway from this research is that analog quantum simulation is actually relevant for many more systems than previously thought, and digital gate-based quantum simulation may not always be the best way to go in the described cases. Further experimental realizations of analog quantum simulators are required to develop methods to simulate all physical systems and tackle classically intractable problems in a practical and efficient way.