22 July 2021
###
Quantum Training of a Classical DNN

Quantum Machine Learning (QML) techniques have been extensively researched in the past few years, showing great promise in a number of applications and offering potential speedups over their classical counterparts through various quantum algorithmic techniques. Many of these techniques exploit the high dimensionality of Hilbert space through kernel methods, while most exponential speedups require fault-tolerant quantum computers with access to a quantum random access memory (QRAM).

While previous works have successfully established quantum analogues of common classical approaches such as classification, clustering, regression, and other data-analysis tasks, existing quantum neural network proposals lack a clear demonstration of a quantum speedup for tasks on classical datasets. While algorithms for quantum machine learning are largely based on methods from linear algebra, neural networks rely on nonlinearity to act as universal approximators. Given the success of deep neural networks in classical machine learning, whether quantum computers can efficiently train classical deep neural networks remains an interesting open question.

In this work, the authors show how one may exploit the benefit of deep neural networks, i.e. increasing the number of hidden layers L, via the Neural Tangent Kernel (NTK) formalism. As the neural network is deepened, the NTK matrix becomes increasingly well-conditioned, improving the speed at which gradient descent trains the network. The authors propose a quantum algorithm to train a wide and deep neural network under an approximation of the NTK, estimating the trained network's output with vanishing error as the training set size increases. Furthermore, the linear form of the NTK offers a general framework for fully connected neural network architectures similar to those used in state-of-the-art applications of deep learning. The work provides two different approximations: a sparsified NTK and a diagonal NTK. The diagonal NTK is defined by setting all off-diagonal elements of the NTK to zero, while the sparsified NTK permits at most one off-diagonal element to be nonzero in any row or column. For both approximations, matrix element bounds on the NTK guarantee the convergence of the approximation and enable efficient gradient descent, highlighting the correspondence between the conditions for trainable classical neural networks and those for an efficient quantum algorithm.
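The effect of the diagonal approximation can be illustrated with a toy numpy sketch (illustrative only: a plain inner-product kernel stands in for the actual NTK, and no quantum algorithm is involved). Zeroing the off-diagonal elements turns the kernel regression linear system into independent scalar equations:

```python
import numpy as np

# Toy sketch of the diagonal NTK approximation. A plain inner-product
# kernel stands in for the true NTK; inputs are projected onto the unit
# sphere, as in the paper's MNIST encoding.
rng = np.random.default_rng(0)
n, d = 8, 4
X = rng.normal(size=(n, d))
X /= np.linalg.norm(X, axis=1, keepdims=True)   # unit-sphere projection
y = np.sign(X[:, 0])                            # toy binary labels

K = X @ X.T + 0.1 * np.eye(n)    # stand-in PSD kernel matrix
K_diag = np.diag(np.diag(K))     # diagonal approximation: zero off-diagonals

alpha_full = np.linalg.solve(K, y)   # full kernel system: O(n^3) classically
alpha_diag = y / np.diag(K)          # diagonal system: n scalar divisions

print(np.allclose(K_diag @ alpha_diag, y))   # True by construction
```

The element-wise solve is what makes the diagonal approximation cheap to evaluate; the sparsified variant additionally retains a single off-diagonal entry per row and column.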

The sparsified NTK approximation has a strictly tighter upper bound on the error, given by the Gershgorin circle theorem, compared to the diagonal NTK approximation, suggesting superior performance of the former. To support this theoretical result, numerical demonstrations were performed on the MNIST image dataset, specifically on binary classification between pairs of digits. Two infinite-width neural network architectures are used: i) a feedforward fully-connected neural network and ii) an architecture based on the convolutional Myrtle network. In each case, the handwritten images are projected onto the surface of a unit sphere during the initial encoding into QRAM. The results demonstrate that even relatively common data distributions satisfy the conditions necessary for efficient quantum state preparation and measurement, fully realizing an exponential quantum speedup over gradient descent.
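The role of the Gershgorin circle theorem here is to localize a matrix's eigenvalues from its entries alone: every eigenvalue lies in a disc centered at a diagonal entry with radius equal to the sum of the absolute off-diagonal entries in that row, so bounding off-diagonal elements bounds the conditioning. A minimal numerical check on an arbitrary symmetric matrix (not an actual NTK):

```python
import numpy as np

# Gershgorin circle theorem: each eigenvalue of A lies in at least one disc
# centered at A[i, i] with radius sum_{j != i} |A[i, j]|.
rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))
A = (A + A.T) / 2                 # symmetric, so eigenvalues are real

centers = np.diag(A)
radii = np.abs(A).sum(axis=1) - np.abs(centers)

for lam in np.linalg.eigvalsh(A):
    assert np.any(np.abs(lam - centers) <= radii + 1e-9)
print("all eigenvalues lie in the union of Gershgorin discs")
```

Zeroing off-diagonal entries shrinks every disc radius to zero, which is why bounds on the NTK's matrix elements translate directly into conditioning guarantees for its diagonal and sparsified approximations.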

This work demonstrates a quantum algorithm to train classical neural networks in logarithmic time O(log n) and provides numerical evidence of this efficiency on the standard MNIST image dataset. As the neural tangent kernel offers a versatile framework across architectures such as convolutional neural networks, graph neural networks, and transformers, the sparsified and diagonal kernel approximations introduced in this work have the potential to extend to any chaotic kernel. Moreover, increasing the depth of a chaotic kernel might give rise to the kernel structure key to well-conditioning and successful approximation in logarithmic time. This can potentially open new possibilities for quantum-enhanced machine learning methods beyond the scope of classical neural networks.

17 July 2021
###
Exploiting fermion number in factorized decompositions of the electronic structure Hamiltonian

In recent years, there have been significant developments in the field of quantum algorithms, especially in applications involving quantum chemistry and representations of molecular Hamiltonians. These include variational quantum algorithms that aim to maximize algorithmic performance despite the limited coherence times of currently available hardware. Unfortunately, this comes at the cost of introducing heuristic aspects, making it difficult to obtain rigorous performance guarantees. In contrast, quantum phase estimation approaches provide a route to calculating eigenstates to a small, specifiable error, assuming the approximate eigenstates prepared have sufficiently high overlap with the true eigenstates. Moreover, in fault-tolerant quantum computation, the required resources depend on the ability to bound errors in the algorithm, which means that tighter error estimates lead to lower resource requirements.

Estimation of the resources required for phase estimation can be done using product-formula decompositions, also known as Trotterization. Previous works have shown that the number of fermions in the system can be used to offset the dependence of the error on the number of orbitals, which can be applied to chemical systems in realistically sized basis sets. In this work, the authors present three factorized decompositions and use them in conjunction with the fermionic seminorm to obtain tighter Trotter error bounds in practice, applying the approach to the particular case of simulating a uniform electron gas (Jellium). A further objective of this approach is to reduce both the classical runtime and the gate complexity of quantum algorithms.
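As a reminder of what Trotterization is and why bounding its error matters, here is a small numerical sketch (a generic 2-qubit Hamiltonian, unrelated to the paper's electronic structure Hamiltonians) of the second-order (Strang) product formula and its characteristic error scaling:

```python
import numpy as np
from scipy.linalg import expm

# Second-order Trotterization of exp(-i(A+B)t) with non-commuting A, B.
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
A = np.kron(Z, Z)                                   # "interaction" term
B = np.kron(X, np.eye(2)) + np.kron(np.eye(2), X)   # "hopping" term

t = 1.0
exact = expm(-1j * (A + B) * t)

def strang(r):
    """r steps of the symmetric splitting e^{-iA dt/2} e^{-iB dt} e^{-iA dt/2}."""
    dt = t / r
    step = expm(-1j * A * dt / 2) @ expm(-1j * B * dt) @ expm(-1j * A * dt / 2)
    return np.linalg.matrix_power(step, r)

errs = [np.linalg.norm(strang(r) - exact, 2) for r in (4, 8, 16)]
print(errs)   # doubling r cuts the error by roughly 4x (second order)
```

Tighter bounds on this error directly reduce the number of Trotter steps r, and hence the gate count, needed to reach a target precision; this is what the fermionic-seminorm bounds improve in practice.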

Each of the three factorized decompositions used in the work, namely the spectral, cosine, and Cholesky decompositions, exhibits its own advantage. The spectral decomposition is generally applicable and extends beyond the plane wave dual basis. The cosine decomposition best exploits fermion number information and is thus the most effective in the low-filling-fraction regime. The Cholesky decomposition has the smallest constant-factor overhead and therefore performs best in the medium- and half-filling regimes.
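For flavor, the Cholesky route rests on the fact that a positive semidefinite coefficient matrix V factors as V = L L^T, rewriting the two-body interaction as a sum of squares of one-body terms. A minimal numpy sketch on a random positive definite stand-in matrix (not actual electron-repulsion data):

```python
import numpy as np

# Cholesky factorization of a symmetric positive definite matrix: V = L L^T.
rng = np.random.default_rng(2)
M = rng.normal(size=(6, 6))
V = M @ M.T + 1e-6 * np.eye(6)    # random SPD stand-in for a Coulomb matrix

L = np.linalg.cholesky(V)         # lower-triangular factor
print(np.allclose(L @ L.T, V))    # True: exact reconstruction
```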

The approach is applied to the simulation of a uniform electron gas, finding substantial (over 100×) improvements in the Trotter error at low filling fraction and pushing towards much higher numbers of orbitals than is possible with existing methods. The approach was benchmarked against three prior-art bounds: the analytic SHC bound, the fermionic commutator approach, and a similar Pauli commutator approach. A substantial classical runtime advantage in calculating the bounds was observed. For the fermionic and Pauli commutator approaches, calculations at large spin-orbital numbers N > 200 become intractable without access to more than 100 GB of RAM. In contrast, the new bounds for a 512 spin-orbital system can be calculated in less than 6 hours using under 16 GB of RAM. The work also calculates the T-count bound for phase estimation of the ground state energy of Jellium, with significant improvements in gate complexity of over 10× compared to existing Trotter-based approaches. The gate counts in the results demonstrate that the Trotter approach can perform comparatively better in specific regimes of interest than post-Trotter methods like qubitization, even using fewer gates than qubitization for a large enough Wigner-Seitz radius.

The work focuses on second-order Trotter in the plane wave dual basis, but the techniques can be generalized to higher-order Trotter bounds. While the spectral and Cholesky decompositions are applicable to compact basis sets, interesting potential lies in the cosine decomposition for obtaining an even tighter bound in the low-filling-fraction regime. However, calculating higher-order Trotter bounds would require time scaling as O(N^7), making it considerably costlier. A potential alternative would be post-Trotter methods, which exhibit superior asymptotic performance with respect to target error and are cost-effective due to their use of Hamiltonian factorizations. Nevertheless, Trotter methods have good constant prefactors in the runtime and require few additional ancilla qubits compared to post-Trotter methods, suggesting that Trotter methods could perform comparatively better on specific tasks in the pre-asymptotic regime. It would be interesting to explore whether second-quantized post-Trotter methods can similarly exploit the low filling fractions where Trotter methods perform better.

09 July 2021
###
Quantum Evolution Kernel

Classical machine learning research has been blossoming in recent years. Some of the most interesting approaches focus on studying graphs and developing algorithms to exploit the information contained in their structures. In many fields relevant to the modern world, such as chemistry, bioinformatics, social network analysis, computer vision, and natural language processing, one can find inherent graph structures which possess valuable information about the underlying problem. One way to quantify the similarity between graph structures is via graph kernels. The graph kernel approach is based on associating any graph with a feature vector encapsulating its relevant characteristics (the feature map) and then computing the similarity between those vectors in the form of a scalar product in the feature space.

However, the design of a graph kernel always depends on the problem and the specific features that need to be encapsulated. Thus, its biggest limitation is the algorithmic complexity when applied to a variety of problems; typical examples are graphlet subsampling and random-walk kernels. In this work, the authors propose a versatile and easily scalable graph kernel based on the time evolution of a quantum system, where the information about a graph is encoded in the parameters of a tunable Hamiltonian acting on an array of qubits. An alternating sequence of Hamiltonian evolutions produces measurement samples that retain key features of the data.

The proposed Quantum Evolution (QE) kernel approach is based on associating each graph with a probability distribution, obtained by measuring an observable on a quantum system whose dynamics are driven by the topology of the associated graph. The QE kernel between two graphs is then given by a distance between their respective probability distributions. The protocol was tested on a graph classification task on several datasets and the results were compared with other graph kernels. Each graph kernel is associated with a Support Vector Machine, and its accuracy is the proportion of graphs in the test set attributed to the correct class. Hyperparameters are chosen by a grid search over a defined range, selecting the value with the best score averaged over 10 repetitions of the following cross-validation procedure: the dataset is randomly split into 10 subsets of equal size, and the accuracy of the kernel is measured on each subset after training on the 9 others; the accuracy for the split is then the average over the 10 possible choices of test subset. The performance of the kernel is also shown to be stable against detection errors and noise.
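The evaluation pipeline above can be sketched end-to-end with scikit-learn. Everything here is illustrative stand-in data: random histograms play the role of per-graph measurement distributions, labels are arbitrary, and exp(-c·JS) is one plausible choice of distance-based kernel, not necessarily the paper's.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 60
P = rng.dirichlet(np.ones(8), size=n)   # stand-in measurement histograms
y = np.arange(n) % 2                    # stand-in binary class labels

def js_div(p, q):
    """Jensen-Shannon divergence between two strictly positive histograms."""
    m = (p + q) / 2
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

D = np.array([[js_div(p, q) for q in P] for p in P])
K = np.exp(-5.0 * D)                    # kernel from distribution distances

# SVM on the precomputed Gram matrix, scored by 10-fold cross-validation
scores = cross_val_score(SVC(kernel="precomputed"), K, y, cv=10)
print(scores.mean())
```

With `kernel="precomputed"`, scikit-learn's cross-validation slices both rows and columns of the Gram matrix, mirroring the train/test protocol described above.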

The results show that for all datasets, at least one quantum kernel outperforms the best classical kernel. The depth of the time evolution is crucial: better accuracy is obtained by increasing the number of layers of the Ising evolution, suggesting richer features can be captured by adding more layers. However, as shown in the IMDB-MULTI and Fingerprints experiments, the deeper Ising scheme can perform worse because of incomplete convergence of the optimization; bigger datasets require more function evaluations, which in turn require a larger computational budget.

The physical implementation of the approach is presented for the case of a programmable array of qubits made of neutral atoms trapped in optical tweezers. When promoted to Rydberg states, the atoms behave as huge electric dipoles and experience dipole-dipole interactions that map onto spin Hamiltonians. The spins undergo various types of interactions depending on the Rydberg states involved in the process, leading to different Hamiltonians. This platform is currently limited to smaller graphs (~100 nodes); however, the authors point out that a significant speed-up could potentially also be gained by implementing variations of near-term quantum computing algorithms on classical hardware like GPUs, in a so-called quantum-inspired manner, possibly applicable to the current approach as well. Moreover, designing the right observable and choosing the appropriate Hamiltonian are crucial design choices for improving accuracy. Finally, potential improvements lie in the use of multi-kernel learning, which combines different kernels built from the same sequence of measurements on the final states, and in the optimization of noise cancellation techniques.

28 June 2021
###
Efficient ML for quantum many-body problems

The study of quantum many-body systems is one of the most significant applications of quantum computing, with applications in physics, materials science, and chemistry. Classical algorithms and methods such as density functional theory, Monte Carlo, and the density-matrix renormalization group have helped solve some problems in many-body systems but prove inefficient for a general class of problems. The reason classical computers fall short in simulating quantum many-body physics is that accurately describing an n-qubit quantum system may require an amount of classical data that is exponential in n.

Recently, classical machine learning (ML) techniques have shown promising results in investigating problems in quantum many-body physics. Although these methods are empirically effective at presently accessible system sizes, they lack theoretical guarantees of efficiency at the larger system sizes that classical algorithms cannot yet reach. In this work, the authors establish classical machine learning algorithms based on the concept of a classical shadow derived from randomized Pauli measurements. The objective is to examine two potential applications of classical ML: the first involves learning to predict classical representations of quantum many-body ground states, while the second attempts to classify quantum states of matter into phases in a supervised learning scenario.
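To make the classical-shadow ingredient concrete, here is a minimal single-qubit sketch of the randomized-Pauli-measurement protocol (illustrative only; the paper works with many-qubit shadows). Each snapshot measures in a random Pauli basis, and applying the inverted measurement channel, 3|b><b| - I for the measured eigenvector |b>, gives an unbiased estimator of the state:

```python
import numpy as np

# Single-qubit classical shadows from randomized Pauli measurements.
# Averaging snapshots 3|b><b| - I reproduces the state in expectation;
# here the state is |0>, so the estimate of <Z> should concentrate near 1.
rng = np.random.default_rng(4)
I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

psi = np.array([1, 0], dtype=complex)          # the state |0>

def snapshot():
    P = [X, Y, Z][rng.integers(3)]             # random Pauli basis
    _, evecs = np.linalg.eigh(P)
    probs = np.abs(evecs.conj().T @ psi) ** 2  # Born-rule outcome probabilities
    k = rng.choice(2, p=probs)
    b = evecs[:, k:k + 1]                      # measured eigenvector
    return 3 * (b @ b.conj().T) - I2           # inverted measurement channel

rho_hat = sum(snapshot() for _ in range(5000)) / 5000
print(np.real(np.trace(Z @ rho_hat)))          # close to <Z> = 1
```

The labeled shadows fed to the ML algorithms below are exactly such collections of snapshots, stored classically per state.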

In the first application, a family of Hamiltonians is considered, where the Hamiltonian H(x) depends on m real parameters. The ML algorithm is trained on a set of sampled values of x, each accompanied by the classical shadow of the ground state of the corresponding Hamiltonian. The algorithm then predicts a classical representation of the ground state for new values of x, from which ground state properties are estimated. This method is shown to be efficient, with provable bounds, for predicting ground state properties that do not vary too rapidly as a function of x.
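The prediction step can be caricatured with ordinary kernel ridge regression in numpy. A stand-in smooth function plays the role of a ground-state property estimated from shadows; the paper's actual learning model and guarantees are more specific than this sketch.

```python
import numpy as np

# Kernel ridge regression: learn a smooth map x -> property(x) from samples.
rng = np.random.default_rng(5)
x_train = rng.uniform(-1, 1, size=(100, 2))    # sampled Hamiltonian parameters
prop = lambda x: np.sin(np.pi * x[:, 0]) * np.cos(np.pi * x[:, 1])
y_train = prop(x_train)                        # stand-in property values

def rbf(A, B, gamma=4.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

K = rbf(x_train, x_train)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(K)), y_train)

x_new = rng.uniform(-1, 1, size=(5, 2))        # unseen parameter values
y_pred = rbf(x_new, x_train) @ alpha
print(np.abs(y_pred - prop(x_new)).max())      # typically small for smooth targets
```

The "does not vary too rapidly" condition in the paper is visible even here: a smooth property is well captured from few samples, while a rapidly oscillating one would not be.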

In the second application, in order to predict the phase label for quantum states not encountered during training, it is assumed that any two phases can be distinguished by a nonlinear function of marginal density operators on subsystems of constant size. It is shown that if such a function exists, a classical ML algorithm can learn to distinguish the phases using an amount of training data and classical processing that is polynomial in the system size. The authors also performed numerical experiments on quantum systems such as a Rydberg atom chain and the 2D antiferromagnetic Heisenberg model to test the efficiency of the learning and prediction phases. The numerics verify the claim that classical ML algorithms can efficiently classify a wide range of quantum phases of matter, in particular when informed by data collected in physical experiments and/or simulations on quantum computers.

Overall, the work focuses on the classical shadow formalism, in which each quantum state is represented by its classical shadow, obtained either from a classical computation or from an experiment on a quantum device. The classical ML algorithm is then trained on labeled classical shadows and learns to predict labels for new classical shadows. This is an interesting concept that could potentially be applied to other classical representations of quantum states exploitable by classical ML. As an added bonus, existing highly programmable quantum simulators that lack the local controls needed to implement single-qubit Pauli measurements can benefit from such a formalism.

An interesting outlook for quantum computing can then be sketched as follows: typically, the computational scaling of quantum algorithms can be shown to be significantly better than that of classical algorithms for the same tasks in quantum many-body problems. However, the requirements of fault-tolerant quantum computers and other overheads imply a prefactor that allows only a limited number of (very accurate) simulations to be performed in a reasonable timeframe. Thus, quantum computers may not, by themselves, help generate huge datasets or exhaustively explore a large configurational space to design new materials. However, when combined with machine learning approaches such as those presented in this work, accurate quantum-computer-generated data may replace the physical experiments otherwise required to generate the data. That in turn allows for a purely in-silico and more cost-effective strategy for the discovery and design of new materials and the understanding of (potentially new) physics.

Recently, classical machine learning (ML) techniques have shown promising results in investigating problems in quantum many-body physics. Although these methods are experimentally effective with present size systems, they cannot provide the theoretical proof for efficiently solving larger system sizes, which cannot be solved by these classical algorithms yet. In this work, the authors attempt to establish classical machine learning algorithms based on the concept of a classical shadow derived from randomized Pauli measurements. The objective is to examine two potential applications of classical ML. The first application involves learning to predict classical representations of quantum many-body ground states while the second attempts to classify quantum states of matter into phases in a supervised learning scenario.

In the first application, a family of Hamiltonians is considered, where the Hamiltonian 𝐻(𝑥) depends on m real parameters. The ML algorithm is trained on a set of training data consisting of sampled values of 𝑥, each accompanied by the corresponding classical shadow for the ground state of the Hamiltonians. The ML algorithm then predicts a classical representation of the ground states for new values of 𝑥, hence estimating ground state properties using the predicted classical representation. This method is shown to be efficient for the case of predicting ground state properties that do not vary too rapidly as a function of 𝑥, with provable bounds.

In the second application, in order to predict the phase label for new quantum states that were not encountered during training, it was assumed that any two phases can be distinguished by a nonlinear function of marginal density operators of subsystems of constant size. It is shown that if such a function exists, a classical ML algorithm can learn to distinguish the phases using an amount of training data and classical processing which are polynomial in the system size. The authors also performed numerical experiments on quantum systems such as the Rydberg atom chain and 2D antiferromagnetic Heisenberg model, in order to test the efficiency of the learning and prediction phases. The numerics verify the claim that classical ML algorithms can efficiently classify a wide range of quantum phases of matter, in particular when informed by data collected in physical experiments and/or simulations on quantum computers.

Overall, the work builds on the classical shadow formalism, in which each quantum state is represented by its classical shadow, obtained either from a classical computation or from an experiment on a quantum device. The classical ML algorithm is then trained on labeled classical shadows and learns to predict labels for new classical shadows. This is an interesting concept that could potentially be extended to other classical representations of quantum states exploitable by classical ML. As an added bonus, existing highly programmable quantum simulators, which lack the local controls to implement single-qubit Pauli measurements, could benefit from such a formalism.
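For a single qubit, the shadow formalism admits a compact numerical sketch. Assuming the standard Huang-Kueng-Preskill inverted-channel snapshot ρ̂ = 3|v⟩⟨v| − I for a randomly chosen Pauli measurement with outcome eigenstate |v⟩, averaging many snapshots recovers expectation values of the true state; all names below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pauli matrices; each snapshot measures in the eigenbasis of a random one.
I2 = np.eye(2)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)
paulis = [X, Y, Z]

def snapshot(rho):
    # One classical-shadow snapshot: sample a random Pauli basis, sample a
    # measurement outcome via the Born rule, and invert the measurement
    # channel, giving rho_hat = 3|v><v| - I.
    P = paulis[rng.integers(3)]
    evals, evecs = np.linalg.eigh(P)
    probs = np.array([np.real(np.conj(evecs[:, k]) @ rho @ evecs[:, k])
                      for k in range(2)])
    k = rng.choice(2, p=probs / probs.sum())
    v = evecs[:, k:k + 1]
    return 3 * (v @ v.conj().T) - I2

# Shadow estimate of <X> on the |+> state from many snapshots (true value: 1).
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
est = np.mean([np.real(np.trace(X @ snapshot(plus))) for _ in range(20000)])
```

Each individual snapshot is wildly inaccurate (here it contributes 0 or ±3), but the average converges to the true expectation value, which is what makes the representation so measurement-efficient.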

An interesting outlook for quantum computing can then be drawn as follows: typically, the computational scaling of quantum algorithms can be shown to be significantly better than that of classical algorithms for the same quantum many-body problems. However, the requirements of fault-tolerant quantum computers and other overheads imply a pre-factor that allows only a limited number of (very accurate) simulations to be performed in a reasonable timeframe. Quantum computers may therefore not help, by themselves, to generate huge datasets or to exhaustively explore a large configurational space when designing new materials. However, when combined with machine learning approaches such as those presented in the work discussed today, accurate quantum-computer-generated data may replace the physical experiments otherwise required to generate the data. That in turn allows for a purely in-silico and more cost-effective strategy for the discovery and design of new materials and for understanding (potentially new) physics.

21 June 2021
###
Quantum computing 40 years later

The earliest notion of a quantum computer was conceptualized by Feynman as a “powerful computer which can solve complex problems of quantum physics and chemistry more efficiently than a classical computer”. The idea is based on the fundamental difference between quantum and classical information: the information carried by a quantum system that is sufficiently well isolated from its surroundings has intrinsic features, such as randomness, uncertainty and entanglement, which are not shared by classical information. The vision of possible applications of such a computer became clearer with the advent of quantum algorithms like the Deutsch–Jozsa algorithm and Shor’s algorithm, the latter providing an efficient method for factoring large numbers, a problem that is hard for classical computers. It was later realized that such computing hardware is practically feasible to construct and can be scaled to large processing systems. In this historical and pedagogical overview, Preskill summarizes and comments on the past 40 years of quantum computing research and development, and what’s next for the field.

The general features of a quantum computer, based on the quantum circuit model, are: i) a scalable number of qubits, ii) preparation of standard initial states, iii) a universal set of quantum gates acting on the current state to produce the desired final state, iv) a quantum circuit and v) a readout in the standard basis. A significant advantage of such a model is that the runtime for simulating an n-qubit quantum system on a classical computer rises exponentially with n, while the runtime for simulating it on a quantum computer scales polynomially in n. Such hardware has been steadily improving throughout the current NISQ era, which has showcased a Google-built programmable quantum computer called Sycamore with 53 working qubits that executed up to 20 layers of two-qubit gates and then measured all the qubits millions of times in just a few minutes, extracting a statistically useful signal. Furthermore, with the evolution of quantum error correction in conjunction with novel fault-tolerant methods for executing reliable quantum computation on noisy hardware, NISQ systems are steadily evolving into systems capable of running ever more sophisticated quantum algorithms.
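The exponential-versus-polynomial gap can be made concrete with a one-line memory estimate for brute-force classical simulation; the helper below is purely illustrative.

```python
def statevector_bytes(n_qubits):
    # A classical statevector simulator must store 2**n complex amplitudes;
    # at 16 bytes per complex double, memory grows exponentially in n.
    return (2 ** n_qubits) * 16
```

Thirty qubits already require about 17 GB, and each additional qubit doubles the cost, which is why full statevector simulation of a 53-qubit device like Sycamore is far beyond commodity hardware.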

Another significant comparison indicating the advancement of quantum computing is between analog and digital quantum simulation. So far, analog quantum simulators not only outperform their classical counterparts, but are also compatible with most existing experimental platforms, such as trapped ions, superconducting circuits, and trapped neutral atoms and molecules. However, they suffer from imperfect control due to large amounts of noise. On the other hand, despite being more costly, digital quantum simulations offer greater flexibility in realizing the desired Hamiltonians and preparing the standard initial states. At the moment, a combination of analog and digital simulation seems to produce the best results, although the anticipation is that digital simulations will be heavily used in the future.

The progress achieved so far in the transition from NISQ to fault-tolerant quantum computing has been driven by advances in qubit design, control technology, fabrication methods, and materials. A critical step that will strongly affect the performance of quantum computers is reducing physical gate error rates by several orders of magnitude; currently, the error rate for entangling two-qubit gates is typically around 1%. Alongside these developments, quantum algorithms should continue to be explored, so that new ways of achieving quantum speedups in problems like optimization become feasible; it is conjectured that a quadratic speedup may be achievable for optimization problems. While researchers are already exploring challenging problems with NISQ devices, such as investigating properties of highly entangled many-body quantum systems, the scalability of fault-tolerant quantum computers is another significant area of focus for building more powerful and efficient systems.

An interesting outlook in the paper is that quantum computing research benefits not only hardware and applications, but also the fundamental understanding of many-body physics, quantum information and entanglement, and even supports developments in the unification of quantum models of gravity. The author concludes that from now on, “quantum computer science and quantum physical science will advance together, hand in hand”.

With the recent rapid development of quantum algorithms, applications of quantum computation in Natural Language Processing (NLP) have emerged. Words can be elegantly represented as vectors, matrices, and higher-rank tensors, depending on their grammatical function, which “contract” with each other following grammatical rules. Intuitively, one may wonder whether and where quantum computing could enhance the performance of computational NLP.

Grover’s algorithm, a well-known quantum search algorithm, allows one to find the correct item in a database, with quadratic speedup. Predicting the right answer to a language-based question can be seen from the viewpoint of a search task. The NLP computational task of natural language question answering could therefore benefit from Grover’s algorithm, an idea that is explored in the work we comment on today, which was conducted by researchers at Utrecht University, the Netherlands.

Vector-based representation of language structures has been one of the main concepts in NLP. Such representations generalize well and offer a flexible tool. However, to interpret text fragments with their grammatical features while staying within vector space semantics, the dimension of the representation grows rapidly, which has been a limiting factor in vector-based semantics implementations.

In this work, the authors show that a quantum speedup can be exploited if each word is represented as a quantum state that serves as input to a quantum circuit. In this setting, words are represented as multi-partite quantum states which, when contracted with one another, encode the meaning of larger text fragments in the resulting quantum states. This recasts the problem as a search problem, which in turn can be targeted with Grover’s algorithm.
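To make the search step concrete, here is a minimal amplitude-level simulation of plain Grover search (not the paper's grammar-aware circuits), showing that roughly π/4·√N oracle calls suffice to amplify a marked item out of N candidates, the source of the quadratic speedup.

```python
import numpy as np

def grover_search(n_items, marked):
    # Simulate Grover search over n_items database entries.
    state = np.full(n_items, 1 / np.sqrt(n_items))     # uniform superposition
    n_iter = int(round(np.pi / 4 * np.sqrt(n_items)))  # ~sqrt(N) oracle calls
    for _ in range(n_iter):
        state[marked] *= -1                # oracle: phase-flip the marked item
        state = 2 * state.mean() - state   # diffusion: inversion about the mean
    return int(np.argmax(state ** 2))      # most probable measurement outcome
```

In the question-answering setting, the database entries would be candidate answers in superposition and the oracle would be built from the tensor contractions described below; here the oracle is just a lookup, for illustration.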

Firstly, text fragments are formed by performing grammar-specific quantum measurements that correspond to the contraction of words. Taking quantum states as the representations of all input words, contractions are then identified with the expectation value of an appropriate permutation operator. With this setup, each reading of an ambiguous phrase corresponds to a particular circuit, and different readings are interchangeable upon the application of a number of swap gates. While this addresses syntactic ambiguities by making use of the quantum superposition principle, ambiguities at the word level can be immediately accommodated by using density matrices instead of pure states.

The work also demonstrates the formulation of the answers to a “why” question. As input, the algorithm takes a multi-partite state in quantum superposition, representing a why-question and its possible answers, and performs a series of tensor contractions. A series of gates then acts repeatedly on the post-contraction state, guaranteeing that a correct answer to the question is obtained upon a single final measurement. A correct answer is provided using information given directly by the tensor contractions of word representations, without manually supplying labels or learning the answers to other questions. This translates the learning paradigm into a simple question-answer class of problems and presents a scalable approach.

The proposed framework can be used to find a representation of a particular question that contains all possible answers in equal quantum superposition, and allows the building of an oracle that can detect a correct answer while being agnostic to the specific question. One obvious advantage of this approach is that it permits the simultaneous treatment of ambiguous English phrases within a Grover search problem. Applying Grover’s algorithm to question answering turns the representation of the question and answers into the input of the algorithm, together with an oracle that identifies the correct answers. This construction can deal with certain types of ambiguous phrases by keeping the various meanings in quantum superposition.

Further scope of work may focus on finding an effective implementation of the measurement of the permutation operator for an arbitrary number of qubits, possibly making use of the Hadamard test, and understanding how to find a universal form of the inversion operator. This extension of the present formalism can furthermore account for a better understanding of the temporal evolution of the meanings of sentences.

07 June 2021
###
The Inductive Bias of Quantum Kernels

A field of quantum information that has gained a lot of attention recently is Quantum Machine Learning (QML). One reason QML is considered an exciting direction is that quantum information processing promises scaling advantages over classical methods, so common machine learning tasks may enjoy improved efficiency when carried out on a quantum computer. Early QML research focused primarily on speeding up linear algebra subroutines, although these were later shown to have runtimes scaling polynomially in the inverse error, and their practical use is limited by the large pre-factor overhead of fault-tolerant quantum computing (FTQC) requirements. An alternative approach is to use a quantum device to define and implement the function class, and to optimize on a classical computer. This can be achieved with Quantum Neural Networks (QNNs), or parameterized quantum circuits; however, optimizing QNNs is challenging because of the non-convex optimization landscape and because of gradient-based optimization failures known as barren plateaus. A related approach that allows for convex optimization is the quantum kernel, which uses a predefined encoding of the data into a quantum state and defines the kernel as the inner product of two such states.
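As a minimal sketch of the quantum kernel idea, assume a hypothetical one-qubit angle encoding (the encoding and function names are illustrative, not the paper's); the kernel is then the squared overlap of two encoded states, which is what an overlap (swap) test would estimate on hardware.

```python
import numpy as np

def feature_state(x):
    # Hypothetical one-qubit angle encoding: |phi(x)> = cos(x)|0> + sin(x)|1>.
    return np.array([np.cos(x), np.sin(x)])

def quantum_kernel(x1, x2):
    # Quantum kernel: squared inner product of the two encoded states.
    return float(abs(feature_state(x1) @ feature_state(x2)) ** 2)
```

With the kernel matrix over a dataset in hand, training reduces to a convex problem (e.g. kernel ridge regression or an SVM) on a classical computer, which is exactly the appeal over directly optimizing a QNN.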

It is generally accepted that a given algorithm cannot efficiently solve many different types of problems, even if it can efficiently solve some of them. Likewise, a standard inductive bias in ML will struggle to learn discontinuous functions, as it primarily prefers continuous ones. In order for QML models to outperform classical ML models, an inductive bias is required that cannot be encoded (efficiently) on a classical machine.

The authors provide a theoretical analysis of the inductive bias of quantum machine learning models based on the spectral properties of quantum kernels, along with experimental verification. The objective is to relate potential quantum advantages to the classical concept of inductive bias. The function class considered in this work consists of functions in a Reproducing Kernel Hilbert Space (RKHS), where the kernel is evaluated by a quantum computation.

Interestingly, these RKHSs are classically tractable in some regimes even in high-dimensional or infinite-dimensional cases, which implies that realizing such a function class on a quantum device does not by itself guarantee a quantum advantage; a stricter requirement needs to be formulated.

For kernel methods, the qualitative concept of inductive bias can be formalized by analyzing the spectrum of the kernel and relating it to the target function. The results in this work show that the generalization of quantum kernel methods fails as soon as the data embedding into the quantum Hilbert space becomes expressive. By projecting the quantum kernel, it is possible to construct inductive biases that are hard to create classically; however, the fluctuations of the reduced density matrix around its mean are observed to vanish exponentially in the number of qubits. Since the value of the kernel is determined by measurements, attaining exponential accuracy in the kernel function requires exponentially many measurements. Thus, the same limitation arises as in other QNN architectures: as the size of the quantum system (number of qubits) becomes large, the vanishing-gradient problem (barren plateaus) is introduced again.
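The concentration effect can be illustrated numerically: for states drawn Haar-randomly (a stand-in for a highly expressive embedding, not the paper's construction), the average kernel value |⟨ψ|φ⟩|² is 1/2ⁿ, vanishing exponentially with the number of qubits n, so resolving it requires exponentially many measurement shots.

```python
import numpy as np

rng = np.random.default_rng(1)

def random_state(n_qubits):
    # Haar-random state from a normalized complex Gaussian vector.
    v = rng.normal(size=2 ** n_qubits) + 1j * rng.normal(size=2 ** n_qubits)
    return v / np.linalg.norm(v)

def mean_overlap(n_qubits, trials=300):
    # Average kernel value |<psi|phi>|^2 between independent random states;
    # for Haar-random states this equals 1 / 2**n on average.
    return float(np.mean([abs(np.vdot(random_state(n_qubits),
                                      random_state(n_qubits))) ** 2
                          for _ in range(trials)]))
```

Already at 8 qubits the typical kernel value drops below 1%, which previews the measurement-cost obstruction discussed in the remainder of the section.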

In contrast to other methods, it is the spectral properties of the RKHS, rather than its dimensionality, that determine the feasibility of learning. To enable learning, models with a stronger inductive bias must be considered. The results indicate that an exponential advantage can be achieved in a particular example where one knows that the data comes from a single-qubit observable and constrains the RKHS accordingly.

The results suggest that one can only achieve a quantum advantage if the data generating process is known and cannot be encoded easily by classical methods, and the information can be used to bias the quantum model.

This work provides guidance for obtaining quantum advantage on a classical dataset using a quantum architecture known as a quantum kernel. Unfortunately, for large quantum systems, quantum kernels cannot avoid the requirement of exponentially many measurements to evaluate the kernel. Therefore, unless one knows how the data generation occurs, no advantage over classical algorithms can be obtained. Quantum computers with error correction can potentially assist in defining kernels with a strong bias that do not require exponentially many measurements. However, even for fully coherent quantum computers, it remains a challenge to efficiently encode a strong inductive bias about a classical dataset. It is speculated that the efficiency of quantum kernels can be improved when working with quantum data instead of classical data. However, regardless of the nature of the data, it is still unclear whether QNN architectures can improve supervised learning on classical datasets.

26 May 2021
###
Quantum Chemistry in First Quantization

In recent years, there have been significant developments in the field of quantum algorithms, especially in applications involving quantum chemistry and representations of molecular Hamiltonians. However, existing methods for simulating quantum chemistry (especially those leveraging simple basis functions like plane waves) do not scale feasibly toward the continuum limit, because the majority of quantum computing algorithms for chemistry are based on second quantized simulations, where the required number of qubits scales linearly with the number of spin-orbital basis functions.

To overcome this limitation, first quantized simulations of chemistry have been proposed, where the idea is to track which basis state each particle is in, instead of storing the occupancy of each basis state. This is advantageous because the number of qubits needed to represent the state scales only logarithmically with the number of basis functions. Such simulations are also well suited to cases where the entanglement between the electronic and nuclear subsystems is non-negligible.
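The qubit-count comparison can be sketched with two one-line estimates; the function names and the specific numbers (η particles, N basis functions) are illustrative, not taken from the paper.

```python
import math

def qubits_second_quantized(n_basis):
    # Second quantization: one qubit per spin-orbital occupancy,
    # so the count is linear in the number of basis functions N.
    return n_basis

def qubits_first_quantized(n_particles, n_basis):
    # First quantization: each of the eta particles carries a register
    # indexing which of the N basis states it occupies,
    # so the count is eta * ceil(log2 N).
    return n_particles * math.ceil(math.log2(n_basis))
```

For, say, 100 electrons in a million plane waves, first quantization needs on the order of 2,000 qubits where a naive second quantized encoding would need a million, which is why the first quantized approach remains viable toward the continuum limit.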

In this work, the authors analyze the finite resources required to implement two first quantized quantum algorithms for chemistry: block encodings for the qubitization and interaction picture frameworks. The objective is to compile and optimize these algorithms in a plane wave basis within an error-corrected gate set, as well as to develop new techniques to reduce their complexity. For qubitization of the Hamiltonian, control registers are used to select the momenta as well as the individual bits to multiply, which significantly decreases the multiplication cost. The results show that the qubitization algorithm requires far less surface-code spacetime volume for simulating millions of plane waves than the best second quantized algorithms require for simulating hundreds of Gaussian orbitals. In the interaction picture approach, the cost of computing the total kinetic energy is reduced by computing the sum of squares of momenta at the outset.

The work shows that the total state preparation cost is reduced by a factor of 3 when one assumes that state preparation is only needed for the kinetic energy operator, rather than using standard amplitude amplification for the total energy. The number of bits needed to discretize the time intervals is also reduced by a factor of two, following prior well-established methods. The main complexity therefore lies in selecting the momentum registers. Unlike previous approaches, the algorithm employs only the dynamic degrees of freedom of the electrons that participate in reactivity, reducing the encoding complexity. This approach is particularly advantageous for large electron numbers in the qubitization framework, and despite the better asymptotic scaling of the interaction picture method, the qubitization based method proves more practical. Numerical experiments also reveal that these approaches require significantly fewer resources to reach accuracy comparable to second quantized methods. Another interesting finding is that the first quantized approaches developed here may give lower Toffoli complexities than previous work for realistic simulations of both material Hamiltonians and non-periodic molecules, suggesting greater fault-tolerant viability than second quantized methods.

This work provides the first explicit circuits and constant factors for any first quantized quantum simulation algorithm, a promising direction for simulating realistic materials Hamiltonians under quantum error correction. Perhaps more impressively, the authors have also improved the constant factors: the results demonstrate a roughly thousandfold reduction in circuit complexity compared to naive implementations for modest-sized systems. The work also provides insights into the resources required to simulate various molecules and materials, and gives a first impression of a first quantization based algorithm for preparing eigenstates for phase estimation with the required overlap with the target eigenstate. This suggests many potential advantages over second quantization based algorithms, as is the case in the continuum limit.

In addition to the progress made by the current work, many potential further improvements on the approach have already been proposed by the authors as well, such as modifying the algorithm to encode pseudopotential specific to the problem - relating it to bonds and chemical reactivity. Furthermore, using a non-orthogonal reciprocal lattice and efficient convergence in the thermodynamic limit might be a good direction for future research. The presented algorithms could also be adapted to more natively suit systems of reduced periodicity even thought the underlying representation is based on plane waves. The authors acknowledge relatively little is known about state-preparation in first-quantization as compared to second-quantization. Future work may investigate whether straightforward translations form the latter body of work to the current will be sufficiently efficient. Although this work focuses on preparing eigenstates for energy estimation, it can also be extended for simulation of (non-)Born-Oppenheimer dynamics.

The paper offers an interesting insight and significant progress in designing first-quantization picture quantum chemistry simulations on fault-tolerant hardware. Several strong advantages as compared to second-quantization methods are highlighted, and it will be interesting to see how this field evolves further.

To overcome this limitation, first quantized simulations of chemistry have been proposed, where the idea is to track the basis state each particle occupies instead of storing the occupancy of each basis state. This is advantageous because the number of qubits needed to represent the state scales only logarithmically with the number of basis functions. Such simulations are also well suited to cases where the entanglement between the electronic and nuclear subsystems is non-negligible.
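
As a rough illustration of the qubit-count argument above, suppose a simulation uses N basis functions and η electrons (the values below are hypothetical examples, not figures from the paper): second quantization stores one occupancy per basis state, while first quantization stores a log2(N)-bit basis-state index per particle.

```python
# Illustrative qubit-count comparison; N and eta below are example values.
import math

def second_quantized_qubits(num_basis):
    # One qubit per basis-function occupancy.
    return num_basis

def first_quantized_qubits(num_particles, num_basis):
    # Each particle stores the index of its basis state in binary.
    return num_particles * math.ceil(math.log2(num_basis))

N, eta = 10**6, 100  # a million plane waves, a hundred electrons
print(second_quantized_qubits(N))      # 1,000,000 qubits
print(first_quantized_qubits(eta, N))  # 100 * 20 = 2,000 qubits
```

This is why millions of plane waves become plausible in first quantization while second quantized approaches are typically limited to hundreds of orbitals.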

In this work, the authors analyze the finite resources required to implement two first quantized quantum algorithms for chemistry: block encodings for the qubitization and interaction picture frameworks. The objective is to compile and optimize these algorithms in a plane wave basis within an error-correctable gate set, as well as to develop new techniques that reduce the complexity. For qubitization of the Hamiltonian, control registers are used to select the momenta as well as the individual bits to multiply, which significantly decreases the multiplication cost. The results show that the qubitization algorithm requires much less surface code spacetime volume for simulating millions of plane waves than the best second quantized algorithms require for simulating hundreds of Gaussian orbitals. In the case of the interaction-picture-based algorithm, the cost of computing the total kinetic energy is reduced by computing the sum of squares of the momenta at the beginning.

The work shows that the total state preparation cost is reduced by a factor of 3 when one assumes that state preparation is only needed for the kinetic energy operator, rather than using the standard amplitude amplification for the total energy. The number of bits needed for the discretization of the time intervals is also reduced by a factor of two, following prior well-established methods. The main complexity is therefore concentrated in selecting the momenta registers. Unlike previous approaches, the algorithm employs only the dynamic degrees of freedom of the electrons that participate in reactivity, reducing the encoding complexity. This is particularly advantageous for qubitization at high electron numbers, and despite the improved scaling of the interaction-picture-based method, the qubitization-based method proves to be more practical in comparison. The numerical experiments also reveal that these approaches require significantly fewer resources to reach comparable accuracy than second quantized methods. Another interesting finding is that the first quantized approaches developed here may give lower Toffoli complexities than previous work for realistic simulations of both material Hamiltonians and non-periodic molecules, suggesting greater viability on fault-tolerant hardware than second quantized methods.

This work provides the first explicit circuits and constant factors for any first quantized quantum simulation algorithm, which is a promising direction for simulating realistic materials Hamiltonians within quantum error correction. Perhaps more impressively, the authors have also improved the constant factors: the results demonstrate a reduction in circuit complexity of about a thousandfold compared to naive implementations for modestly sized systems. The work also provides insights into the resources required to simulate various molecules and materials, and gives a first look at a first quantization-based algorithm for preparing eigenstates for phase estimation with the required overlap with the target eigenstate. This suggests many potential advantages over second quantization-based algorithms, as is the case in the continuum limit.

In addition to the progress made by the current work, the authors have already proposed many potential further improvements to the approach, such as modifying the algorithm to encode pseudopotentials specific to the problem, relating it to bonds and chemical reactivity. Furthermore, using a non-orthogonal reciprocal lattice and efficient convergence in the thermodynamic limit might be good directions for future research. The presented algorithms could also be adapted to suit systems of reduced periodicity more natively, even though the underlying representation is based on plane waves. The authors acknowledge that relatively little is known about state preparation in first quantization compared to second quantization; future work may investigate whether straightforward translations from the latter body of work to the current setting will be sufficiently efficient. Although this work focuses on preparing eigenstates for energy estimation, it can also be extended to the simulation of (non-)Born-Oppenheimer dynamics.

The paper offers interesting insights and significant progress in designing first-quantization quantum chemistry simulations on fault-tolerant hardware. Several strong advantages over second-quantization methods are highlighted, and it will be interesting to see how this field evolves further.

Studying quantum many-body systems involves predicting the properties of systems containing several quantum particles, based on the principles of quantum mechanics. In general, the many-body problem describes a large category of physical problems, with fundamental examples found in quantum chemistry, condensed matter physics, and materials science. In order to simulate many-body systems, one needs to be able to efficiently prepare and evolve certain entangled multipartite states.

One of the most appealing families of multipartite states is that of Tensor Network States (TNS), a class of variational wave functions in which the wave function is encoded as a tensor contraction of a network of individual tensors. They are typically used to efficiently approximate the ground states of local Hamiltonians. The best-known class of TNS is that of Matrix Product States (MPS), which corresponds to a one-dimensional TNS geometry. Higher-dimensional generalizations of these states are known as Projected Entangled Pair States (PEPS). All of these states are characterized by their bond dimension and are the ground states of a local, frustration-free parent Hamiltonian. This implies that, in the absence of degeneracy, one can easily check the successful preparation of the state by merely measuring a set of local observables.
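
To make the MPS idea concrete, here is a minimal sketch (not tied to the paper's constructions): the amplitude of a computational basis string is a product of matrices, one per site, selected by the local state. The three-site GHZ state below needs bond dimension 2.

```python
# Minimal MPS sketch: amplitude of a basis string as a matrix product.
import numpy as np

def mps_amplitude(tensors, bitstring):
    # tensors[i] has shape (2, chi_left, chi_right); boundary chi = 1.
    mat = tensors[0][bitstring[0]]
    for A, b in zip(tensors[1:], bitstring[1:]):
        mat = mat @ A[b]
    return mat[0, 0]

# GHZ state on 3 sites, bond dimension 2:
# amplitude 1/sqrt(2) for 000 and 111, zero otherwise.
s = 2 ** -0.5
left = np.array([[[s, 0]], [[0, s]]])                 # shape (2, 1, 2)
bulk = np.array([np.diag([1, 0]), np.diag([0, 1])])   # shape (2, 2, 2)
right = np.array([[[1], [0]], [[0], [1]]])            # shape (2, 2, 1)
tensors = [left, bulk, right]
print(mps_amplitude(tensors, (0, 0, 0)))  # ~0.7071
print(mps_amplitude(tensors, (0, 1, 0)))  # 0.0
```

The bond dimension (here 2) caps how much entanglement the ansatz can carry across any cut of the chain.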

There exist several natural connections between Tensor Network States and quantum computational techniques. In a recent work related to the paper we treat today, quantum algorithms to generate a wide range of PEPS were introduced. However, those algorithms are based on the adiabatic method, which requires a slow adiabatic transition (limited by the size of the minimal spectral gap of the target Hamiltonian), with a computational time that scales exponentially with the number of qubits.

In this work, the authors consider a family of states on arbitrary lattices in order to generate a wide range of PEPS. The objective is to express the computation of the gap as a semidefinite programming problem and to efficiently compute lower bounds on the gap.

The authors present a class of states and their corresponding parent Hamiltonian. These states depend on two positive parameters and, by construction, can be efficiently connected to a product state, as shown in this work. In addition, a particular adiabatic quantum algorithm (previously proposed by the same group at the Max Planck Institute of Quantum Optics) is extended to continuous time in order to increase compatibility with analog quantum computers. This generalization ensures that states with a positive lower bound on the gap can be prepared in logarithmically scaling time, compared to existing methods, which scale polynomially. For such families of states, it is possible to predict the expectation values of many different observables, beyond those appearing as terms of the parent Hamiltonian. The lower bounds on the gap of the parent Hamiltonian can be found by means of a semidefinite program.
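
As a toy illustration of the gap of a frustration-free Hamiltonian (this is a hypothetical example, not the paper's parent Hamiltonians or its SDP certification), one can diagonalize a small sum of local projectors that all annihilate the same product state; the spectral gap is then read off directly.

```python
# Toy frustration-free Hamiltonian: H = sum of local projectors that each
# annihilate |+>^n, so the ground energy is exactly 0 and the gap is 1.
import numpy as np

n = 3
plus = np.array([1.0, 1.0]) / np.sqrt(2)
h = np.eye(2) - np.outer(plus, plus)  # local projector, kills |+>

H = np.zeros((2**n, 2**n))
for i in range(n):
    term = np.array([[1.0]])
    for j in range(n):
        term = np.kron(term, h if j == i else np.eye(2))
    H += term

evals = np.linalg.eigvalsh(H)
gap = evals[1] - evals[0]
print(round(evals[0], 10), round(gap, 10))  # 0.0 and 1.0
```

The paper's contribution is certifying such gap lower bounds without exact diagonalization, via a semidefinite program, which is what makes the approach scale.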

Furthermore, the authors propose verification protocols, including possible complexity factors. The three-step verification protocol starts with the verifier sending the prover instructions for preparing the state. The second step consists of R rounds: in each round, the verifier sends the prover a set of observables; in return, the prover prepares the state, measures the observables, and reports the outcomes. Lastly, the verifier verifies the preparation of the state by performing certain checks on the accumulated measurement outcomes.
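
A hedged sketch of this round structure follows; the function names, the honest classical prover, and the tolerance check are all illustrative simplifications, not the paper's actual statistical tests.

```python
# Illustrative three-step verification loop: the prover reports expectation
# values for requested observables, and the verifier checks them afterwards.
import random

def run_protocol(prover_measure, observables, R, tol, rng=random.Random(0)):
    """observables: dict mapping observable name -> ideal expectation value."""
    records = []
    for _ in range(R):  # step 2: R rounds of requests and reported outcomes
        name = rng.choice(list(observables))
        records.append((name, prover_measure(name)))
    # step 3: the verifier checks every accumulated outcome
    return all(abs(out - observables[name]) <= tol for name, out in records)

# An honest prover reporting ideal values passes; a biased one fails.
ideal = {"Z0": 1.0, "X0X1": 0.5}
print(run_protocol(lambda name: ideal[name], ideal, R=20, tol=0.1))        # True
print(run_protocol(lambda name: ideal[name] + 0.3, ideal, R=20, tol=0.1))  # False
```

A real run would replace the honest lambda with noisy device estimates, which is where the error analysis in the paper enters.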

However, certain conditions limit a dishonest prover from passing the verification process. One possible classical ‘cheating’ strategy would be to find a way to classically sample from the correct distribution, although it is theoretically impossible for the prover to classically sample from the quantum distribution. Moreover, one cannot find a distribution that yields the correct values for all the expectation values, which makes reproducing them impossible. These considerations imply that the proposed protocol can be used for verification when both prover and verifier have bounded (polynomial) storage space. The verifier, however, uses exponential time, and the verification procedure can take an exponential number of rounds; exponential resources are also needed to reproduce expectation values with exponential accuracy. Together, these considerations suggest that no viable cheating strategy can defeat the verification implemented by the proposed protocol.

Notwithstanding these limitations, the suggested protocol provides great insight into the compatibility with analog quantum computation, a step beyond prior works. Furthermore, the authors are able to efficiently compute lower bounds on the adiabatic gap by translating the problem into a semidefinite program, which is a novel perspective. Finally, they prove that states with a positive lower bound on the gap can be prepared in logarithmically scaling time, in contrast to existing methods that scale polynomially, which is important for future experimentation.

12 May 2021
###
Unification of Computation

Both classical and quantum computation involve a basic set of building blocks with which algorithms can be implemented to process information. In classical computation, the basic blocks are typically Boolean circuit components, while in quantum computation, circuits are realized by unitary operations on one or multiple quantum systems, for example two-state systems (qubits). A significant difference between quantum and classical algorithms is that the former perform unitary transformations, which are invertible, in contrast to classical Boolean functions, which are mostly irreversible. Because of that distinction, many researchers have proposed methods to make such Boolean functions reversible. This can be achieved by first embedding the desired function into a reversible Boolean circuit and then constructing a quantum circuit realizing this invertible transform. A popular example of this construction is the well-known Shor's algorithm. Nevertheless, reversibility is not a requirement for the unification of quantum and classical algorithms: quantum algorithms for Hamiltonian simulation and quantum search (Grover's algorithm) have already demonstrated that quantum speedup can be achieved without reversible Boolean functions.

The latest consensus on unification lies in the realization of irreversible, non-linear functions, given that the dynamical behavior of a subsystem of a quantum system can be non-unitary. Recently, a number of frameworks have been proposed to carry out such quantum subsystem transforms. One such approach is Quantum Signal Processing (QSP), which involves interleaving two kinds of single-qubit rotations and can be applied to generalized problems based on composite pulse sequences. Another promising approach is Quantum Singular Value Transformation (QSVT), a generalization of QSP. In QSVT, a non-unitary linear operator (governing a subsystem that can be in a mixed state) is embedded in a larger unitary operator, and a polynomial transform is efficiently applied to its singular values, thereby providing one lens through which one can analyze the source of quantum advantage.
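
The QSP interleaving can be sketched in a few lines of numpy (a standard textbook form of the construction, with one common signal-operator convention): alternating a fixed "signal" rotation W(a) with tunable Z-phase rotations produces a polynomial in the signal a. With all phases trivial, d signal steps yield the degree-d Chebyshev polynomial T_d(a) in the top-left matrix entry.

```python
# QSP sketch: interleave a signal rotation W(a) with Z-phase rotations.
import numpy as np

def W(a):
    # Signal operator: an X-rotation whose angle encodes a = cos(theta).
    b = np.sqrt(1 - a**2)
    return np.array([[a, 1j * b], [1j * b, a]])

def qsp_unitary(a, phases):
    U = np.diag([np.exp(1j * phases[0]), np.exp(-1j * phases[0])])
    for phi in phases[1:]:
        U = U @ W(a) @ np.diag([np.exp(1j * phi), np.exp(-1j * phi)])
    return U

a, d = 0.3, 4
poly_val = qsp_unitary(a, [0.0] * (d + 1))[0, 0].real
print(poly_val, np.cos(d * np.arccos(a)))  # both equal T_4(0.3)
```

Choosing nontrivial phases reshapes this polynomial, which is the knob that QSVT later turns on the singular values of an entire block-encoded matrix.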

In this work, the authors present an analysis of these modern approaches to quantum search, factoring, and simulation, focusing on unifying these three central quantum algorithms in a review/overview style. The objective is to develop a framework based on QSP, establish quantum eigenvalue transformations, and eventually implement QSVT for a range of problems such as the search problem, phase estimation, the eigenvalue threshold problem, amplitude amplification, matrix inversion, and Hamiltonian simulation.

The examples in this work show that multi-qubit problems can be simplified by identifying qubit-like subsystems, which can then be solved using QSP. Utilizing concepts like qubitization, the theorems of QSVT can be generalized for application to amplitude amplification and the search problem by accomplishing a polynomial transform of one specific amplitude. This polynomial transform can in fact be performed over an entire vector space, not just a one-dimensional matrix element, and hence enables quantum eigenvalue transforms. Furthermore, it is shown that QSP sequences can be used to polynomially transform all the singular values of a matrix that has been encoded into a block of a unitary matrix. The polynomial runtime shown for these algorithms suggests that QRAM-based QSVT can attain a significant polynomial speedup, but achieving exponential speedup over classical algorithms is not always possible. Since QRAM-based architectures require resources that grow exponentially with the number of qubits, and more efficient architectures are still an active area of research, it is wise to explore alternative ways of block-encoding matrices.

Such demonstrations provide significant insight into how most known quantum algorithms can be constructed by simply adjusting the parameters of QSVT, thus employing QSVT toward a grand unification of quantum algorithms. So far, QSVT has shown great promise in generalizing QSP and applying a polynomial transformation to the singular values of a linear operator. Apart from the problems mentioned earlier, further applications of QSVT have been realized in the areas of quantum machine learning, quantum walks, fractional query implementation, and Gibbs state preparation. However, popular variational algorithms such as the variational quantum eigensolver and the quantum approximate optimization algorithm have not yet been constructed from QSVT-based subroutines. In the near future, it would be interesting to explore whether the scope of QSVT can encompass these hybrid quantum algorithms. Lastly, an inherent goal of this work is the application of QSVT to creating novel algorithms or extending previously known algorithms to novel noise settings. Thus, a flexible framework like QSVT can potentially bring us closer to the unification of quantum and classical algorithms.

03 May 2021
###
Benchmarking Quantum Computers: what the Q?

The physical realization of quantum computers has advanced to a stage where present day quantum processors feature NISQ devices with tens of qubits. Since these devices have different benefits and drawbacks depending on their quality and architecture, it is highly advantageous to perform a comparative analysis evaluating their performance against defined benchmarks. To date, various structured tasks have been proposed to measure the performance of quantum computers. Typical examples include counting the physical qubits (the building blocks of digital quantum circuits) implemented in the quantum system, measuring the resource efficiency (qubits, gates, time, etc.) of preparing absolutely maximally entangled states, and volumetric and mirror randomized benchmarking.

One of the first popularized performance metrics (introduced by IBM) is "quantum volume", a single-value metric for quantum devices that quantifies how well a quantum system can execute a sizeable random circuit (with circuit depth equal to the qubit grid size) with reasonable fidelity. It enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run on such a system. Another recent metric, introduced by Atos, is the Q-score, which counts the number of variables in a max-cut problem that a device can optimize.
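
The quantum-volume pass criterion can be sketched compactly (the distribution and counts below are hypothetical, and real QV runs aggregate many random circuits with confidence intervals): a depth-n model circuit on n qubits passes if the device produces "heavy" outputs, those whose ideal probability exceeds the median of the ideal distribution, more than two-thirds of the time, and QV = 2^n for the largest passing n.

```python
# Sketch of the heavy-output test behind the quantum volume metric.
import numpy as np

def heavy_output_probability(ideal_probs, sampled_counts):
    # Heavy outputs: bitstrings whose ideal probability exceeds the median.
    median = np.median(ideal_probs)
    heavy = {i for i, p in enumerate(ideal_probs) if p > median}
    total = sum(sampled_counts.values())
    return sum(c for i, c in sampled_counts.items() if i in heavy) / total

ideal = np.array([0.4, 0.3, 0.2, 0.1])     # hypothetical ideal 2-qubit distribution
counts = {0: 450, 1: 300, 2: 150, 3: 100}  # hypothetical device samples
print(heavy_output_probability(ideal, counts))  # 0.75 > 2/3: this circuit passes
```

A perfectly noisy (uniform) sampler would score 0.5 here, which is why the 2/3 threshold separates working devices from noise.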

Along the same lines, the authors of this work propose a quantum benchmark suite that serves as a comparison tool for currently available and upcoming quantum computing platforms from the perspective of an end user. The objective is to analyze the performance of the available devices by providing meaningful benchmark scores for a series of different tests. The chosen benchmarks use numerical metrics, including uncertainties, which can characterize different noise aspects and allow direct comparison of the performance gains between devices. The authors present six visual benchmarks with structured circuits: Bell Test, complex transformations of the Riemann sphere, Line Drawing, Quantum Matrix Inversion, Platonic, and Fractals. All these benchmarks test different aspects of the quantum hardware, such as gate fidelity, readout noise, and the ability of the compilers to take full advantage of the underlying device topology, in a more holistic approach than the metrics introduced so far. In this way, the authors hope to offer more information than a single one-dimensional meta-parameter, still at a quick glance via a visual representation.
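
For the Bell Test entry in such a suite, a natural score is the CHSH value computed from measured correlators; the sketch below is a generic textbook CHSH calculation with hypothetical correlator values, not the suite's exact scoring formula.

```python
# CHSH score from measured correlators E[(a, b)] = <A_a B_b>;
# S > 2 signals nonclassical correlations, 2*sqrt(2) is the quantum maximum.
def chsh_score(E):
    return abs(E[(0, 0)] + E[(0, 1)] + E[(1, 0)] - E[(1, 1)])

# Ideal measurement settings on a perfect Bell pair give |correlators| = 1/sqrt(2).
r = 2 ** -0.5
E = {(0, 0): r, (0, 1): r, (1, 0): r, (1, 1): -r}
print(chsh_score(E))  # 2.828..., the Tsirelson bound
```

On a real device, gate and readout noise pull each correlator toward zero, so the gap between the measured S and 2√2 is itself a compact quality indicator.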

Testing of these benchmarks was performed on currently available quantum devices from Google, IBM, and Rigetti using several SDKs and APIs (Qiskit / IBMQ for IBM, Forest / QCS and Amazon Braket for Rigetti, Amazon Braket for IonQ, and cirq for Google).

All the devices receive a numerical score for each of the implemented tests, which can be used to cross-evaluate performance. Additionally, the performance of various NISQ devices is analyzed through a series of test scenarios and the proposed metrics. The overall analysis suggests that the proposed benchmarks can be readily implemented on 2-qubit devices with circuit depths < 10, as well as on currently available small-scale quantum devices. These benchmarks are envisioned to be tested on larger and more complex devices in the future; therefore, the scalability of such metrics is also investigated.

The scores obtained from the experimental comparisons are then compared to the estimated ideal score based on a finite number of measurements. One should keep in mind, however, that these measurements include statistical errors due to measurement noise, which cannot be eliminated completely. Nevertheless, the error margins presented in this work are shown to have the “expected deviation” from the ideal score, meaning that the actual experimental error margins agree with the error estimates observed in simulated experiments. The authors also find their scores to correlate well with IBM’s Quantum Volume score, although individual cases still vary.

Another crucial factor to analyze is the fluctuation observed while collecting experimental data over a period of time. This implies a change in device performance during the run time, which in turn affects the estimated scores, causing a time variance. However, exact estimation of this variance requires more experimentation. It would be advantageous for future experiments to explore these temporal inhomogeneities in addition to encompassing statistical uncertainty in the error margins. Such benchmarks can then provide a holistic evaluation, including the time factor, when comparing different quantum devices.

One potentially major aspect of a quantum performance metric is how widespread its use is. One may have a great metric, but if nobody uses it, its usefulness is low; if a metric is used by everyone but is itself of low significance, the usefulness is equally low. We hope the community can converge on something comparable, fair, and standardized, but it may take some years before that happens in this rapidly fluctuating field.

One of the first popularized performance metrics proposed (introduced by IBM) is "quantum volume", which is a single-value metric for quantum devices that quantifies how well a quantum system is capable of executing a sizeable random circuit (with circuit depth equal to qubit grid size) with reasonable fidelity. It enables the comparison of hardware with widely different performance characteristics and quantifies the complexity of algorithms that can be run on such a system. Another recent metric that was introduced by Atos is called Q-score, which counts the number of variables in a max-cut problem that a device can optimize.

Along the same lines, the authors in this work, propose a quantum benchmark suite which serves as a comparison tool for the currently available and upcoming quantum computing platforms from the perspective of an end user. The objective is to analyze the performance of the available devices by providing meaningful benchmark scores for a series of different tests. The chosen benchmarks use numerical metrics including uncertainties which can characterize different noise aspects and can allow direct comparison of the performance gains between devices. The authors present six visual benchmarks with structured circuits; Bell Test, complex transformations of the Riemann sphere, Line Drawing, Quantum Matrix Inversion, Platonic and Fractals. All these benchmarks test different aspects of the quantum hardware such as gate fidelity, readout noise, and the ability of the compilers to take full advantage of the underlying device topology, yet in a more holistic approach than the metrics introduced so-far. In this way, the authors hope to offer more information than just 1 single-dimensional meta-parameter, still in a quick glance at a visual representation.

Testing of these benchmarks was performed on currently available quantum devices from Google, IBM, Rigetti, and IonQ using several SDKs and APIs (Qiskit / IBMQ for IBM, Forest / QCS and Amazon Braket for Rigetti, Amazon Braket for IonQ, and Cirq for Google).

All the devices receive a numerical score for each of the implemented tests, which can be used in cross-evaluating performance. Additionally, the performance of various NISQ devices is analyzed through a series of test scenarios and the proposed metrics. The overall analysis suggests that the proposed benchmarks can be readily implemented on 2-qubit devices with circuit depths < 10, as well as on currently available small-scale quantum devices. These benchmarks are envisioned to be tested on larger and more complex devices that will become available in the future; the scalability of such metrics is therefore also investigated.

The scores obtained from the experimental comparisons are then compared to the estimated ideal score based on a finite number of measurements. One should keep in mind, however, that these measurements also include statistical errors due to measurement noise, which cannot be eliminated completely. Nevertheless, the error margins presented in this work are shown to have the “expected deviation” from the ideal score, meaning that the actual experimental error margins agree with the error estimates observed in simulated experiments. The authors also find their scores to correlate well with IBM’s Quantum Volume score, although individual cases still vary.
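A standard way to attach such a statistical error margin to a score computed from a finite number of shots is bootstrap resampling. The sketch below is a generic illustration of that idea, not the authors' code; the 60/40 "device" and all names are hypothetical:

```python
import numpy as np

def bootstrap_margin(shots, score_fn, n_boot=1000, seed=0):
    """Bootstrap estimate of the statistical error margin of a score
    computed from a finite set of measurement shots."""
    rng = np.random.default_rng(seed)
    shots = np.asarray(shots)
    scores = [score_fn(rng.choice(shots, size=shots.size, replace=True))
              for _ in range(n_boot)]
    return np.std(scores)

# Example: the 'score' is the probability of measuring |1> on one qubit,
# estimated from 500 simulated shots of a hypothetical 60/40 device.
rng = np.random.default_rng(1)
shots = rng.random(500) < 0.6            # True = outcome |1>
score = shots.mean()
margin = bootstrap_margin(shots, np.mean)
print(f"score = {score:.3f} +/- {margin:.3f}")
# Compare to the analytic binomial shot noise sqrt(p(1-p)/N) ~ 0.022
```

The appeal of the bootstrap over the analytic formula is that it applies unchanged to composite scores (e.g. a CHSH value built from several correlators) where no closed-form shot-noise expression is convenient.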

Another crucial factor to analyze is the fluctuation observed in experimental data collected over a period of time. Such fluctuations imply a change in device performance during the measurement period, which in turn affects the estimated scores and introduces a time variance. However, the exact estimation of this variance requires more experimentation. In future work it would be advantageous to explore these temporal inhomogeneities in addition to encompassing statistical uncertainty in the error margins. Such benchmarks could then provide a holistic evaluation, including the time factor, when comparing different quantum devices.

25 April 2021
###
Quantum Architecture Learning

There has been a large body of work investigating potential advantages of quantum computational algorithms over their classical counterparts, with the ultimate proof being an experimental verification of a "quantum advantage". Some of the classical problems being targeted include the factorization of large integers and unstructured database search. While advances in both experimental hardware and software are driving the field slowly but steadily towards quantum supremacy, more effort is still required on both fronts. Some of the most promising algorithms for potential near-term quantum advantage belong to the class of variational quantum algorithms (VQAs). VQAs have been applied in many scientific domains, including molecular dynamics studies and quantum optimization problems. VQAs are also studied for various quantum machine learning (QML) applications such as regression, classification, generative modeling, deep reinforcement learning, sequence modeling, speech recognition, metric and embedding learning, transfer learning, and federated learning.

In addition to VQAs being applied as quantum implementations of classical machine learning paradigms, VQAs may conversely also benefit from various machine learning paradigms themselves, one of the most popular being Reinforcement Learning (RL). RL has been utilized to assist in several problems in quantum information processing, such as decoding errors, quantum feedback, and adaptive code design. While so far such schemes are envisioned with a classical computer as an RL co-processor, implementing "quantum" RL on quantum computers has been shown to make the decision-making process for RL agents quadratically faster than on classical hardware.

In this work, the authors present a new quantum architecture search framework in which an RL agent, powered by deep reinforcement learning (DRL), interacts with a quantum computer or quantum simulator. The objective is to investigate the potential of training an RL agent to search for a quantum circuit architecture that generates a desired quantum state.

The proposed framework consists of two major components: a quantum computer or quantum simulator, and an RL agent hosted on a classical computer that interacts with it. In each time step, the RL agent chooses an action from a set of possible actions consisting of different quantum operations (one- and two-qubit gates), thereby updating the quantum circuit. After each update, the quantum simulator executes the new circuit and calculates the fidelity to the given target state. This fidelity then determines the reward sent back to the agent: positive if the fidelity reaches a pre-defined threshold, negative otherwise.
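The interaction loop just described can be sketched as a toy gym-style environment backed by an exact statevector simulation. Everything here (the gate set, the Bell-state target, the threshold and reward values, the class name) is this sketch's own choice, not the authors' implementation:

```python
import numpy as np

# One-qubit gates and CNOT as a (hypothetical) discrete action set
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]], dtype=float)
I2 = np.eye(2)
CNOT = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                 [0, 0, 0, 1], [0, 0, 1, 0]], dtype=float)

class CircuitEnv:
    """Toy 2-qubit environment: the agent appends gates; the reward is
    positive once the state fidelity to the target passes a threshold."""
    def __init__(self, target, threshold=0.99, max_steps=20):
        self.actions = [np.kron(H, I2), np.kron(I2, H),
                        np.kron(X, I2), np.kron(I2, X), CNOT]
        self.target, self.threshold, self.max_steps = target, threshold, max_steps
        self.reset()

    def reset(self):
        self.state = np.zeros(4); self.state[0] = 1.0   # start from |00>
        self.steps = 0
        return self.state

    def step(self, action):
        self.state = self.actions[action] @ self.state  # append chosen gate
        self.steps += 1
        fid = abs(np.vdot(self.target, self.state)) ** 2
        done = fid >= self.threshold or self.steps >= self.max_steps
        reward = 1.0 if fid >= self.threshold else -0.01
        return self.state, reward, done, {"fidelity": fid}

# Target: the Bell state (|00> + |11>) / sqrt(2)
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)
env = CircuitEnv(bell)
env.reset()
for a in (0, 4):                        # H on qubit 0, then CNOT
    state, reward, done, info = env.step(a)
print(f"fidelity = {info['fidelity']:.3f}, reward = {reward}")
```

A DRL algorithm such as PPO would replace the hand-picked action sequence above with a learned policy over this action set.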

The authors use the set of single-qubit Pauli measurements as the "states" or observations which the environment returns to the RL agent. The RL agent is then iteratively updated based on this information. The procedure continues until the agent reaches either the desired fidelity threshold or the maximum allowed number of steps. RL algorithms such as A2C and PPO were used to optimize the agent. The results demonstrate that, given the same neural network architecture, PPO performs significantly better than A2C in terms of convergence speed and stability, in both noise-free and noisy environments. The result is shown to be consistent in the 2-qubit case.
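Assuming the observation vector is built from single-qubit Pauli expectation values (a plausible reading of "single-qubit Pauli measurements"; the function name and construction below are this sketch's own), it can be computed exactly from a statevector as follows:

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def pauli_observation(state, n_qubits):
    """Return [<X_i>, <Y_i>, <Z_i>] for every qubit i: the observation
    vector an environment could hand to the RL agent."""
    obs = []
    for i in range(n_qubits):
        for P in (X, Y, Z):
            op = np.eye(1)
            for j in range(n_qubits):          # P on qubit i, identity elsewhere
                op = np.kron(op, P if j == i else I2)
            obs.append(np.real(np.vdot(state, op @ state)))
    return np.array(obs)

# Example: the Bell state carries no single-qubit polarization,
# so all six expectations vanish
bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
print(pauli_observation(bell, 2))
```

On hardware these expectations would be estimated from repeated measurements in the X, Y, and Z bases rather than read off the statevector, which adds the shot noise discussed for the benchmarks above.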

In this work, the simulation of quantum circuits in both noise-free and noisy environments is implemented via IBM's Qiskit software. The efficiency of this approach is low for large quantum circuits, as the simulation complexity scales exponentially with the number of qubits. Theoretically, a quantum circuit can approximate any quantum state (up to an error tolerance) using a finite number of gates, given a universal set of one- and two-qubit quantum gates. Hence, in principle, the RL approach is valid for arbitrarily many qubits, but it is extremely hard to implement computationally. Verifying this kind of experiment on real quantum computers would also require thousands of training episodes. One can expect significant development once quantum computing resources become more accessible. Finally, another interesting direction would be to investigate the quantum architecture search problem with different target quantum states and different noise configurations.



Copyright © Qu & Co BV