Displaying items by tag: Machine learning

Thu, 22 July 2021 15:55

Quantum Training of a Classical DNN

Quantum Machine Learning (QML) techniques have been intensively researched in the past few years, showing great promise in a number of applications through various quantum algorithmic techniques and potential speedups compared to their classical counterparts. Many of these techniques exploit the high dimensionality of the Hilbert space through kernel methods, while most exponential speedups require fault-tolerant quantum computers with access to a quantum memory (QRAM).

While previous works have successfully established quantum analogues of common classical approaches such as classification, clustering, regression, and other data-analysis tasks, existing quantum neural network proposals have yet to clearly demonstrate a quantum speedup for tasks on classical datasets. While algorithms for quantum machine learning are largely based on methods from linear algebra, neural networks rely on nonlinearity to act as universal approximators. Given the success of deep neural networks in classical machine learning, whether quantum computers can efficiently train classical deep neural networks remains an interesting open question.

In this work, the authors show how one may exploit the benefit of deep neural networks via the Neural Tangent Kernel (NTK) formalism, i.e. by increasing the number of hidden layers L. As the neural network is deepened, the NTK matrix becomes increasingly well-conditioned, improving the speed at which gradient descent trains the network. The authors propose a general framework for fully connected neural network architectures via a quantum algorithm that trains a wide and deep neural network under an approximation of the NTK, estimating the trained network output with vanishing error as the training set size increases. Furthermore, the linear form of the NTK offers a general framework for neural network architectures similar to those used in state-of-the-art applications of deep learning. The work provides two different approximations: a sparsified NTK and a diagonal NTK. The diagonal NTK is defined by setting all off-diagonal elements of the NTK to zero, while the sparsified NTK permits only a limited number of off-diagonal elements to be nonzero in any row or column. For both approximations, the matrix-element bounds of the NTK guarantee the convergence of the approximation and enable efficient gradient descent, which highlights the correspondence between conditions for trainable classical neural networks and an efficient quantum algorithm.
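
To make the role of these approximations concrete, below is a minimal NumPy sketch (not the authors' quantum algorithm) of kernel regression under the NTK: for squared loss trained to convergence, the infinite-width network's prediction takes the form f(x) = k(x, X) K⁻¹ y, and the diagonal approximation simply zeroes out the off-diagonal entries of the training kernel K.

```python
import numpy as np

def ntk_predict(K_train, k_test, y_train, approx="full"):
    """Prediction of a trained infinite-width network under the NTK,
    f(x) = k(x, X) K^{-1} y (squared loss, trained to convergence).

    approx="diag" mimics the diagonal NTK approximation discussed above
    by zeroing out all off-diagonal elements of the training kernel.
    """
    K = np.diag(np.diag(K_train)) if approx == "diag" else K_train
    alpha = np.linalg.solve(K, y_train)   # reduces to element-wise division in the diagonal case
    return k_test @ alpha                 # k_test has shape (n_test, n_train)
```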

The sparsified NTK approximation has a strictly tighter upper bound on the error, given by the Gershgorin circle theorem, compared to the diagonal NTK approximation, suggesting superior performance of the former. To support this theoretical result, numerical demonstrations were performed on the MNIST image dataset, more specifically on binary classification between pairs of MNIST digits. Two infinite-width neural network architectures are used: i) a feedforward fully-connected neural network and ii) an architecture based on the convolutional Myrtle network. In each case, the handwritten-image dataset is projected onto the surface of a unit sphere during the initial encoding into QRAM. The results demonstrate that even relatively common data distributions satisfy the necessary conditions for efficient quantum state preparation and quantum state measurement, fully realizing an exponential quantum speedup over gradient descent.
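
For concreteness, the projection step mentioned above can be read as rescaling each flattened image to unit Euclidean norm so that every data point lies on the unit sphere; a minimal sketch of that preprocessing (the QRAM encoding itself is not reproduced here):

```python
import numpy as np

def project_to_unit_sphere(images):
    # Flatten each image and rescale it to unit norm so that every data
    # point lies on the surface of a unit sphere before state preparation.
    flat = images.reshape(len(images), -1).astype(float)
    return flat / np.linalg.norm(flat, axis=1, keepdims=True)
```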

This work demonstrates a quantum algorithm that trains classical neural networks in logarithmic time O(log n) and provides numerical evidence of such efficiency on the standard MNIST image dataset. As the neural tangent kernel offers a versatile framework across architectures such as convolutional neural networks, graph neural networks, and transformers, the approximation introduced by a sparsified or diagonal kernel in this work has the potential to extend to any chaotic kernel. Also, increasing the depth of a chaotic kernel might give rise to the kernel structure key to well-conditioning and successful approximation in logarithmic time. This potentially opens new possibilities for quantum computing to improve machine learning methods beyond the scope of classical neural networks.
Published in Blog
Fri, 09 July 2021 15:53

Quantum Evolution Kernel

Classical machine learning research has been blossoming in recent years. Some of the most interesting approaches focus on studying graphs and developing algorithms to exploit the information contained in their structures. In many fields relevant to the modern world, such as chemistry, bioinformatics, social network analysis, computer vision, and natural language processing, one can find inherent graph structures which hold valuable information about the underlying problem. One way to quantify the similarity between graph structures is via graph kernels. The graph kernel approach is based on finding a way to associate any graph with a feature vector encapsulating its relevant characteristics (the feature map) and then computing the similarity between those vectors, in the form of a scalar product in the feature space.

However, the design of a graph kernel is always dependent on the problem and on the specific features that need to be encapsulated. Thus, its biggest limitation is the algorithmic complexity incurred when it is applied to a variety of problems; typical examples are graphlet subsampling and random-walk kernels. In this work, the authors propose a versatile and easily scalable graph kernel based on the time evolution of a quantum system, where the information about a graph is encoded in the parameters of a tunable Hamiltonian acting on an array of qubits. An alternating sequence of free Hamiltonian evolutions produces measurement samples that retain key features of the data.

The proposed Quantum Evolution (QE) kernel approach is based on associating each graph with a probability distribution, obtained by measuring an observable on a quantum system whose dynamics are driven by the topology of the associated graph. The QE kernel between two graphs is then given by a distance between their respective probability distributions. The protocol was tested on a graph classification task on several datasets, and the results were compared with other graph kernels. Each graph kernel is associated with a Support Vector Machine, and its accuracy is given by a score, namely the proportion of graphs in the test set that are attributed to the correct class. A grid search is performed over the defined range, and the value with the best score is selected and averaged over 10 repetitions of the following cross-validation procedure: the dataset is first randomly split into 10 subsets of equal size; the kernel is trained on 9 of the subsets and its accuracy is measured on the remaining one; the accuracy of a split is then the average over the 10 possible choices of the test subset. The performance of the kernel is also shown to be stable against detection errors and noise.
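
A schematic of how such a kernel feeds into the SVM evaluation protocol described above; the distance used here (an exponentiated total-variation distance between outcome distributions) is only an illustrative stand-in for the paper's choice, and the measured distributions are assumed to be precomputed.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def qe_kernel(p, q, mu=1.0):
    # Similarity between two measurement-outcome distributions p and q,
    # here an exponentiated total-variation distance (illustrative choice).
    return np.exp(-mu * 0.5 * np.abs(p - q).sum())

def kernel_matrix(distributions, mu=1.0):
    # distributions[i]: outcome distribution measured for graph i (assumed given)
    n = len(distributions)
    return np.array([[qe_kernel(distributions[i], distributions[j], mu)
                      for j in range(n)] for i in range(n)])

def evaluate(K, labels):
    # 10-fold cross-validated accuracy of an SVM using the precomputed kernel
    clf = SVC(kernel="precomputed")
    return cross_val_score(clf, K, labels, cv=10).mean()
```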

The results show that for all datasets, at least one quantum kernel performs better than the best classical kernel. The depth of the time evolution is crucial, as better accuracy is obtained by increasing the number of layers of the Ising evolution, suggesting that richer features can be captured by adding more layers. However, as shown in the IMDB-MULTI and Fingerprints experiments, the deeper Ising scheme performed worse because of incomplete convergence of the optimization: bigger datasets require more function evaluations, which in turn require a larger computational budget.

The physical implementation of the approach is presented for the case of a programmable array of qubits made of neutral atoms trapped in optical tweezers. By promoting the atoms to Rydberg states, they behave as huge electric dipoles and experience dipole-dipole interactions that map onto spin Hamiltonians. The spins undergo various types of interactions depending on the Rydberg states involved in the process, leading to different Hamiltonians. This platform is limited to smaller graphs for now (~100 nodes); however, the authors point out that a significant speed-up could potentially also be gained by implementing variations of near-term quantum computing algorithms on classical hardware such as GPUs, in a so-called quantum-inspired manner, possibly applicable to the current approach as well. Moreover, designing the right observable and choosing the appropriate Hamiltonian are crucial for improving the accuracy of the approach. Finally, potential improvements of such experiments lie in the use of multi-kernel learning, which combines different kernels built from the same sequence of measurements on the final states, and in the optimization of noise-cancellation techniques.
Published in Blog
The study of quantum many-body systems is one of the most significant applications of quantum computing, with applications in physics, materials science, and chemistry. Classical algorithms and methods such as density functional theory, Monte Carlo, and the density-matrix renormalization group have helped solve some problems in many-body systems, but they prove inefficient for a general class of problems. The reason classical computers fall short in simulating quantum many-body physics is that accurately describing an 𝑛-qubit quantum system may require an amount of classical data that is exponential in 𝑛.

Recently, classical machine learning (ML) techniques have shown promising results in investigating problems in quantum many-body physics. Although these methods are effective in practice for presently accessible system sizes, they come without theoretical guarantees of efficiency for the larger system sizes that such classical algorithms cannot yet reach. In this work, the authors set out to establish classical machine learning algorithms based on the concept of a classical shadow derived from randomized Pauli measurements. The objective is to examine two potential applications of classical ML: the first involves learning to predict classical representations of quantum many-body ground states, while the second attempts to classify quantum states of matter into phases in a supervised learning scenario.

In the first application, a family of Hamiltonians is considered, where the Hamiltonian 𝐻(𝑥) depends on m real parameters. The ML algorithm is trained on a set of training data consisting of sampled values of 𝑥, each accompanied by the classical shadow of the ground state of the corresponding Hamiltonian 𝐻(𝑥). The ML algorithm then predicts a classical representation of the ground state for new values of 𝑥, and ground state properties are estimated from this predicted classical representation. This method is shown to be efficient, with provable bounds, when the ground state properties to be predicted do not vary too rapidly as a function of 𝑥.
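
A toy sketch of this first application, with placeholder data standing in for the Hamiltonian parameters x and for the ground-state properties estimated from classical shadows (the authors use kernels with provable guarantees; the RBF choice here is only illustrative):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

# Hypothetical training data: parameters x_i of the Hamiltonian H(x) and the
# property value tr(O rho(x_i)) estimated from the classical shadow of the ground state.
X_train = np.random.uniform(-1, 1, size=(200, 5))   # sampled Hamiltonian parameters (placeholder)
y_train = np.cos(X_train.sum(axis=1))                # stand-in for shadow-estimated properties

# A simple kernel model mapping x -> predicted ground-state property.
model = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0).fit(X_train, y_train)
prediction = model.predict(np.zeros((1, 5)))         # property estimate at a new x
```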

In the second application, in order to predict the phase label for new quantum states that were not encountered during training, it was assumed that any two phases can be distinguished by a nonlinear function of marginal density operators of subsystems of constant size. It is shown that if such a function exists, a classical ML algorithm can learn to distinguish the phases using an amount of training data and classical processing which are polynomial in the system size. The authors also performed numerical experiments on quantum systems such as the Rydberg atom chain and 2D antiferromagnetic Heisenberg model, in order to test the efficiency of the learning and prediction phases. The numerics verify the claim that classical ML algorithms can efficiently classify a wide range of quantum phases of matter, in particular when informed by data collected in physical experiments and/or simulations on quantum computers.
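
And a corresponding sketch of the second application, training a classifier on shadow-derived feature vectors with known phase labels (all data below is synthetic placeholder data; the paper's guarantees hold for specific kernels acting on reduced density matrices):

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Hypothetical setup: each row of `features` is a feature vector built from the
# classical shadow of a state (e.g. estimated few-body correlation functions),
# and `phase` is the known phase label used for supervised training.
features = np.random.randn(300, 50)                       # placeholder shadow-derived features
phase = (features[:, :5].sum(axis=1) > 0).astype(int)     # placeholder phase labels

X_tr, X_te, y_tr, y_te = train_test_split(features, phase, test_size=0.3, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)                    # illustrative kernel choice
print("test accuracy:", clf.score(X_te, y_te))
```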

Overall, the work builds on the classical shadow formalism, in which each quantum state is represented by its classical shadow, obtained either from a classical computation or from an experiment on a quantum device. The classical ML algorithm is then trained on labeled classical shadows and learns to predict labels for new classical shadows. This is an interesting concept that can potentially be applied to other classical representations of quantum states which could be exploited by classical ML. As an added bonus, even existing highly programmable quantum simulators that lack the local control needed to implement single-qubit Pauli measurements can benefit from such a formalism.

An interesting outlook for quantum computing can then be sketched as follows: typically, the computational scaling of quantum algorithms can be shown to be significantly better than that of classical algorithms for the same task in quantum many-body problems. However, the requirements of fault-tolerant quantum computers and other overheads imply a pre-factor that allows only a limited number of (very accurate) simulations to be performed in a reasonable timeframe. In this sense, quantum computers by themselves may not help in generating huge datasets or in exhaustively exploring a large configurational space to design new materials. However, when combined with machine learning approaches such as those presented in the work discussed here, accurate quantum-computer-generated data may replace the physical experiments otherwise required to generate the data in this strategy. That in turn allows for a purely in-silico and more cost-effective strategy for the discovery and design of new materials and the understanding of (potentially new) physics.
Published in Blog
With the recent rapid development of quantum algorithms, applications of quantum computation in the field of Natural Language Processing (NLP) have emerged. Words can be elegantly represented as vectors, matrices, and higher-rank tensors, depending on their grammatical function, that “contract” with each other following grammatical rules. Intuitively, one may therefore wonder whether and where quantum computing could enhance the performance of computational NLP.

Grover’s algorithm, a well-known quantum search algorithm, allows one to find the correct item in a database, with quadratic speedup. Predicting the right answer to a language-based question can be seen from the viewpoint of a search task. The NLP computational task of natural language question answering could therefore benefit from Grover’s algorithm, an idea that is explored in the work we comment on today, which was conducted by researchers at Utrecht University, the Netherlands.

Vector-based representation of language structures has been one of the main concepts in NLP in general. Such representations can be deeply generalized and offer a flexible tool. However, to interpret text fragments in terms of their grammatical features while staying in vector-space semantics, the dimension of the representation scales quickly, which has been a limiting factor in vector-based semantics implementations.

In this work, the authors show that a quantum speedup can be exploited if each word is represented as a quantum state that serves as input to a quantum circuit. In this setting, words are represented as multi-partite quantum states and, when these are contracted with one another, the meaning of larger text fragments is encoded in the resulting quantum states. This recasts the problem as a search problem, which in turn can be targeted with Grover’s algorithm.

Firstly, text fragments are formed by performing grammar-specific quantum measurements that correspond to the contraction of words. Taking quantum states as the representations of all input words, contractions are then identified with the expectation value of an appropriate permutation operator. With this setup, each reading of an ambiguous phrase corresponds to a particular circuit, and different readings are interchangeable upon the application of a number of swap gates. While this addresses the problem of syntactic ambiguities by making use of the quantum superposition principle, ambiguities at the word level can be immediately accommodated by using density matrices instead of pure states.
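
The identification of a contraction with the expectation value of a permutation operator can be checked numerically: for two density matrices ρ and σ, the expectation of the SWAP operator on ρ⊗σ equals Tr(ρσ), i.e. the overlap that implements the contraction. A small generic NumPy check of this identity (not tied to the paper's specific circuits):

```python
import numpy as np

def random_density_matrix(d):
    a = np.random.randn(d, d) + 1j * np.random.randn(d, d)
    rho = a @ a.conj().T
    return rho / np.trace(rho)

d = 2
rho, sigma = random_density_matrix(d), random_density_matrix(d)

# SWAP operator on C^d (x) C^d: SWAP |i>|j> = |j>|i>
swap = np.zeros((d * d, d * d))
for i in range(d):
    for j in range(d):
        swap[j * d + i, i * d + j] = 1.0

lhs = np.trace(swap @ np.kron(rho, sigma)).real   # <SWAP> on rho (x) sigma
rhs = np.trace(rho @ sigma).real                  # overlap Tr(rho sigma)
assert np.isclose(lhs, rhs)
```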

The work also demonstrates how answers to a “why” question are formulated. As input, the algorithm takes a multi-partite state in quantum superposition, representing a why-question and its possible answers, and performs a series of tensor contractions. A series of gates then acts repeatedly on the post-contraction state, guaranteeing that a correct answer to the question is obtained upon a single final measurement. The correct answer is provided using information given directly by the tensor contractions of the word representations, without manually feeding in any labels or learning the answers to other questions. This translates the learning paradigm into a simple question-answer class of problems and presents a scalable approach.

The proposed framework can be used to find a representation for a particular question that contains all the possible answers in equal quantum superposition, and allows for the construction of an oracle that can detect a correct answer while being agnostic to the specific question. One obvious advantage of this approach is that it permits the simultaneous treatment of ambiguous phrases in English, which fits naturally into a Grover search problem. Applying Grover’s algorithm to question-answering turns the representation of the question and answers into the input of the algorithm, together with an oracle that identifies the correct answers. This construction can deal with certain types of ambiguous phrases by keeping the various meanings in quantum superposition.
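
A minimal classical simulation of the Grover loop underlying this construction, with the candidate answers in uniform superposition, an oracle that (for illustration only) already knows which index is correct, and the standard inversion-about-the-mean diffusion step:

```python
import numpy as np

N, correct = 16, 11                      # 16 candidate answers, index 11 marked (illustrative)
state = np.full(N, 1 / np.sqrt(N))       # uniform superposition over all answers

iterations = int(np.floor(np.pi / 4 * np.sqrt(N)))
for _ in range(iterations):
    state[correct] *= -1                 # oracle: phase flip on the correct answer
    state = 2 * state.mean() - state     # diffusion: inversion about the mean

print("probability of measuring the correct answer:", state[correct] ** 2)
```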

Further work may focus on finding an effective implementation of the measurement of the permutation operator for an arbitrary number of qubits, possibly making use of the Hadamard test, and on understanding how to find a universal form of the inversion operator. Such an extension of the present formalism could furthermore provide a better understanding of the temporal evolution of the meanings of sentences.
Published in Blog
Mon, 07 June 2021 00:00

The Inductive Bias of Quantum Kernels

A field of quantum information that has gained a lot of attention recently is Quantum Machine Learning (QML). One of the reasons QML is considered an exciting direction is that quantum information processing promises scaling advantages over classical methods, which means that common machine learning tasks may enjoy improved efficiency when they are carried out on a quantum computer. In early works, the primary focus of research in Quantum Machine Learning was on speeding up linear algebra subroutines, although it was later shown that their runtime scales polynomially in the inverse error, and practical use-cases are limited because of the large pre-factor overhead of FTQC requirements. An alternative approach is to use a quantum device to define and implement the function class and then optimize on a classical computer. This can be achieved by employing Quantum Neural Networks (QNNs), or parameterized quantum circuits; however, the optimization of QNNs is challenging because of the non-convex nature of the optimization landscape and because of gradient-based optimization inefficiencies known as Barren Plateaus. An approach related to the QNN that allows for convex optimization is known as a quantum kernel, which uses a predefined way of encoding the data in the quantum system and defines the kernel as the inner product of two quantum states.
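
As a concrete illustration of the last point, a quantum kernel can be sketched on a classical statevector simulator as the squared overlap of two encoded states; the single-qubit-per-feature rotation encoding below is an illustrative choice, not the embedding of any specific paper.

```python
import numpy as np

def feature_state(x):
    # Product state with one qubit per feature, |phi(x_j)> = cos(x_j)|0> + sin(x_j)|1>
    state = np.array([1.0])
    for xj in x:
        state = np.kron(state, np.array([np.cos(xj), np.sin(xj)]))
    return state

def quantum_kernel(x1, x2):
    # Kernel defined as the fidelity |<phi(x1)|phi(x2)>|^2 between encoded states
    return abs(np.dot(feature_state(x1), feature_state(x2))) ** 2
```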

It is generally accepted that a given algorithm cannot efficiently solve many different types of problems, even if it can efficiently solve some of them. Likewise, a standard inductive bias in ML will struggle to learn discontinuous functions, as it primarily prefers continuous functions. For QML models to outperform classical ML models, an inductive bias is required that cannot be encoded (efficiently) with a classical machine.
The authors of this work provide a theoretical analysis of the inductive bias of quantum machine learning models based on the spectral properties of quantum kernels, along with experimental verification. The objective is to relate quantum advantages to the classical concept of inductive bias. The class of functions considered in this work are functions in a Reproducing Kernel Hilbert Space (RKHS) whose kernel is evaluated by a quantum computation.
Interestingly, these RKHS are classically tractable in some regimes even in high-dimensional or infinite-dimensional cases, which implies that the ability to evaluate such a kernel on a quantum device does not by itself guarantee a quantum advantage. Rather, a stricter requirement needs to be formulated.

For kernel methods, the qualitative concept of inductive bias can be formalized by analyzing the spectrum of the kernel and relating it to the target function. The results in this work show that the generalization of quantum kernel methods fails as soon as the data embedding into the quantum Hilbert space becomes expressive. By projecting the quantum kernel, it is possible to construct inductive biases that are hard to create classically; however, the fluctuations of the reduced density matrix around its mean are observed to vanish exponentially in the number of qubits. Since the value of the kernel is determined by measurements, attaining exponential accuracy of the kernel function requires exponentially many measurements. Thus, there is the same limitation as with other QNN architectures: as the size of the quantum system (number of qubits) becomes large, the vanishing-gradient problem (Barren Plateaus) is introduced again.
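
The spectral analysis referred to above can be made concrete with a small diagnostic: decompose the Gram matrix of the kernel and check how much of the target's weight falls on eigenvectors with small eigenvalues. This is a generic kernel-methods diagnostic, not code from the paper; if most of the label energy sits on low-eigenvalue directions, the kernel carries little inductive bias for that target.

```python
import numpy as np

def spectral_alignment(K, y):
    # Eigendecomposition of the kernel Gram matrix and projection of the labels
    # onto its eigenbasis, sorted from the largest eigenvalue downwards.
    eigvals, eigvecs = np.linalg.eigh(K)
    weights = (eigvecs.T @ y) ** 2          # label energy per kernel eigenvector
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], weights[order] / weights.sum()
```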

In contrast to other methods, it is the spectral properties of the RKHS, rather than its dimensionality, that determine the feasibility of learning. To enable learning, models with a stronger inductive bias need to be considered. The results indicate that an exponential advantage can be achieved in a particular example, where one knows that the data comes from a single-qubit observable and constrains the RKHS accordingly.
The results suggest that a quantum advantage can only be achieved if the data-generating process is known, cannot easily be encoded by classical methods, and this information can be used to bias the quantum model.

This work provides guidance on obtaining a quantum advantage on classical datasets with the quantum architecture known as a quantum kernel. Unfortunately, for large quantum systems, quantum kernels cannot avoid the requirement of exponentially many measurements to evaluate the value of the kernel. Therefore, unless one knows how the data generation occurs, no advantage over classical algorithms can be obtained. Quantum computers with error correction can potentially assist in defining kernels with a strong bias that do not require exponentially many measurements. However, even for fully coherent quantum computers, it is still considered a challenge to efficiently encode a strong inductive bias about a classical dataset. It is speculated that the efficiency of quantum kernels can be improved when working with quantum data instead of classical data. However, regardless of the nature of the data, it is still unclear whether utilizing QNN architectures can improve supervised learning on classical datasets.
Published in Blog
Sun, 25 April 2021 12:00

Quantum Architecture Learning

There has been a large body of work investigating potential advantages of quantum computational algorithms over their classical counterparts, with the ultimate proof being an experimental verification of a “quantum advantage”. Some of the classical problems being targeted include the factorization of large integers and unstructured database search. While advances in both experimental hardware and software are driving the field slowly but steadily towards quantum supremacy, more effort is still required on both fronts. Some of the most promising algorithms for potential near-term quantum advantage belong to the class of variational quantum algorithms (VQAs). VQAs have been applied to many scientific domains, including molecular dynamics studies and quantum optimization problems. VQAs are also studied for various quantum machine learning (QML) applications such as regression, classification, generative modeling, deep reinforcement learning, sequence modeling, speech recognition, metric and embedding learning, transfer learning, and federated learning.

In addition to VQAs being applied as quantum implementations of classical machine learning paradigms, VQAs may conversely also benefit from various machine learning paradigms themselves, one of the most popular being Reinforcement Learning (RL). RL has been utilized to assist with several problems in quantum information processing, such as decoding errors, quantum feedback, and adaptive code design. While such schemes have so far envisioned a classical computer as the RL co-processor, implementing “quantum” RL using quantum computers has been shown to make the decision-making process of RL agents quadratically faster than on classical hardware.

In this work, the authors present a new quantum architecture search framework, powered by deep reinforcement learning (DRL), in which an RL agent interacts with a quantum computer or quantum simulator. The objective is to investigate the potential of training an RL agent to search for a quantum circuit architecture that generates a desired quantum state.

The proposed framework consists of two major components: a quantum computer or quantum simulator, and an RL agent hosted on a classical computer that interacts with it. In each time step, the RL agent chooses an action from the possible set of actions, consisting of different quantum operations (one- and two-qubit gates), thereby updating the quantum circuit. After each update, the quantum simulator executes the new circuit and calculates its fidelity to the given target state. The fidelity is then used to determine the reward sent back to the agent: positive if the fidelity reaches a pre-defined threshold, negative otherwise.
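
A toy single-qubit version of this interaction loop, purely for illustration (the authors use multi-qubit circuits simulated in Qiskit, and the exact reward values below are assumptions):

```python
import numpy as np

GATES = {
    "H": np.array([[1, 1], [1, -1]]) / np.sqrt(2),
    "X": np.array([[0, 1], [1, 0]]),
    "T": np.diag([1, np.exp(1j * np.pi / 4)]),
}

def step(circuit, action, target, threshold=0.99):
    circuit = circuit + [action]                   # the agent's action extends the circuit
    state = np.array([1.0, 0.0], dtype=complex)    # start from |0>
    for g in circuit:                              # simulator executes the updated circuit
        state = GATES[g] @ state
    fidelity = abs(np.vdot(target, state)) ** 2    # fidelity to the target state
    reward = 1.0 if fidelity >= threshold else -0.01   # sign convention assumed for illustration
    done = fidelity >= threshold
    return circuit, fidelity, reward, done
```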

The authors use the set of single-qubit Pauli measurements as the “states”, or observations, which the environment returns to the RL agent. The RL agent is then iteratively updated based on this information. The procedure continues until the agent reaches either the desired threshold or the maximum number of allowed steps. In this work, RL algorithms such as A2C and PPO were used to optimize the agent. The results demonstrate that, given the same neural network architecture, PPO performs significantly better than A2C in terms of convergence speed and stability, in both noise-free and noisy environments. The result is shown to be consistent with the 2-qubit case.

In this work, the simulation of quantum circuits in both noise-free and noisy environments is implemented via IBM's Qiskit software. The efficiency of such an approach for large quantum circuits is quite low, as the complexity scales exponentially with the number of qubits. Theoretically, a quantum circuit may approximate any quantum state (up to an error tolerance) using a finite number of gates, given a universal set of one- and two-qubit quantum gates. Hence, in principle, the RL approach is valid for arbitrarily large qubit numbers but is extremely hard to implement computationally. It would also require thousands of training episodes to verify this kind of experiment on real quantum computers. One can expect significant development in the future when quantum computing resources are more accessible. Finally, another interesting direction is to investigate the quantum architecture search problem with different target quantum states and different noise configurations.
Published in Blog
Enhancing machine learning applications with quantum computing is currently being massively investigated, since it might demonstrate a quantum advantage during the NISQ era. Enhancement is typically pursued by improving the training process of existing classical models or by enhancing inference in graphical models. Another route is the construction of quantum models that generate correlations between variables which are inefficient to represent through classical computation (e.g. quantum neural networks). If the model leverages a quantum circuit that is hard to sample from classically, then there is potential for a quantum advantage.

The main message of this work is to show quantitatively that when classical models are provided with some training data, even if those data were obtained from a quantum model that cannot be computed classically, they can reach similar performance to the quantum model. The authors provide rigorous prediction error bounds for training classical and quantum ML methods based on kernel functions in order to learn quantum mechanical models. Kernel methods not only come with provable guarantees, but are also very flexible in the functions they can learn.

The main contribution of the authors' work lies in the development of quantum kernels and of a guidebook that generates ML problems exhibiting a large separation between quantum and classical models. The prediction error bounds quantify the separation between the prediction errors of quantum and classical ML models for a fixed amount of training data. Typically, the comparison is based on the geometric difference defined by the closest efficient classical ML model; in practice, however, one should consider the geometric difference with respect to a suite of optimized classical ML models. When the geometric difference is small, a classical ML method is guaranteed to provide similar or better prediction performance on the data set, independent of the function values or labels. When the geometric difference is large, there exists a data set that exhibits a large prediction advantage for the quantum ML model.
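
A simplified sketch of the geometric-difference test between a classical kernel matrix Kc and a quantum kernel matrix Kq; as we recall it, the quantity is related to the spectral norm of √Kq Kc⁻¹ √Kq (a regularizer is added here for numerical stability, and the paper's normalization details are omitted):

```python
import numpy as np

def psd_sqrt(K):
    # Matrix square root of a positive semi-definite kernel matrix
    vals, vecs = np.linalg.eigh(K)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def geometric_difference(Kc, Kq, reg=1e-7):
    root_q = psd_sqrt(Kq)
    middle = root_q @ np.linalg.inv(Kc + reg * np.eye(len(Kc))) @ root_q
    return np.sqrt(np.linalg.norm(middle, ord=2))   # spectral norm
```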

The small geometric difference is a consequence of the exponentially large Hilbert space employed by existing quantum models, in which all inputs end up too far apart from one another. To circumvent this setback, the authors propose an improvement that enlarges the geometric difference by projecting the quantum states embedded from classical data back to an approximate classical representation. This proposal allows for the construction of a data set that demonstrates a large prediction advantage over common classical ML models in numerical experiments with up to 30 qubits. In that way, one can use a small quantum computer to generate efficiently verifiable ML problems that could be challenging for classical ML models.
Published in Blog
There is currently a big effort in proving the existence of a quantum advantage on NISQ devices, with typical applications including machine learning due to its wide applicability. However, an argument has been raised questioning whether the advantage demonstrated in such applications arises from the nature of quantum computing or from the way the data is provided. Indeed, recent works have shown that, when classical algorithms have analogous sampling access to the data, they can reach similar performance to their quantum counterparts. Therefore, a fair study of quantum advantage in such a setting should assume only classical access to data for both the classical and the quantum algorithm.

Most quantum algorithms being explored at the moment are of a variational nature, where a circuit is selected from a parametrized circuit family via classical optimization. However, such algorithms are heuristic and there is no formal evidence that they exhibit a genuine quantum advantage. In their work, the authors describe an exponential quantum speedup via the use of a quantum-enhanced feature space, where each data point is mapped non-linearly to a quantum state and then classified by a linear classifier in the high-dimensional Hilbert space. The classification is achieved through a support vector machine (SVM), whose kernel matrix is obtained by measuring the pairwise inner products of the feature states, a procedure known as quantum kernel estimation (QKE). This kernel matrix is then given to a classical optimizer that efficiently finds the linear classifier optimally separating the training data in feature space by running a convex quadratic program.
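
Schematically, the QKE-to-SVM interface looks as follows: each Gram-matrix entry is estimated from a finite number of measurement shots and the resulting matrix is handed to a classical SVM as a precomputed kernel. This is a sketch under those assumptions, with the sampling noise modeled binomially rather than obtained from hardware.

```python
import numpy as np
from sklearn.svm import SVC

def estimated_kernel_entry(exact_overlap, shots=1024, rng=np.random.default_rng(0)):
    # exact_overlap stands in for |<0|U(x_j)^dag U(x_i)|0>|^2; on hardware this would
    # be the observed frequency of the all-zeros outcome over `shots` repetitions.
    return rng.binomial(shots, exact_overlap) / shots

def train_qke_svm(K_train, labels):
    # K_train: n x n matrix of (noisy) kernel estimates between training points
    clf = SVC(kernel="precomputed", C=1.0)
    return clf.fit(K_train, labels)
```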

The advantage of such a quantum learner comes from the ability to recognize classically intractable complex patterns using a feature map. The implementation combines ideas from the generalization of soft-margin classifiers with rigorous bounds on the misclassification error of the SVM-QKE algorithm, thereby providing a quantum advantage for the quantum classifier with guaranteed high accuracy. Furthermore, it is proven that this SVM classifier can learn the optimal separator in the exponentially large feature space, while remaining robust against additive sampling errors thanks to the error-mitigation techniques used.

Finally, the classification problem showing the exponential quantum speed-up was based on the discrete logarithm problem (DLP) and it is proven that no efficient classical algorithm can achieve an accuracy that is inverse-polynomially better than random guessing, assuming the widely-believed classical hardness of DLP.

The results from this paper are relevant to a broader class of quantum machine learning algorithms that exploit feature maps and aim to avoid the input-output problem. Although this particular problem is not practically motivated, these results set a positive theoretical foundation for the search for practical quantum advantage in machine learning.
Published in Blog
A prospective application of quantum computing is solving quantum chemistry problems; however, obtaining exact solutions is difficult due to the lack of a general method for obtaining them. Typically the solution lies in finding the ground state energy, even though the energy is not descriptive enough to fully characterize all desired properties of a system. In order to find such properties, many measurements of the wavefunction are required. These measurements are expensive, because the wavefunction cannot be copied and must often be re-prepared before a second measurement is performed. Finding the full wavefunction would require exponentially many measurements, so one option is to encode many solutions into one measurement by using a machine-learned (ML) model. Training the ML model requires finding exact quantities at several different external potentials. Alternatively, one can use density functional theory (DFT) to replace the wavefunction with the one-body density, n(r), which has fewer variables. When DFT is used, the universal functional, F[n], must be found instead of the Hamiltonian. The quantities required for a classical user to find self-consistent solutions are the exact functional (determining the energy) and the functional derivative. So, in addition to finding F[n], one must also find some other quantity such as the density, n(r), or the Kohn-Sham (KS) potential, v_s(r). With these components, one can fully characterize a quantum ground state and solve for other measurable quantities.

The authors propose an algorithm that finds the ML model for F[n] on the quantum computer, provided a ground-state wavefunction is available. The algorithm leaves the wavefunction largely undisturbed, so it can be used as the starting state for another system, greatly reducing the pre-factor required to solve other systems, and it uses a quantum counting algorithm to extract descriptive quantities such as the density. Most of the algorithm is kept entirely on the quantum computer to motivate future improvements in speed, but the counting algorithm does allow information to be output classically. Moreover, the authors demonstrate that the exact Kohn-Sham potential can be obtained faster with a gradient evaluated on a cost function for the Kohn-Sham system.

The novelty of the algorithm proposed in this work lies in limiting the number of measurements and re-preparations of the wavefunction, especially in the case of time-dependent quantities. Since no algorithm for the general case is exponentially better than the proposed one, limiting the number of measurements and re-preparations of the wavefunction is the best one can achieve. The proposed algorithm is a combination of several known algorithms, including quantum phase estimation, quantum amplitude estimation, and quantum gradient methods, that are used iteratively to train a machine-learned model.
Published in Blog

Qu&Co comments on this publication:

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine learning problems that involve a large classical data set. However, existing quantum algorithms use the trivial methods of turning large classical datasets into either quantum oracles or quantum states, which are so expensive that they negate any possible quantum advantage. Such quantum algorithms focus on problems in which the classical runtime scales rapidly with the input size, perhaps exponentially. To achieve a quantum speedup with algorithms like Grover search, a “quantum RAM” has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and creating one might encounter the same challenges that quantum computing hardware faces, they have the potential to provide significant speedups to applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model. The coreset can be constructed either by a classical computer alone or by a combination of classical and quantum computers utilizing quantum measurements.
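
For intuition, one standard classical coreset construction (a "lightweight" coreset for k-means-style losses) is sketched below; the paper's constructions and its quantum-assisted variant differ in detail, so this is only meant to illustrate the idea of a weighted subset that approximately preserves the loss.

```python
import numpy as np

def lightweight_coreset(X, m, rng=np.random.default_rng(0)):
    # Importance-sample m points: half the mass uniformly, half proportional to
    # the squared distance from the data mean, then weight by inverse probability.
    dist2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / len(X) + 0.5 * dist2 / dist2.sum()       # sampling distribution
    idx = rng.choice(len(X), size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])                        # importance weights
    return X[idx], weights
```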

The main message of this work is that, in order to avoid losing quantum speedup in ‘big-data’ applications, techniques such as data reduction are required so that the time for loading and storing the data set is limited. Also, non-fault-tolerant quantum algorithms should be designed in a way that does not require an excessive number of gates, so that the algorithm can run before the qubits lose their coherence and invalidate the result. The goal of the paper is to draw attention to problems that arise in this setting, such as testing for quantum advantage when data reduction is used, to explore general data reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Published in Blog