
Quantum-computing related developments

On this page we post about interesting quantum-computing-related research and news that we are following.

Small quantum computers and large classical data sets

Quantum machine learning

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine learning problems that involve large classical data sets. However, existing quantum algorithms use the trivial approach of turning large classical data sets into either quantum oracles or quantum states, which is so expensive that it negates any possible quantum advantage. Such quantum algorithms focus on problems in which classical runtime scales rapidly with the input size, perhaps exponentially. To achieve quantum speedup with algorithms like Grover search, a “quantum RAM” has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist and building one might encounter the same challenges that quantum computer hardware faces, it has the potential to provide significant speedups for applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model. The coreset can be constructed either by a classical computer alone or by a classical and a quantum computer together, utilizing quantum measurements.
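
To make the coreset idea concrete, here is a minimal classical sketch of a lightweight importance-sampling coreset for a k-means-style loss. The sampling distribution, function names and toy data are our own illustrative choices, not the construction used in the paper.

```python
import numpy as np

def lightweight_coreset(X, m, rng=np.random.default_rng(0)):
    """Sample a weighted subset of X (a 'coreset') of size m.

    Points are sampled with probability proportional to a uniform term plus
    their squared distance from the data mean, and weighted by the inverse of
    their sampling probability, so that the weighted coreset loss approximates
    the loss on the full data set.
    """
    n = len(X)
    dist_sq = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    p = 0.5 / n + 0.5 * dist_sq / dist_sq.sum()   # sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=p)
    weights = 1.0 / (m * p[idx])                  # importance weights
    return X[idx], weights

def kmeans_loss(X, centers, w=None):
    """(Weighted) k-means loss: sum of squared distances to the nearest center."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2).min(axis=1)
    return np.sum(d2 if w is None else w * d2)

# Toy check: the coreset loss tracks the full-data loss for one candidate model from Y.
X = np.random.default_rng(1).normal(size=(10_000, 2))
centers = np.array([[0.0, 0.0], [2.0, 2.0]])
C, w = lightweight_coreset(X, m=200)
print(kmeans_loss(X, centers), kmeans_loss(C, centers, w))
```

The two printed losses should be close, which is exactly the property the hybrid algorithms rely on when they evaluate models on the coreset instead of on the full data set.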

The main message of this work is that, in order to avoid losing quantum speedup in ‘big-data’ applications, techniques such as data reduction are required so that the time spent loading and storing the data set stays limited. Also, non-fault-tolerant quantum algorithms should be designed so that they do not require an excessive number of gates, allowing the algorithm to finish before the qubits lose their coherence and invalidate the result. The goal of the paper is to draw attention to the problems that arise from this approach, such as testing for quantum advantage when data reduction is used, exploring general data reduction techniques and investigating new hybrid classical-quantum algorithms.

Quantum-assisted unsupervised learning (cluster analysis)

Quantum machine learning

Clustering is a form of unsupervised machine learning in which instances are organized into groups whose members share similarities. The assignments are, in contrast to classification, not known a priori, but are generated by the algorithm. In this paper, Neukart et al. present an algorithm for quantum-assisted cluster analysis (QACA) that makes use of the topological properties of a D-Wave 2000Q quantum processing unit (QPU). They explain how the problem can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that the introduced quantum-assisted clustering algorithm is, in terms of accuracy, equivalent to commonly used classical clustering algorithms.
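
As an illustration of how a clustering instance can be written as a QUBO, here is a toy formulation (our own, not necessarily the one used by Neukart et al.): one-hot binary variables assign each point to a cluster, same-cluster pairs pay their squared distance, and a penalty enforces the one-cluster-per-point constraint. Brute force stands in for the QPU on this tiny instance.

```python
import itertools
import numpy as np

def clustering_qubo(X, k, penalty=10.0):
    """Build a QUBO for assigning n points to k clusters.

    Binary variable x[i*k + c] = 1 means point i is assigned to cluster c.
    The objective sums pairwise squared distances between points placed in
    the same cluster, plus a penalty enforcing exactly one cluster per point.
    """
    n = len(X)
    N = n * k
    Q = np.zeros((N, N))
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    for i, j in itertools.combinations(range(n), 2):
        for c in range(k):
            Q[i * k + c, j * k + c] += d2[i, j]          # same-cluster cost
    for i in range(n):
        for c in range(k):
            Q[i * k + c, i * k + c] -= penalty           # reward picking one cluster
            for c2 in range(c + 1, k):
                Q[i * k + c, i * k + c2] += 2 * penalty  # punish picking two
    return Q

def brute_force_minimum(Q):
    """Exhaustively minimize x^T Q x over binary vectors (tiny instances only)."""
    N = Q.shape[0]
    return min((np.array(bits) @ Q @ np.array(bits), bits)
               for bits in itertools.product([0, 1], repeat=N))

X = np.array([[0.0, 0.0], [0.1, 0.0], [2.0, 2.0], [2.1, 2.0]])
energy, assignment = brute_force_minimum(clustering_qubo(X, k=2))
print(energy, assignment)
```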

Classical-quantum hybrid algorithm for machine learning with NISQ devices

Quantum machine learning

Quantum machine learning (QML) algorithms based on the Harrow-Hassidim-Lloyd (HHL) algorithm rely on quantum phase estimation, which requires high circuit depth. To allow QML on current noisy intermediate-scale quantum (NISQ) devices, classical-quantum hybrid algorithms have been suggested that apply low-depth circuits, such as quantum variational eigensolvers and quantum approximate optimization. Such hybrid algorithms typically divide the ML problem into two parts, each performed either classically or on a quantum computer. In this paper, Mitarai et al. present a new hybrid framework, called quantum circuit learning (QCL), which is easily realizable on current NISQ devices. Under QCL, a circuit learns from input data by iteratively tuning the circuit parameters to give the desired output. They show that QCL is able to learn nonlinear functions and perform simple classification tasks. They also show that a 6-qubit circuit is capable of learning the dynamics of a 10-spin system with a fully connected Ising Hamiltonian, implying that QCL could be well suited for learning complex many-body systems.
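
The core loop QCL relies on can be sketched with an exactly simulated one-qubit circuit: encode the input with a rotation, apply a trainable rotation, measure an expectation value, and update the parameter with a classical optimizer. The circuit, target function and optimizer below are deliberately minimal and our own choice, not the authors' implementation.

```python
import numpy as np

def ry(theta):
    """Single-qubit rotation about the Y axis."""
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    """Tiny 'quantum circuit': encode x with RY(x), apply trainable RY(theta),
    return the expectation value of Pauli-Z (simulated exactly with numpy)."""
    state = ry(theta) @ ry(x) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return state @ z @ state

# Target function and training data (something this one-parameter circuit can represent).
xs = np.linspace(-1, 1, 20)
ys = np.cos(xs + 0.8)

# Classical optimization loop: finite-difference gradient descent on theta.
theta, lr, eps = 0.0, 0.5, 1e-4
for step in range(200):
    loss = lambda t: np.mean([(predict(x, t) - y) ** 2 for x, y in zip(xs, ys)])
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)
    theta -= lr * grad

print(f"learned theta = {theta:.3f}, loss = {loss(theta):.2e}")
```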

Quantum circuit design for training perceptron models

Quantum machine learning

The perceptron algorithm dates back to the late 1950s and is an algorithm for supervised learning of binary classifiers. In a 2016 paper, Wiebe et al. proposed a quantum algorithm (based on Grover’s quantum-search approach) that can quadratically speed up the training of a perceptron. In this paper, Zheng et al. describe their design for a quantum circuit implementing the training algorithm of Wiebe et al. They also analyze the resource requirements (qubits and gates) and demonstrate the feasibility of their quantum circuit by testing it on the ibmqx5 (a 16-qubit universal gate quantum processor developed by IBM).
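
For context, here is the classical perceptron training loop whose training the Grover-based approach speeds up; this is a minimal sketch of the classical baseline, not the quantum circuit of Zheng et al.

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classical online perceptron training.
    X: (n, d) feature matrix, y: labels in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        updated = False
        for xi, yi in zip(X, y):            # scan for a misclassified sample
            if yi * (w @ xi + b) <= 0:
                w += yi * xi                # standard perceptron update
                b += yi
                updated = True
        if not updated:                     # all samples classified correctly
            break
    return w, b

# Toy linearly separable data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 1, (20, 2)), rng.normal(2, 1, (20, 2))])
y = np.array([-1] * 20 + [1] * 20)
w, b = train_perceptron(X, y)
print("weights:", w, "bias:", b)
```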

Real-time quantum change point detection

Quantum machine learning

Change point detection is a vast branch of statistical analysis that develops techniques for uncovering abrupt changes in the underlying probability distribution of streaming data. This can be done offline (using time-series data) or online (processing data sequentially). The latter enables real-time decision making, requires less memory and is most relevant in machine learning. In this paper, Sentis et al. discuss online detection strategies for identifying a change point in a stream of quantum particles allegedly prepared in identical states. They show that the change point can be identified without error via sequential local measurements, requiring only one classical bit of memory between subsequent measurements.
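
As a classical analogue of online change point detection, here is a minimal CUSUM sketch: it processes samples one at a time, keeps a single running statistic, and flags the first time that statistic crosses a threshold. This is purely illustrative and is not the quantum measurement protocol of the paper.

```python
import numpy as np

def cusum_change_point(stream, mu0, mu1, threshold=5.0):
    """Online CUSUM detector for a mean shift in unit-variance Gaussian data.

    Keeps only one running statistic (the accumulated log-likelihood ratio)
    between samples, and flags the first time it exceeds the threshold.
    """
    s = 0.0
    for t, x in enumerate(stream):
        # log-likelihood ratio of 'after change' vs 'before change' for sample x
        llr = (mu1 - mu0) * (x - (mu0 + mu1) / 2.0)
        s = max(0.0, s + llr)      # resetting at 0 keeps only evidence of a change
        if s > threshold:
            return t               # change point flagged at time t
    return None

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 50), rng.normal(1.5, 1.0, 50)])
print("change detected at sample:", cusum_change_point(stream, mu0=0.0, mu1=1.5))
```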

Quantum-trained Boltzmann machine to forecast election results

Quantum machine learning

In the 2016 US presidential elections, many of the professional polling groups overestimated the probability of a Clinton victory. Multiple post-election analyses concluded that a leading cause of error in their forecast models was a lack of correlation between predictions for individual states. Uncorrelated models, though much simpler to build and train, cannot capture the more complex behavior of a fully connected system. Accurate, reliable sampling from fully connected graphs with arbitrary correlations quickly becomes classically intractable as the graph size grows. In this paper, Henderson et al. show an initial implementation of a quantum-trained Boltzmann machine used for sampling from correlated systems. They show that such a quantum-trained machine is able to generate election forecasts with similar structural properties and outcomes as a best-in-class modeling group.
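
To illustrate what sampling from a fully connected, correlated model involves, here is a classical Gibbs-sampling sketch for a small Ising-type Boltzmann distribution; this brute-force Markov chain is the part that becomes intractable as the graph grows. The couplings and code are our own toy example, not the authors' quantum-trained model.

```python
import numpy as np

def gibbs_sample_ising(J, h, n_steps=2000, rng=np.random.default_rng(0)):
    """Gibbs sampling from a fully connected Ising (Boltzmann) distribution
    p(s) ∝ exp(s^T J s / 2 + h^T s), with spins s_i in {-1, +1}.

    The samples' pairwise correlations reflect the couplings J, which is the
    kind of correlated structure the forecast models need to capture.
    """
    n = len(h)
    s = rng.choice([-1, 1], size=n)
    samples = []
    for step in range(n_steps):
        i = rng.integers(n)
        field = J[i] @ s - J[i, i] * s[i] + h[i]      # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field))     # P(s_i = +1 | rest)
        s[i] = 1 if rng.random() < p_up else -1
        samples.append(s.copy())
    return np.array(samples)

# Two strongly coupled variables and one independent one.
J = np.array([[0.0, 1.2, 0.0],
              [1.2, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
h = np.zeros(3)
samples = gibbs_sample_ising(J, h)
print("pairwise correlations:\n", np.corrcoef(samples.T))
```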

Unsupervised learning with Rigetti 19Q

Quantum machine learning

Work on practical applications for current noisy and small-scale quantum-computing hardware has focused mostly on short-depth parameterized quantum circuits used as a subroutine embedded in a larger classical optimization loop. In this paper, Otterbach et al. describe experiments with unsupervised machine learning (specifically clustering), which they translate into a combinatorial optimization problem solved by the quantum approximate optimization algorithm (QAOA) running on the Rigetti 19Q (a 19-qubit gate-based processor). They show that their implementation finds the optimal solution for this task even with relatively noisy gates.
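
One common way to translate 2-clustering into a combinatorial optimization problem is weighted MaxCut, where edge weights are pairwise distances and the maximum cut separates dissimilar points. The sketch below is our own illustration of that translation, with brute force standing in for the QAOA subroutine.

```python
import itertools
import numpy as np

def maxcut_clustering(X):
    """Cast 2-clustering as weighted MaxCut: edge weights are pairwise
    distances, so the cut maximizing the total weight of separated pairs
    splits the data into two dissimilar groups. Brute force stands in for
    the quantum optimizer and is feasible only for tiny instances."""
    n = len(X)
    W = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2))  # distances
    best_value, best_labels = -1.0, None
    for labels in itertools.product([0, 1], repeat=n):
        z = np.array(labels)
        cut = sum(W[i, j] for i in range(n) for j in range(i + 1, n)
                  if z[i] != z[j])                # total weight crossing the cut
        if cut > best_value:
            best_value, best_labels = cut, z
    return best_labels, best_value

X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9], [0.1, 0.2]])
labels, value = maxcut_clustering(X)
print("cluster labels:", labels, "cut value:", round(value, 2))
```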

Quantum-enhanced reinforcement learning with super-polynomial and exponential speedup

Quantum machine learning

For (un)supervised learning, with applications in data mining, prediction and classification, quite a few quantum algorithms have already been developed that show potential for (super-)polynomial speed-ups. Less is known about the benefits quantum computing can bring to reinforcement learning (RL), which has applications in AI and autonomous driving, among others. In RL, a learning agent perceives (aspects of) the states of a task environment and influences subsequent states by performing actions. Certain state-action-state transitions are rewarding, and successful learning agents learn optimal behavior. In this paper, Dunjko et al. construct quantum-enhanced reinforcement learners that learn super-polynomially, and even exponentially, faster than any classical reinforcement learning model.
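
For readers unfamiliar with the RL setup described above, here is a minimal classical sketch of the agent-environment loop (states, actions, rewarded transitions) using tabular Q-learning on a toy environment; it is our own illustration, not the quantum-enhanced construction of Dunjko et al.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy task environment: 2 states, 2 actions. Taking action 1 in state 1
# is the rewarding state-action-state transition the agent must discover.
def environment_step(state, action):
    next_state = action                     # the chosen action becomes the next state
    reward = 1.0 if (state == 1 and action == 1) else 0.0
    return next_state, reward

# Classical Q-learning agent: perceives the state, acts, and updates its
# value estimates from the observed rewards (trial and error).
Q = np.zeros((2, 2))
alpha, gamma, epsilon = 0.1, 0.9, 0.1
state = 0
for step in range(5000):
    if rng.random() < epsilon:              # explore
        action = int(rng.integers(2))
    else:                                   # exploit current knowledge
        action = int(np.argmax(Q[state]))
    next_state, reward = environment_step(state, action)
    Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
    state = next_state

print("learned state-action values:\n", Q.round(2))
```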

Generalized quantum reinforcement learning protocol

Quantum machine learning

Reinforcement learning differs from supervised and unsupervised learning in that it takes into account a scalar parameter (the reward) to evaluate the input-output relation in a trial-and-error way. In this paper, Cardenas-Lopez et al. propose a protocol to perform generalized quantum reinforcement learning. They consider diverse possible scenarios for an agent, an environment, and a register that connects them, involving multi-qubit and multi-level systems as well as open-system dynamics, and they propose possible implementations of this protocol in trapped ions and superconducting circuits.

Survey of quantum machine learning

Quantum machine learning

In this paper, Dunjko et al. provide a comprehensive review of the current (September 2017) state of quantum machine learning, including quantum computing providing speed-ups for or enhancing classical ML, and classical ML being used for quantum control or to design quantum circuits.