Displaying items by tag: Machine learning

Qu&Co comments on this publication:

Clustering is a form of unsupervised machine learning, where instances are organized into groups whose members share similarities. The assignments are, in contrast to classification, not known a priori, but generated by the algorithm. In this paper, Neukart et al. present an algorithm for quantum-assisted cluster analysis (QACA) that makes use of the topological properties of a D-Wave 2000Q quantum processing unit (QPU). They explain how the clustering problem can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that the introduced quantum-assisted clustering algorithm is, in terms of accuracy, equivalent to commonly used classical clustering algorithms.
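
As a rough illustration of such a QUBO formulation (not the paper's exact construction), one can introduce a one-hot binary variable per point-cluster pair, penalize large intra-cluster distances, and add a penalty term enforcing one cluster per point. In the toy sketch below the data and penalty weight are made up, and brute force stands in for the annealer:

```python
import itertools
import numpy as np

# Toy data: two well-separated blobs in the plane
points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
                   [3.0, 3.0], [3.1, 2.9], [2.9, 3.1]])
N, K = len(points), 2
dist = np.linalg.norm(points[:, None] - points[None, :], axis=-1)

# Binary variable x[i, k] = 1 if point i is assigned to cluster k.
# QUBO objective: minimize intra-cluster distances, plus a penalty that
# enforces exactly one cluster per point (one-hot constraint).
penalty = 10.0 * dist.max()

def qubo_energy(x):
    intra = sum(dist[i, j] * x[i, k] * x[j, k]
                for i in range(N) for j in range(i + 1, N) for k in range(K))
    onehot = sum((x[i].sum() - 1) ** 2 for i in range(N))
    return intra + penalty * onehot

# Brute-force minimization stands in for the annealer on this toy instance.
best = min((np.array(bits).reshape(N, K) for bits in
            itertools.product([0, 1], repeat=N * K)),
           key=qubo_energy)
print("cluster of each point:", best.argmax(axis=1))
```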

Published in Blog

Qu&Co comments on this publication:

In this paper, Dunjko et al. provide a comprehensive review of the current (September 2017) state of quantum machine learning, covering both quantum computing providing speed-ups for, or otherwise enhancing, classical ML, and classical ML being used for quantum control or for designing quantum circuits.

Published in Blog
Tagged under
02 March 2018

Quantum circuit learning

Qu&Co comments on this publication:

Quantum machine learning (QML) algorithms based on the Harrow-Hassidim-Lloyd (HHL) algorithm rely on quantum phase estimation, which requires high circuit depth. To allow QML on current noisy intermediate-scale quantum (NISQ) devices, classical-quantum hybrid algorithms have been suggested that apply low-depth circuits, such as variational quantum eigensolvers and quantum approximate optimization. Such hybrid algorithms typically divide the ML problem into two parts, each performed either classically or on a quantum computer. In this paper, Mitarai et al. present a new hybrid framework, called quantum circuit learning (QCL), which is easily realizable on current NISQ devices. Under QCL, a low-depth parameterized circuit is provided with input data, and its parameters are iteratively tuned so that the circuit produces the desired output. They show that QCL is able to learn nonlinear functions and perform simple classification tasks. They also show that a 6-qubit circuit is capable of learning the dynamics of a 10-spin system with a fully connected Ising Hamiltonian, implying that QCL could be well suited for learning complex many-body systems.
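
A minimal classical simulation of this kind of hybrid loop, assuming a single-qubit circuit, an angle encoding of the input, a Z-expectation readout and finite-difference gradient descent (all of these choices are illustrative simplifications, not the circuits used in the paper):

```python
import numpy as np

# Toy single-qubit version of the hybrid loop: encode the input x as a rotation
# angle, apply a trainable rotation, measure <Z>, and let a classical optimizer
# tune the parameter.
def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]])

def predict(x, theta):
    state = ry(theta) @ ry(np.arcsin(x)) @ np.array([1.0, 0.0])  # start in |0>
    return state[0] ** 2 - state[1] ** 2                         # <Z> expectation

xs = np.linspace(-0.9, 0.9, 20)
hidden = 1.3                                    # target generated by the same circuit,
target = np.array([predict(x, hidden) for x in xs])  # so the loop can recover it

def loss(theta):
    return np.mean((np.array([predict(x, theta) for x in xs]) - target) ** 2)

theta, lr, eps = 0.0, 0.5, 1e-4
for _ in range(500):
    grad = (loss(theta + eps) - loss(theta - eps)) / (2 * eps)   # finite difference
    theta -= lr * grad

print(f"recovered theta = {theta:.3f} (hidden value was {hidden})")
```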

Published in Blog

Qu&Co comments on this publication:

The perceptron algorithm dates back to the late 1950s and is an algorithm for supervised learning of binary classifiers. In a 2016 paper, Wiebe et al. proposed a quantum algorithm (based on Grover's quantum-search approach) which can quadratically speed up the training of a perceptron. In this paper, Zheng et al. describe their design for a quantum circuit implementing the training algorithm of Wiebe et al. They also analyze the resource requirements (qubits and gates) and demonstrate the feasibility of their quantum circuit by testing it on the ibmqx5 (a 16-qubit universal gate-based quantum processor developed by IBM).
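
For reference, the classical loop being accelerated looks roughly like the sketch below: each update must first locate a misclassified training point, which classically costs time linear in the number of points and is where Grover-style search gives its quadratic advantage (the data and margin filter here are invented for illustration):

```python
import numpy as np

# Standard classical perceptron training; the quantum algorithm of Wiebe et al.
# uses Grover-style search to find misclassified examples quadratically faster
# in the number of training points (this sketch is purely classical).
rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0])
X = rng.normal(size=(500, 2))
X = X[np.abs(X @ w_true) > 0.5][:200]        # keep a margin so training converges
y = np.sign(X @ w_true)                      # linearly separable labels

w = np.zeros(2)
for _ in range(1000):
    misclassified = [(x, t) for x, t in zip(X, y) if np.sign(x @ w) != t]
    if not misclassified:                    # converged: every point classified
        break
    x, t = misclassified[0]                  # classically, finding this costs O(N)
    w += t * x                               # perceptron update rule
print("learned weights:", w)
```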

Published in Blog
22 September 2017

Quantum reinforcement learning

Qu&Co comments on this publication:

Reinforcement learning differs from supervised and unsupervised learning in that it takes into account a scalar parameter (the reward) to evaluate the input-output relation in a trial-and-error way. In this paper, Cardenas-Lopez et al. propose a protocol to perform generalized quantum reinforcement learning. They consider diverse possible scenarios for an agent, an environment, and a register that connects them, involving multi-qubit and multi-level systems as well as open-system dynamics, and they propose possible implementations of this protocol in trapped ions and superconducting circuits.
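
To make the agent-environment-reward loop concrete, here is a purely classical analogue (tabular Q-learning on a tiny chain environment with a random exploration policy); it only illustrates what reinforcement learning is, not the quantum protocol proposed in the paper:

```python
import numpy as np

# Classical analogue of the agent-environment-reward loop: tabular Q-learning
# on a five-state chain, where only reaching the last state gives a reward.
n_states, n_actions = 5, 2                 # actions: 0 = step left, 1 = step right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9
rng = np.random.default_rng(1)

for episode in range(300):
    s = 0
    while s != n_states - 1:               # reaching the last state ends the episode
        a = rng.integers(n_actions)        # explore by acting at random
        s_next = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
        reward = 1.0 if s_next == n_states - 1 else 0.0
        # trial-and-error update: move Q(s, a) toward reward + discounted future value
        Q[s, a] += alpha * (reward + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print("greedy action per state:", Q[:-1].argmax(axis=1))   # expect all 1 (move right)
```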

Published in Blog
Tagged under

Qu&Co comments on this publication:

For (un)supervised learning, with applications in data mining, prediction and classification, quite a few quantum algorithms have already been developed that show potential for (super-)polynomial speed-ups. Less is known about the benefits quantum computing can bring to reinforcement learning (RL), which has applications in, among others, AI and autonomous driving. In RL, a learning agent perceives (aspects of) the states of a task environment and influences subsequent states by performing actions. Certain state-action-state transitions are rewarded, and successful learning agents learn optimal behavior. In this paper, Dunjko et al. construct quantum-enhanced reinforcement learners which learn super-polynomially, and even exponentially, faster than any classical reinforcement learning model.

Published in Blog
Tagged under
22 September 2016

Quantum recommendation systems

Qu&Co comments on this publication:

A recommendation engine uses past purchases or ratings to construct a (partial) preference matrix, which is used to provide personalized recommendations to individual users. In this paper, Kerenidis et al. present a quantum recommendation system which updates the partial preference matrix each time new data comes in and can, based on that matrix, provide a recommendation in time polylogarithmic in the matrix dimensions, exponentially faster than classical recommendation systems.
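
The key structural assumption is that the preference matrix is close to low rank. The classical sketch below reconstructs hidden preferences from a partially observed matrix with a rank-k SVD and recommends the highest-scoring unseen item; the quantum algorithm instead samples such a recommendation in polylogarithmic time (the data, fill rate and rank are invented for illustration):

```python
import numpy as np

# Classical picture of the low-rank assumption behind the algorithm: reconstruct
# a user's hidden preferences from a partially observed matrix via a rank-k SVD.
rng = np.random.default_rng(0)
users, items, rank = 50, 40, 3
P = rng.random((users, rank)) @ rng.random((rank, items))   # true low-rank preferences

mask = rng.random(P.shape) < 0.3          # only 30% of entries are observed
observed = np.where(mask, P, 0.0)

U, s, Vt = np.linalg.svd(observed / 0.3, full_matrices=False)  # rescale for missing data
approx = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank]           # rank-k reconstruction

user = 7
unseen = ~mask[user]
recommendation = np.argmax(np.where(unseen, approx[user], -np.inf))
print("recommend item", recommendation, "for user", user)
```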

Published in Blog
Tagged under

Qu&Co comments on this publication:

Practical work on applications for current noisy and small quantum-computing hardware has focused mostly on short-depth parameterized quantum circuits used as a subroutine embedded in a larger classical optimization loop. In this paper, Otterbach et al. describe experiments with unsupervised machine learning (specifically clustering), which they translate into a combinatorial optimization problem solved by the quantum approximate optimization algorithm (QAOA) running on the Rigetti 19Q (a 19-qubit gate-based processor). They show that their implementation finds the optimal solution to this task even with relatively noisy gates.
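
The translation can be pictured as weighted MaxCut: dissimilar points should end up on opposite sides of the cut, i.e. in different clusters. In the toy sketch below brute force stands in for the QAOA run on the quantum chip (the data and the use of Euclidean distances as edge weights are illustrative assumptions):

```python
import itertools
import numpy as np

# 2-clustering as weighted MaxCut: maximize the total distance between points
# that end up in different clusters.
points = np.array([[0.0, 0.0], [0.2, 0.1], [0.1, 0.3],
                   [2.0, 2.0], [2.1, 1.9], [1.9, 2.2]])
w = np.linalg.norm(points[:, None] - points[None, :], axis=-1)  # edge weights

def cut_value(assign):
    return sum(w[i, j] for i in range(len(points)) for j in range(i + 1, len(points))
               if assign[i] != assign[j])

best = max(itertools.product([0, 1], repeat=len(points)), key=cut_value)
print("cluster labels:", best)
```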

Published in Blog

Qu&Co comments on this publication:

In the 2016 US presidential elections, many of the professional polling groups overestimated the probability of a Clinton victory. Multiple post-election analyses concluded that a leading cause of error in their forecast models was a lack of correlation between predictions for individual states. Uncorrelated models, though much simpler to build and train, cannot capture the more complex behavior of a fully connected system, and accurate, reliable sampling from fully connected graphs with arbitrary correlations quickly becomes classically intractable as the graph size grows. In this paper, Henderson et al. show an initial implementation of a quantum-trained Boltzmann machine used for sampling from correlated systems. They show that such a quantum-trained machine is able to generate election forecasts with structural properties and outcomes similar to those of a best-in-class modeling group.
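
A Boltzmann machine over binary outcomes is essentially an Ising model, and drawing correlated samples from it is the hard step. The purely classical sketch below uses Gibbs sampling on a small, fully connected Ising model; the couplings are random stand-ins rather than anything trained on polling data, and it is this kind of sampling that the quantum hardware is meant to take over as the model grows:

```python
import numpy as np

# Gibbs sampling from a small fully connected Ising model, to illustrate
# correlated sampling of the kind a trained Boltzmann machine performs.
rng = np.random.default_rng(0)
n = 6
J = rng.normal(scale=0.5, size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0)                            # symmetric couplings, no self-coupling
h = rng.normal(scale=0.1, size=n)                 # local biases

s = rng.choice([-1, 1], size=n)
samples = []
for sweep in range(5000):
    for i in range(n):
        field = h[i] + J[i] @ s                   # local field from all other spins
        p_up = 1.0 / (1.0 + np.exp(-2.0 * field)) # conditional probability of +1
        s[i] = 1 if rng.random() < p_up else -1
    if sweep > 1000:                              # discard burn-in
        samples.append(s.copy())

samples = np.array(samples)
print("correlations among the first three spins:\n", np.corrcoef(samples.T)[:3, :3])
```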

Published in Blog

Qu&Co comments on this publication:

Change point detection is a vast branch of statistical analysis that develops techniques for uncovering abrupt changes in the underlying probability distribution of streaming data. This can be done offline (using complete time-series data) or online (processing data sequentially). The latter enables real-time decision making, requires less memory and is most relevant for machine learning. In this paper, Sentis et al. discuss online detection strategies for identifying a change point in a stream of quantum particles allegedly prepared in identical states. They show that the change point can be identified without error via sequential local measurements, requiring only one classical bit of memory between subsequent measurements.
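
For contrast with the quantum setting, a minimal classical online detector (a CUSUM-style statistic) processes the stream in a single pass with constant memory; the Gaussian data and threshold below are invented for illustration and have nothing to do with the quantum measurement strategy analysed in the paper:

```python
import numpy as np

# Classical online change-point detection on a stream whose mean shifts
# partway through, using a one-pass, constant-memory CUSUM statistic.
rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0.0, 1.0, 300), rng.normal(1.0, 1.0, 300)])

threshold, drift = 8.0, 0.25
cusum = 0.0
for t, x in enumerate(stream):           # one pass, constant memory
    cusum = max(0.0, cusum + x - drift)  # accumulate evidence of an upward shift
    if cusum > threshold:
        print("change detected at sample", t)   # true change is at sample 300
        break
```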

Published in Blog