Displaying items by tag: NISQ algorithms

Qu&Co comments on this publication:

In this paper, Matthew Hastings presents a quantum algorithm to exactly solve certain problems in combinatorial optimization, including weighted MAX-2-SAT. While the required time is still exponential, the algorithm provably outperforms Grover's algorithm, assuming a mild condition on the number of low-energy states of the target Hamiltonian.
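As a concrete reminder of what is being optimized, below is a minimal brute-force sketch of the weighted MAX-2-SAT objective; this is the exponential baseline that Grover-type search, and Hastings' improvement, accelerate, not the paper's algorithm itself. The instance and variable names are purely illustrative.

```python
from itertools import product

# Weighted MAX-2-SAT instance: each clause is ((var, negated), (var, negated), weight).
# Variables are indexed 0..n-1; negated=True means the literal is NOT x.
clauses = [
    ((0, False), (1, True), 2.0),   # (x0 OR NOT x1), weight 2
    ((1, False), (2, False), 1.0),  # (x1 OR x2), weight 1
    ((0, True),  (2, True), 3.0),   # (NOT x0 OR NOT x2), weight 3
]

def satisfied_weight(assignment, clauses):
    """Total weight of clauses satisfied by a boolean assignment."""
    total = 0.0
    for (v1, n1), (v2, n2), w in clauses:
        if (assignment[v1] ^ n1) or (assignment[v2] ^ n2):
            total += w
    return total

# Brute force over all 2^n assignments: the exponential search that
# quantum algorithms speed up.
n = 3
best = max(product([False, True], repeat=n),
           key=lambda a: satisfied_weight(a, clauses))
print(best, satisfied_weight(best, clauses))
```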

Published in Blog

Qu&Co comments on this publication:

Quantum computers are envisioned to have significantly higher computational capabilities than their classical counterparts, especially for optimization and machine-learning problems that involve large classical data sets. However, existing quantum algorithms turn large classical datasets into either quantum oracles or quantum states in ways so expensive that they negate any possible quantum advantage. Such quantum algorithms therefore focus on problems in which the classical runtime scales rapidly with input size, perhaps exponentially. To achieve quantum speedup with algorithms like Grover search, a "quantum RAM" has been proposed: a large classical memory that can be queried in superposition. Although quantum RAMs do not yet exist, and building one may face the same challenges as quantum-computer hardware itself, they could provide significant speedups for applications like k-means clustering, logistic regression, zero-sum games and boosting.

This paper introduces hybrid classical-quantum algorithms for problems involving a large classical data set X and a space of models Y, such that a quantum computer has superposition access to Y but not to X. A data-reduction technique is then used to construct a weighted subset of X, called a coreset, that yields approximately the same loss for each model as the full data set. The coreset can be constructed either by a classical computer alone or by a classical computer working together with a quantum computer that performs quantum measurements.
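To give a flavour of what a coreset is, here is a minimal sketch using the simple "lightweight" importance-sampling rule (half uniform, half proportional to squared distance from the data mean). The paper's construction differs in detail, so treat this purely as an illustration of the loss-preserving property:

```python
import numpy as np

def lightweight_coreset(X, m, rng=np.random.default_rng(0)):
    """Sample a weighted coreset of m points from data X (n x d).

    Half the sampling mass is uniform, half proportional to squared
    distance from the dataset mean (an illustrative scheme, not the
    exact construction in the paper).
    """
    n = len(X)
    dist2 = np.sum((X - X.mean(axis=0)) ** 2, axis=1)
    q = 0.5 / n + 0.5 * dist2 / dist2.sum()   # sampling distribution
    idx = rng.choice(n, size=m, replace=True, p=q)
    weights = 1.0 / (m * q[idx])              # unbiased reweighting
    return X[idx], weights

# Any model's loss on the coreset approximates its loss on the full data,
# here checked for the k-means loss of a single candidate center:
X = np.random.default_rng(1).normal(size=(10000, 2))
C, w = lightweight_coreset(X, 100)
center = np.zeros(2)
full_loss = np.sum(np.sum((X - center) ** 2, axis=1))
core_loss = np.sum(w * np.sum((C - center) ** 2, axis=1))
print(full_loss, core_loss)  # close in relative terms
```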

The main message of this work is that to avoid losing quantum speedup in "big-data" applications, data-reduction techniques are needed so that the time spent loading and storing the data set stays limited. Likewise, non-fault-tolerant quantum algorithms should be designed to avoid excessive gate counts, so that they can finish before the qubits lose coherence and invalidate the result. The paper aims to draw attention to the questions this raises, such as how to test for quantum advantage when data reduction is used, to explore general data-reduction techniques, and to investigate new hybrid classical-quantum algorithms.

Published in Blog

Qu&Co comments on this publication:

Clustering is a form of unsupervised machine learning, where instances are organized into groups whose members share similarities. The assignments are, in contrast to classification, not known a priori, but generated by the algorithm. In this paper, Neukart et al. present an algorithm for quantum-assisted cluster analysis (QACA) that makes use of the topological properties of a D-Wave 2000Q quantum processing unit (QPU). They explain how the problem can be expressed as a quadratic unconstrained binary optimization (QUBO) problem, and show that the introduced quantum-assisted clustering algorithm matches commonly used classical clustering algorithms in accuracy.
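To illustrate the kind of mapping involved, below is a minimal sketch of a two-cluster assignment written as a QUBO, solved here by brute force on a toy data set; the actual QACA formulation and the embedding onto the D-Wave 2000Q topology are more involved:

```python
import numpy as np
from itertools import product

# Toy 2-cluster problem: binary variable s_i assigns point i to cluster 0 or 1.
# Minimizing total intra-cluster pairwise distance is a MaxCut-style QUBO
# (a standard formulation; hardware embedding details omitted).
rng = np.random.default_rng(0)
pts = np.vstack([rng.normal(0, 0.3, (4, 2)), rng.normal(3, 0.3, (4, 2))])
n = len(pts)
d = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)

# QUBO matrix Q: energy(s) = s^T Q s (up to a constant), s in {0,1}^n.
# Derived from sum_{i<j} d_ij * [s_i s_j + (1-s_i)(1-s_j)].
Q = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        Q[i, j] += 2 * d[i, j]   # +2 d_ij s_i s_j
        Q[i, i] -= d[i, j]       # -d_ij s_i  (s_i^2 = s_i for binaries)
        Q[j, j] -= d[i, j]       # -d_ij s_j

def energy(s):
    return s @ Q @ s

best = min((np.array(s) for s in product([0, 1], repeat=n)), key=energy)
print(best)  # points 0-3 land in one cluster, 4-7 in the other
```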

Published in Blog
02 March 2018

Quantum circuit learning

Qu&Co comments on this publication:

Quantum machine learning (QML) algorithms based on the Harrow-Hassidim-Lloyd (HHL) algorithm rely on quantum phase estimation, which requires high circuit depth. To allow QML on current noisy intermediate-scale quantum (NISQ) devices, classical-quantum hybrid algorithms have been suggested that apply low-depth circuits, such as quantum variational eigensolvers and quantum approximate optimization. Such hybrid algorithms typically divide the ML problem into two parts, each performed either classically or on a quantum computer. In this paper, Mitarai et al. present a new hybrid framework, called quantum circuit learning (QCL), which is easily realizable on current NISQ devices. In QCL, input data are fed into a circuit whose parameters are iteratively tuned to produce the desired output. They show that QCL is able to learn nonlinear functions and perform simple classification tasks. They also show that a 6-qubit circuit is capable of learning the dynamics of a 10-spin system with a fully connected Ising Hamiltonian, implying that QCL could be well suited for learning complex many-body systems.
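As a toy illustration of the QCL idea, the sketch below trains a single-qubit parameterized circuit to fit a nonlinear function: the input is encoded as a rotation angle, trainable rotations are interleaved with the encodings, and the Z expectation value serves as the model output. This is a deliberate simplification of the paper's multi-qubit circuits, with finite-difference gradients used for brevity:

```python
import numpy as np

def ry(a):
    return np.array([[np.cos(a / 2), -np.sin(a / 2)],
                     [np.sin(a / 2),  np.cos(a / 2)]], dtype=complex)

def rz(a):
    return np.array([[np.exp(-1j * a / 2), 0],
                     [0, np.exp(1j * a / 2)]])

def model(x, theta):
    """Encode x, interleave trainable Rz/Ry layers, return <Z>."""
    psi = np.array([1.0, 0.0], dtype=complex)
    for k in range(0, len(theta), 2):
        psi = ry(theta[k + 1]) @ rz(theta[k]) @ ry(x) @ psi
    return float(np.real(np.conj(psi) @ np.array([psi[0], -psi[1]])))

xs = np.linspace(-1.0, 1.0, 30)
ys = xs ** 2                      # a simple nonlinear target function

theta = np.random.default_rng(0).normal(size=6)

def loss(th):
    return np.mean([(model(x, th) - y) ** 2 for x, y in zip(xs, ys)])

print("initial loss:", loss(theta))
for _ in range(300):              # crude finite-difference gradient descent
    grad = np.array([(loss(theta + 0.01 * e) - loss(theta - 0.01 * e)) / 0.02
                     for e in np.eye(len(theta))])
    theta -= 0.2 * grad
print("trained loss:", loss(theta))  # should drop substantially
```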

Published in Blog

Qu&Co comments on this publication:

The perceptron algorithm dates back to the late 1950s and is an algorithm for supervised learning of binary classifiers. In a 2016 paper, Wiebe et al. proposed a quantum algorithm (based on Grover's quantum-search approach) which can quadratically speed up the training of a perceptron. In this paper, Zheng et al. describe their design for a quantum circuit implementing the training algorithm of Wiebe et al. They also analyze the resource requirements (qubits and gates) and demonstrate the feasibility of their quantum circuit by testing it on the ibmqx5 (a 16-qubit universal gate-based quantum processor developed by IBM).
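For context, here is the classical perceptron training loop; the step a Grover-based approach accelerates is the search for a misclassified training example, which classically costs O(N) per update but only about O(sqrt(N)) quantum queries. The data set below is synthetic and purely illustrative:

```python
import numpy as np

def train_perceptron(X, y, epochs=100):
    """Classical perceptron training on labels y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # The O(N) scan a Grover-style search would accelerate:
        mistakes = [(xi, yi) for xi, yi in zip(X, y)
                    if yi * (w @ xi + b) <= 0]
        if not mistakes:
            break                  # all points classified correctly
        xi, yi = mistakes[0]
        w += yi * xi               # standard perceptron update
        b += yi
    return w, b

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sign(X @ np.array([1.0, -2.0]) + 0.5)  # separable by construction
w, b = train_perceptron(X, y)
print(np.mean(np.sign(X @ w + b) == y))       # training accuracy
```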

Published in Blog

Qu&Co comments on this publication:

Work on practical applications for current noisy and small-scale quantum-computing hardware has focused mostly on short-depth parameterized quantum circuits used as a subroutine embedded in a larger classical optimization loop. In this paper, Otterbach et al. describe experiments with unsupervised machine learning (specifically clustering), which they translate into a combinatorial optimization problem solved by the quantum approximate optimization algorithm (QAOA) running on the Rigetti 19Q (a 19-qubit gate-based processor). They show that their implementation finds the optimal solution for this task even with relatively noisy gates.
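A minimal statevector simulation of depth-1 QAOA on a MaxCut instance (the kind of combinatorial problem the clustering task reduces to) is sketched below. The Rigetti experiment of course ran parametric gates on hardware with a classical optimizer; this toy uses a simple grid search over the two angles:

```python
import numpy as np
from itertools import product

# p=1 QAOA for MaxCut on a 4-node ring, simulated exactly.
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
n = 4
dim = 2 ** n

# Diagonal cost: C(z) = number of cut edges for bitstring z.
cost = np.array([sum((z >> i & 1) != (z >> j & 1) for i, j in edges)
                 for z in range(dim)])

X = np.array([[0, 1], [1, 0]])

def mixer(beta):
    """exp(-i beta X) on every qubit, as a full 2^n x 2^n matrix."""
    rx = np.cos(beta) * np.eye(2) - 1j * np.sin(beta) * X
    U = np.array([[1.0]])
    for _ in range(n):
        U = np.kron(U, rx)
    return U

def qaoa_expectation(gamma, beta):
    psi = np.full(dim, 1 / np.sqrt(dim), dtype=complex)  # |+...+>
    psi = np.exp(-1j * gamma * cost) * psi               # cost layer
    psi = mixer(beta) @ psi                              # mixer layer
    return np.real(np.sum(np.abs(psi) ** 2 * cost))

# Classical outer loop: grid search over the two QAOA angles.
best = max(product(np.linspace(0, np.pi, 30), repeat=2),
           key=lambda gb: qaoa_expectation(*gb))
print(best, qaoa_expectation(*best))
```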

Published in Blog

Qu&Co comments on this publication:

Change point detection is a vast branch of statistical analysis developing techniques for uncovering abrupt changes in the underlying probability distribution of streaming data. This can be done offline (using time-series data) or online (processing data sequentially). The latter enables real-time decision making, requires less memory and is the most relevant setting for machine learning. In this paper, Sentis et al. discuss online detection strategies for identifying a change point in a stream of quantum particles allegedly prepared in identical states. They show that the change point can be identified without error via sequential local measurements, requiring only one classical bit of memory between subsequent measurements.
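As a classical point of comparison, the sketch below implements a CUSUM online change-point detector, which likewise processes the stream sequentially and carries only a single running statistic between samples (the quantum protocol in the paper needs just one classical bit between measurements). The Gaussian stream parameters are illustrative:

```python
import numpy as np

def cusum(stream, mu0, mu1, sigma, threshold=10.0):
    """Return the index at which a mean shift mu0 -> mu1 is declared."""
    s = 0.0
    for t, x in enumerate(stream):
        # Log-likelihood ratio of N(mu1, sigma) vs N(mu0, sigma) for x.
        llr = ((x - mu0) ** 2 - (x - mu1) ** 2) / (2 * sigma ** 2)
        s = max(0.0, s + llr)   # the single statistic carried forward
        if s > threshold:
            return t
    return None

rng = np.random.default_rng(0)
stream = np.concatenate([rng.normal(0, 1, 200), rng.normal(1, 1, 200)])
print(cusum(stream, mu0=0.0, mu1=1.0, sigma=1.0))  # detects shortly after t=200
```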

Published in Blog

Qu&Co comments on this publication:

Ever since the publication of Shor's algorithm in 1994, efficient integer factorization has been a key application envisioned for quantum computers, with important implications for the security of some of the most widely used cryptosystems. Because Shor's algorithm requires a large-scale fault-tolerant quantum processor, RSA-3072 encryption was so far believed to remain safe until 2030. However, in recent years hybrid classical-quantum alternatives have been developed for many important quantum algorithms. Such hybrid algorithms can run on current-day noisy and small-scale quantum processors. In this paper, Eric Anschuetz et al. describe such a hybrid alternative to Shor's algorithm, which they call variational quantum factoring (VQF). With some classical pre-processing, VQF requires a number of qubits that scales as O(n), n being the number of bits of the integer being factored. If VQF can be optimized to scale well up to 3000+ qubits, which is very challenging but not completely unthinkable, and if we assume the number of physical qubits in quantum processors doubles every year, quantum processors could have a sufficiently high qubit count to break RSA-3072 as early as 2025. However, as VQF relies on a quantum optimization algorithm (QAOA), its speed-up seems unlikely to be more than quadratic, which means that the runtime for breaking RSA-3072 could very well be prohibitively long, and that doubling the key length to RSA-6144 would again be just as safe as RSA-3072 is currently.
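To see how factoring becomes an optimization problem, the sketch below encodes candidate factors in bits and minimizes a cost that vanishes exactly when p * q = N. VQF itself derives simplified clause equations classically and hands the residual problem to QAOA, so this brute-force toy only illustrates the cost landscape:

```python
from itertools import product

N = 143  # = 11 * 13, a toy semiprime

def bits_to_int(bits):
    return sum(b << i for i, b in enumerate(bits))

def cost(p_bits, q_bits):
    """Zero exactly when the decoded factors multiply to N."""
    p = bits_to_int(p_bits) | 1  # factors of an odd N are odd:
    q = bits_to_int(q_bits) | 1  # fix the lowest bit to 1
    return (N - p * q) ** 2

n_bits = 4  # bits per candidate factor (lowest bit forced to 1)
best = min(product([0, 1], repeat=2 * n_bits),
           key=lambda s: cost(s[:n_bits], s[n_bits:]))
p = bits_to_int(best[:n_bits]) | 1
q = bits_to_int(best[n_bits:]) | 1
print(p, q, p * q)  # prints 11 13 143 (up to factor order)
```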

Published in Blog

Qu&Co comments on this publication:

Quantum computers were initially proposed to efficiently simulate quantum mechanical systems with an exponential speedup compared to classical computers. We are currently in the noisy intermediate-scale quantum (NISQ) era, meaning quantum chips still have only a small number of qubits. This prohibits straightforward quantum simulation of realistic molecules and materials, whose description requires hundreds of atoms and thousands to millions of degrees of freedom to represent the electronic wavefunctions. One research direction that attempts to bypass this restriction is the development of hybrid quantum-classical methods in which the quantum computation is restricted to a small portion of the system.

In this paper, a quantum embedding theory is proposed for the calculation of strongly-correlated electronic states of active regions, while the rest of the system is described with density functional theory (DFT). DFT and its various approximations have been extremely successful in predicting numerous properties of solids, liquids and molecules, and in providing key interpretations of a variety of experimental results, but are often inadequate for describing strongly-correlated electronic states. The novel theory proposed in this paper is built on DFT and does not require the explicit evaluation of virtual electronic states, thus making the method scalable to materials with thousands of electrons. It also includes the effect of exchange-correlation interactions of the environment on the active region, thus going beyond commonly adopted approximations in conventional DFT.

The proposed quantum embedding theory uses a classical and a quantum algorithm to solve the Hamiltonian describing the problem, and yields results in good agreement with existing experimental measurements and with still-tractable computations on classical computing architectures. The theory is tested on various solid-state quantum-information technologies, which exhibit strongly-correlated electronic states. In this way, the authors show how a quantum-classical hybrid approach incorporating DFT enables the study of large-scale material systems while adding the analysis of strongly-correlated dynamics that the quantum simulation algorithm can provide.

Published in Blog

Qu&Co comments on this publication:

In this arXiv submission by Qu & Co and Covestro, a well-known approximation from classical computational methods for quantum chemistry is applied to a quantum computing scheme for simulating molecular chemistry efficiently on near-term quantum devices. The restricted mapping allows a polynomial reduction in both the quantum circuit depth and the total number of measurements required, compared to conventional variational approaches to near-term quantum simulation of molecular chemistry, such as UCCSD. This enables faster runtime convergence of the variational algorithm to a potentially higher accuracy, by using the larger basis set that the restricted mapping makes affordable. The latter is shown via an example simulation of the dissociation curve of lithium hydride. These results open up a new direction for efficient near-term quantum chemistry simulation, as well as for decreasing the effective quantum resource requirements of future fault-tolerant quantum computing schemes.

Published in Blog
