Realization of a scalable Shor algorithm
Monz, Thomas; Nigg, Daniel; Martinez, Esteban A. ...
Science (American Association for the Advancement of Science), 03/2016, Volume 351, Issue 6277
Journal Article
Peer reviewed
Open access
Certain algorithms for quantum computers are able to outperform their classical counterparts. In 1994, Peter Shor came up with a quantum algorithm that calculates the prime factors of a large number vastly more efficiently than a classical computer. For general scalability of such algorithms, hardware, quantum error correction, and the algorithmic realization itself need to be extensible. Here we present the realization of a scalable Shor algorithm, as proposed by Kitaev. We factor the number 15 by effectively employing and controlling seven qubits and four "cache qubits" and by implementing generalized arithmetic operations, known as modular multipliers. This algorithm has been realized scalably within an ion-trap quantum computer and returns the correct factors with a confidence level exceeding 99%.
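Shor's algorithm leaves the factoring arithmetic to a classical post-processing step: once the quantum order-finding routine returns the period r of f(x) = a^x mod N, the factors follow from greatest common divisors. The sketch below (function name `factors_from_period` is mine, not from the paper, and this is the classical half only, not the ion-trap implementation) shows that step for N = 15:

```python
from math import gcd

def factors_from_period(a: int, r: int, N: int):
    """Classical post-processing of Shor's algorithm: given the period r of
    f(x) = a**x mod N (found by quantum order finding), recover nontrivial
    factors of N. Fails (returns None) if r is odd or a**(r//2) == -1 mod N."""
    if r % 2 != 0:
        return None
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None
    p = gcd(y - 1, N)
    q = gcd(y + 1, N)
    if 1 < p < N:
        return (p, N // p)
    if 1 < q < N:
        return (q, N // q)
    return None

# The order of 7 mod 15 is 4 (7, 4, 13, 1), so with base a = 7:
print(factors_from_period(7, 4, 15))  # → (3, 5)
```

The quantum speedup lies entirely in finding r; everything above runs in polynomial time classically.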
The creation of composite quantum gates that implement quantum response functions Û(θ) dependent on some parameter of interest θ is often more of an art than a science. Through inspired design, a sequence of L primitive gates also depending on θ can engineer a highly nontrivial Û(θ) that enables myriad precision metrology, spectroscopy, and control techniques. However, discovering new, useful examples of Û(θ) requires great intuition to perceive the possibilities, and often brute force to find optimal implementations. We present a systematic and efficient methodology for composite gate design of arbitrary length, where phase-controlled primitive gates all rotating by θ act on a single spin. We fully characterize the realizable family of Û(θ), provide an efficient algorithm that decomposes a choice of Û(θ) into its shortest sequence of gates, and show how to efficiently choose an achievable Û(θ) that, for fixed L, is an optimal approximation to objective functions on its quadratures. A strong connection is forged with classical discrete-time signal processing, allowing us to swiftly construct, as examples, compensated gates with optimal bandwidth that implement arbitrary single-spin rotations with subwavelength spatial selectivity.
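The building block described here is easy to reproduce numerically: each primitive is a rotation by the same angle θ about an equatorial axis set by a controllable phase φ, and a composite gate is their ordered product. The toy sketch below (function names are mine; it illustrates the composition, not the paper's characterization or decomposition algorithm) builds Û(θ) for an arbitrary phase pattern:

```python
import numpy as np

def primitive(theta: float, phi: float) -> np.ndarray:
    """One phase-controlled primitive: rotation by theta about the
    equatorial axis cos(phi) X + sin(phi) Y."""
    X = np.array([[0, 1], [1, 0]], dtype=complex)
    Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
    axis = np.cos(phi) * X + np.sin(phi) * Y
    return np.cos(theta / 2) * np.eye(2) - 1j * np.sin(theta / 2) * axis

def composite(theta: float, phases) -> np.ndarray:
    """Net response U(theta) of L primitives applied in order, all rotating
    by the same theta but with individually chosen phases."""
    U = np.eye(2, dtype=complex)
    for phi in phases:
        U = primitive(theta, phi) @ U
    return U

# Four primitives with a mirror-symmetric phase pattern: the result is
# unitary for every theta, and the phases shape how U depends on theta.
U = composite(0.7, [0.0, 1.2, 1.2, 0.0])
assert np.allclose(U.conj().T @ U, np.eye(2))
```

Sweeping θ while holding the phase list fixed traces out the response function Û(θ) whose quadratures the paper's methodology optimizes.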
It is an oft-cited fact that no quantum code can support a set of fault-tolerant logical gates that is both universal and transversal. This no-go theorem is generally responsible for the interest in alternative universality constructions, including magic state distillation. Widely overlooked, however, is the possibility of nontransversal, yet still fault-tolerant, gates that work directly on small quantum codes. Here, we demonstrate precisely the existence of such gates. In particular, we show how the limits of nontransversality can be overcome by performing rounds of intermediate error correction to create logical gates on stabilizer codes that use no ancillas other than those required for syndrome measurement. Moreover, the logical gates we construct, the most prominent examples being Toffoli and controlled-controlled-Z, often complete universal gate sets on their codes. We detail such universal constructions for the smallest quantum codes, the 5-qubit and 7-qubit codes, and then proceed to generalize the approach. One remarkable result of this generalization is that any nondegenerate stabilizer code with a complete set of fault-tolerant single-qubit Clifford gates has a universal set of fault-tolerant gates. Another is the interaction of logical qubits across different stabilizer codes, which, for instance, implies a broadly applicable method of code switching.
The efficient simulation of quantum systems is a primary motivating factor for developing controllable quantum machines. For addressing systems with underlying bosonic structure, it is advantageous to utilize a naturally bosonic platform. Optical photons passing through linear networks may be configured to perform quantum simulation tasks, but the efficient preparation and detection of multiphoton quantum states of light in linear optical systems are challenging. Here, we experimentally implement a boson sampling protocol for simulating molecular vibronic spectra [J. Huh et al., Nat. Photonics 9, 615 (2015)] in a two-mode superconducting device. In addition to enacting the requisite set of Gaussian operations across both modes, we fulfill the scalability requirement by demonstrating, for the first time in any platform, a high-fidelity single-shot photon-number-resolving detection scheme capable of resolving up to 15 photons per mode. Furthermore, we exercise the capability of synthesizing non-Gaussian input states to simulate spectra of molecular ensembles in vibrational excited states. We show the reprogrammability of our implementation by extracting the spectra of photoelectron processes in H2O, O3, NO2, and SO2. The capabilities highlighted in this work establish the superconducting architecture as a promising platform for bosonic simulations; combining them with tools such as Kerr interactions and engineered dissipation will enable the simulation of a wider class of bosonic systems.
Algorithms such as quantum factoring and quantum search illustrate the great theoretical promise of quantum computers, but the practical implementation of such devices will require careful consideration of the minimum resource requirements, together with the development of procedures to overcome inevitable residual imperfections in physical systems. Many designs have been proposed, but none allow a large quantum computer to be built in the near future. Moreover, the known protocols for constructing reliable quantum computers from unreliable components can be complicated, often requiring many operations to produce a desired transformation. Here we show how a single technique, a generalization of quantum teleportation, reduces resource requirements for quantum computers and unifies known protocols for fault-tolerant quantum computation. We show that single quantum bit (qubit) operations, Bell-basis measurements, and certain entangled quantum states, such as Greenberger-Horne-Zeilinger (GHZ) states, all of which are within the reach of current technology, are sufficient to construct a universal quantum computer. We also present systematic constructions for an infinite class of reliable quantum gates that make the design of fault-tolerant quantum computers much more straightforward and methodical.
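The teleportation primitive underlying this construction can be checked with a few lines of linear algebra. Below is a minimal statevector sketch of single-qubit teleportation (my own toy simulation, not the paper's fault-tolerant generalization): for every Bell-measurement outcome, the conditioned Pauli corrections recover the input state on the third qubit.

```python
import numpy as np

I2 = np.eye(2)
H = np.array([[1., 1.], [1., -1.]]) / np.sqrt(2)
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def on(q, gate):
    """Lift a single-qubit gate to the 3-qubit space (qubit 0 leftmost)."""
    ops = [I2, I2, I2]
    ops[q] = gate
    return np.kron(np.kron(ops[0], ops[1]), ops[2])

def cnot(c, t):
    """CNOT built from projectors |0><0|, |1><1| on the control qubit."""
    P0, P1 = np.diag([1., 0.]), np.diag([0., 1.])
    return on(c, P0) + on(c, P1) @ on(t, X)

# Qubit 0 holds the unknown state; qubits 1 and 2 share a Bell pair.
inp = np.array([0.6, 0.8])
bell = np.array([1., 0., 0., 1.]) / np.sqrt(2)
psi = on(0, H) @ cnot(0, 1) @ np.kron(inp, bell)

# For each measurement outcome (m0, m1) on qubits 0 and 1, apply the
# classically conditioned corrections X^m1 then Z^m0 to qubit 2.
for m0 in (0, 1):
    for m1 in (0, 1):
        branch = on(0, np.diag([1. - m0, float(m0)])) @ \
                 on(1, np.diag([1. - m1, float(m1)])) @ psi
        branch = branch / np.linalg.norm(branch)
        if m1:
            branch = on(2, X) @ branch
        if m0:
            branch = on(2, Z) @ branch
        # Qubit 2 now carries the input state in every branch.
        assert np.allclose(branch.reshape(4, 2)[2 * m0 + m1], inp)
```

The paper's point is that moving gates through this primitive (teleporting *through* an operation) lets entangled resource states like GHZ states stand in for hard-to-implement coherent gates.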
While quantum devices rely on interactions between constituent subsystems and with their environment to operate, native interactions alone often fail to deliver targeted performance. Coherent pulsed control provides the ability to tailor effective interactions, a capability known as Hamiltonian engineering. We propose a Hamiltonian engineering method that maximizes desired interactions while mitigating deleterious ones by conducting a pulse sequence search using constrained optimization. The optimization formulation incorporates pulse sequence length and cardinality penalties consistent with linear or integer programming. We apply the general technique to magnetometry with solid-state spin ensembles in which inhomogeneous interactions between sensing spins limit coherence. Defining figures of merit for broadband Ramsey magnetometry, we present novel pulse sequences that outperform known techniques for homonuclear spin decoupling in both spin-1/2 and spin-1 systems. When applied to nitrogen-vacancy (NV) centers in diamond, this scheme partially preserves the Zeeman interaction while zeroing dipolar coupling between negatively charged NV− centers. Such a scheme is of interest for NV− magnetometers that have reached the NV−-NV− coupling limit. We discuss experimental implementation in NV ensembles, as well as the applicability of the current approach to more general spin bath decoupling and to superconducting qubit control.
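The decoupling target can be illustrated with a zeroth-order average-Hamiltonian toy calculation (a textbook WAHUHA-style symmetrization of my own construction, not the paper's optimized sequences): cycling the collective quantization axis through z, x, and y averages the secular dipolar coupling to zero while a rotated fraction of the Zeeman term survives.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def two(a, b):
    return np.kron(a, b)

def rot(axis, angle):
    """Single-spin rotation exp(-i * angle/2 * axis) for a Pauli axis."""
    return np.cos(angle / 2) * I2 - 1j * np.sin(angle / 2) * axis

# Secular dipolar coupling (Pauli convention, 3 ZZ - S.S) and a collective
# Zeeman term for two spins.
Hdd = 2 * two(Z, Z) - two(X, X) - two(Y, Y)
Hz = two(Z, I2) + two(I2, Z)

# Toggling frames of the sequence: equal time with the quantization axis
# along z, then x, then y (collective pi/2 pulses on both spins).
frames = [two(I2, I2),
          two(rot(Y, np.pi / 2), rot(Y, np.pi / 2)),    # z -> x
          two(rot(X, -np.pi / 2), rot(X, -np.pi / 2))]  # z -> y

def average(Ham):
    """Zeroth-order average Hamiltonian over the toggling frames."""
    return sum(U.conj().T @ Ham @ U for U in frames) / len(frames)

assert np.allclose(average(Hdd), 0)        # dipolar coupling decoupled
assert np.linalg.norm(average(Hz)) > 1e-9  # Zeeman term partially preserved
```

The paper's contribution is to search for such frame sequences by constrained optimization rather than by symmetry arguments, trading off Zeeman retention against dipolar suppression.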
At the heart of both lossy compression and clustering is a trade-off between the fidelity and size of the learned representation. Our goal is to map out and study the Pareto frontier that quantifies this trade-off. We focus on the optimization of the Deterministic Information Bottleneck (DIB) objective over the space of hard clusterings. To this end, we introduce the primal DIB problem, which we show results in a much richer frontier than its previously studied Lagrangian relaxation when optimized over discrete search spaces. We present an algorithm for mapping out the Pareto frontier of the primal DIB trade-off that is also applicable to other two-objective clustering problems. We study general properties of the Pareto frontier, and we give both analytic and numerical evidence for logarithmic sparsity of the frontier in general. We provide evidence that our algorithm has polynomial scaling despite the super-exponential search space, and additionally, we propose a modification to the algorithm that can be used where sampling noise is expected to be significant. Finally, we use our algorithm to map the DIB frontier of three different tasks: compressing the English alphabet, extracting informative color classes from natural images, and compressing a group-theory-inspired dataset, revealing interesting features of the frontier and demonstrating how the structure of the frontier can be used for model selection, with a focus on points previously hidden by the cloak of the convex hull.
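For tiny problems the primal DIB frontier can be mapped by brute force, which makes the two-objective structure concrete: each hard clustering z = f(x) yields a point (H(Z), I(Y;Z)), and the frontier keeps only the non-dominated points. The sketch below (my own exponential-time illustration; the paper's algorithm avoids this enumeration) does so for a toy 3x2 joint distribution:

```python
import itertools
from math import log2

def entropy(p):
    return -sum(q * log2(q) for q in p if q > 0)

def dib_point(pxy, clustering):
    """For a hard clustering z = f(x), return (H(Z), I(Y;Z)) under p(x, y)."""
    ny = len(pxy[0])
    nz = max(clustering) + 1
    pzy = [[0.0] * ny for _ in range(nz)]
    for x, row in enumerate(pxy):
        for y, p in enumerate(row):
            pzy[clustering[x]][y] += p
    pz = [sum(r) for r in pzy]
    py = [sum(r[y] for r in pzy) for y in range(ny)]
    # I(Y;Z) = H(Z) + H(Y) - H(Z,Y)
    izy = entropy(pz) + entropy(py) - entropy([p for r in pzy for p in r])
    return entropy(pz), izy

pxy = [[0.3, 0.1], [0.1, 0.3], [0.1, 0.1]]   # toy joint distribution
points = {dib_point(pxy, c) for c in itertools.product(range(3), repeat=3)}
# Pareto frontier: minimize H(Z) while maximizing I(Y;Z).
frontier = sorted(p for p in points
                  if not any(q != p and q[0] <= p[0] and q[1] >= p[1]
                             for q in points))
assert (0.0, 0.0) in frontier    # the trivial one-cluster point survives
assert all(i2 >= i1 for (_, i1), (_, i2) in zip(frontier, frontier[1:]))
```

Sorting by H(Z) makes I(Y;Z) monotone along the frontier, the curve whose logarithmic sparsity and concavity gaps the paper studies.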
The Information Bottleneck (IB) method provides an insightful and principled approach for balancing compression and prediction in representation learning. The IB objective I(X;Z) − βI(Y;Z) employs a Lagrange multiplier β to tune this trade-off. However, in practice, not only is β chosen empirically without theoretical guidance, there is also a lack of theoretical understanding of the relationship between β, learnability, the intrinsic nature of the dataset, and model capacity. In this paper, we show that if β is improperly chosen, learning cannot happen: the trivial representation P(Z|X) = P(Z) becomes the global minimum of the IB objective. We show how this can be avoided by identifying a sharp phase transition between the unlearnable and the learnable regimes that arises as β is varied. This phase transition defines the concept of IB-Learnability. We prove several sufficient conditions for IB-Learnability, which provide theoretical guidance for choosing a good β. We further show that IB-Learnability is determined by the largest confident, typical, and imbalanced subset of the examples (the conspicuous subset), and discuss its relation to model capacity. We give practical algorithms to estimate the minimum β for a given dataset. We also empirically demonstrate our theoretical conditions with analyses of synthetic datasets, MNIST, and CIFAR10.
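The phase-transition claim is easy to see on a discrete example: the trivial encoder P(Z|X) = P(Z) always scores exactly 0 on the IB objective, so learning is only possible once β is large enough that some informative encoder scores below 0. The sketch below (my own toy computation, not the paper's estimation algorithm) evaluates the objective for a 2x2 joint distribution:

```python
from math import log2

def mi(pab):
    """Mutual information I(A;B) from a joint distribution (nested list)."""
    pa = [sum(row) for row in pab]
    pb = [sum(pab[a][b] for a in range(len(pab))) for b in range(len(pab[0]))]
    return sum(pab[a][b] * log2(pab[a][b] / (pa[a] * pb[b]))
               for a in range(len(pab)) for b in range(len(pab[0]))
               if pab[a][b] > 0)

def ib_objective(pxy, pz_x, beta):
    """IB objective I(X;Z) - beta * I(Y;Z) for an encoder pz_x[x][z]."""
    nx, ny, nz = len(pxy), len(pxy[0]), len(pz_x[0])
    px = [sum(row) for row in pxy]
    pxz = [[px[x] * pz_x[x][z] for z in range(nz)] for x in range(nx)]
    pzy = [[sum(pz_x[x][z] * pxy[x][y] for x in range(nx))
            for y in range(ny)] for z in range(nz)]
    return mi(pxz) - beta * mi(pzy)

pxy = [[0.4, 0.1], [0.1, 0.4]]        # toy joint with I(X;Y) > 0
trivial = [[1.0, 0.0], [1.0, 0.0]]    # P(Z|X) = P(Z): ignore X
identity = [[1.0, 0.0], [0.0, 1.0]]   # Z = X: copy everything

assert ib_objective(pxy, trivial, 0.5) == 0.0   # trivial always scores 0
assert ib_objective(pxy, identity, 0.5) > 0.0   # small beta: unlearnable
assert ib_objective(pxy, identity, 5.0) < 0.0   # large beta: learnable
```

Sweeping β locates the crossover where copying X first beats the trivial encoder, a toy version of the learnability threshold the paper characterizes via the conspicuous subset.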