The promise of quantum computers is that certain computational tasks might be executed exponentially faster on a quantum processor than on a classical processor. A fundamental challenge is to build a high-fidelity processor capable of running quantum algorithms in an exponentially large computational space. Here we report the use of a processor with programmable superconducting qubits to create quantum states on 53 qubits, corresponding to a computational state-space of dimension 2^53 (about 10^16). Measurements from repeated experiments sample the resulting probability distribution, which we verify using classical simulations. Our Sycamore processor takes about 200 seconds to sample one instance of a quantum circuit a million times; our benchmarks currently indicate that the equivalent task for a state-of-the-art classical supercomputer would take approximately 10,000 years. This dramatic increase in speed compared to all known classical algorithms is an experimental realization of quantum supremacy for this specific computational task, heralding a much-anticipated computing paradigm.
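Sampling experiments like this one are typically verified with cross-entropy benchmarking (XEB). As a toy illustration (the distribution and qubit count below are made-up stand-ins, not Sycamore data), the linear XEB fidelity estimator can be sketched as:

```python
import random

def linear_xeb_fidelity(ideal_probs, samples, n_qubits):
    """Linear cross-entropy benchmarking fidelity estimate:
    F = 2^n * mean(P_ideal(sampled bitstring)) - 1.
    F is near 1 for samples drawn from the ideal circuit distribution
    and near 0 for uniformly random (fully depolarized) samples."""
    dim = 2 ** n_qubits
    mean_p = sum(ideal_probs[s] for s in samples) / len(samples)
    return dim * mean_p - 1

random.seed(0)
n = 4
dim = 2 ** n
# Toy stand-in for a random circuit's (Porter-Thomas-like) output distribution.
weights = [random.expovariate(1.0) for _ in range(dim)]
probs = [w / sum(weights) for w in weights]

ideal_samples = random.choices(range(dim), weights=probs, k=20000)
noise_samples = [random.randrange(dim) for _ in range(20000)]

f_ideal = linear_xeb_fidelity(probs, ideal_samples, n)   # substantially above 0
f_noise = linear_xeb_fidelity(probs, noise_samples, n)   # near 0
```

In the experiment the ideal probabilities come from classically simulating the circuit; here they are simply the toy distribution itself, which is what makes the example cheap to run.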
Twelve-qubit quantum computing for chemistry
Accurate electronic structure calculations are considered one of the most anticipated applications of quantum computing that will revolutionize theoretical chemistry and other related fields. Using the Google Sycamore quantum processor, Google AI Quantum and collaborators performed a variational quantum eigensolver (VQE) simulation of two intermediate-scale chemistry problems: the binding energy of hydrogen chains (as large as H12) and the isomerization mechanism of diazene (see the Perspective by Yuan). The simulations were performed on up to 12 qubits, involving up to 72 two-qubit gates, and show that it is possible to achieve chemical accuracy when VQE is combined with error-mitigation strategies. The key building blocks of the proposed VQE algorithm are potentially scalable to larger systems that cannot be simulated classically.
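The VQE loop can be caricatured classically: choose a parameterized ansatz, measure the energy, and optimize the parameters. A minimal sketch with a hypothetical one-qubit Hamiltonian H = Z + 0.5 X (an illustration only, not the molecular Hamiltonians used in the paper):

```python
import math

# Toy VQE: minimize <psi(theta)|H|psi(theta)> for H = Z + 0.5*X over the
# one-parameter ansatz |psi(theta)> = RY(theta)|0>, for which
# <Z> = cos(theta) and <X> = sin(theta).
def energy(theta):
    return math.cos(theta) + 0.5 * math.sin(theta)

# In place of a hardware-in-the-loop optimizer, scan the parameter densely.
thetas = [k * 2 * math.pi / 10000 for k in range(10000)]
e_min = min(energy(t) for t in thetas)

# Exact ground-state energy of H = Z + 0.5*X is -sqrt(1 + 0.25),
# so the scan should land on it to high accuracy.
exact = -math.sqrt(1 + 0.25)
```

On hardware, `energy` would be estimated from repeated measurements rather than evaluated in closed form, which is where the error-mitigation strategies mentioned above come in.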
Science, this issue p. 1084; see also p. 1054.
Accurate quantum simulations of chemistry are performed using up to 12 superconducting qubits and 72 two-qubit gates.
The simulation of fermionic systems is among the most anticipated applications of quantum computing. We performed several quantum simulations of chemistry with up to one dozen qubits, including modeling the isomerization mechanism of diazene. We also demonstrated error-mitigation strategies based on N-representability that dramatically improve the effective fidelity of our experiments. Our parameterized ansatz circuits realized the Givens rotation approach to noninteracting fermion evolution, which we variationally optimized to prepare the Hartree-Fock wave function. This ubiquitous algorithmic primitive is classically tractable to simulate yet still generates highly entangled states over the computational basis, which allowed us to assess the performance of our hardware and establish a foundation for scaling up correlated quantum chemistry simulations.
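The basis-rotation primitive can be illustrated numerically: a product of 2×2 Givens rotations composes an orthogonal single-particle (orbital) rotation, which is the mathematical object each hardware Givens-rotation gate implements on a pair of orbitals. A minimal sketch with arbitrary, made-up angles:

```python
import math

def givens(theta, i, j, n):
    """n x n identity with a 2x2 rotation by theta mixing rows/columns i and j
    (the single-particle action of one Givens rotation on orbitals i and j)."""
    g = [[float(r == c) for c in range(n)] for r in range(n)]
    g[i][i] = g[j][j] = math.cos(theta)
    g[i][j] = -math.sin(theta)
    g[j][i] = math.sin(theta)
    return g

def matmul(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

# Compose a ladder of Givens rotations on 4 orbitals (angles are arbitrary).
n = 4
u = [[float(r == c) for c in range(n)] for r in range(n)]
for (i, j), theta in [((0, 1), 0.3), ((1, 2), -0.7), ((2, 3), 1.1), ((0, 1), 0.5)]:
    u = matmul(givens(theta, i, j, n), u)

# The composition stays orthogonal: u^T u = identity, so it is a valid
# noninteracting-fermion (basis) rotation.
ut_u = matmul([list(col) for col in zip(*u)], u)
```

Compiling such a rotation into a circuit of nearest-neighbor two-qubit gates is the additional step the experiment performs; this sketch only shows the linear-algebra layer.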
Scalable quantum computing can become a reality with error correction, provided that coherent qubits can be constructed in large arrays [1, 2]. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, impacts from cosmic rays and latent radioactivity violate these assumptions. An impinging particle can ionize the substrate and induce a burst of quasiparticles that destroys qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices [3-5], but the effect on large-scale algorithms and error correction remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales necessary for error correction. Here, we use space- and time-resolved measurements of a large-scale quantum processor to identify bursts of quasiparticles produced by high-energy rays. We track the events from their initial localized impact as they spread, simultaneously and severely limiting the energy coherence of all qubits and causing chip-wide failure. Our results provide direct insights into the impact of these damaging error bursts and highlight the necessity of mitigation to enable quantum computing to scale.

Cosmic rays flying through superconducting quantum devices create bursts of excitations that destroy qubit coherence. Rapid, spatially resolved measurements of qubit error rates make it possible to observe the evolution of the bursts across a chip.
A universal fault-tolerant quantum computer will require large-scale control systems that can realize all the waveforms required to implement a gateset that is universal for quantum computing. Optimization of such a system, which must be precise and extensible, is an open research challenge. Here, we present a cryogenic quantum control integrated circuit (IC) that is able to control all the necessary degrees of freedom of a two-qubit subcircuit of a superconducting quantum processor. Specifically, the IC contains a pair of 4-8-GHz RF pulse generators for XY control, three baseband current generators for qubit and coupler frequency control, and a digital controller that includes a sequencer for gate sequence playback. After motivating the architecture, we describe the circuit-level implementation details and present experimental results. Using standard benchmarking techniques, we show that the cryogenic CMOS (cryo-CMOS) IC is able to execute the components of a gateset that is universal for quantum computing while achieving single-qubit XY and Z average gate error rates of 0.17%-0.36% and 0.14%-0.17%, respectively, as well as two-qubit average cross-entropy benchmarking (XEB) cycle error rates of 1.2%. These error rates, which were achieved while dissipating just 4 mW/qubit, are comparable to the measured error rates obtained using baseline room-temperature electronics.
Design and Characterization of a ... (Yoo, Juhwan; Chen, Zijun; Arute, Frank ...)
IEEE Journal of Solid-State Circuits, 01/2023, Volume 58, Issue 11. Journal Article, Peer-reviewed.
While large-scale fault-tolerant quantum computers promise to enable the solution to certain classes of problems for which no other efficient approach is known, such a machine is believed to require over a million performant qubits. Scaling today's O(100)-qubit superconducting (SC) quantum computers to this extent while also improving performance carries many daunting challenges, including control of such a large quantum processor (QP). Integrating the control electronics at an intermediate temperature stage within the cryostat is an attractive option, e.g., due to the inherent thermal stability of the cryogenic environment and the feasibility of connecting to the QP via dense low-loss/high-thermal-isolation SC lines [1, 2]. Several cryo-CMOS quantum controllers have been reported, with examples used to control spin [3] and transmon [1, 2] qubits. To date, IC-based quantum control experiments have focused on resonant RF control, but baseband signals are often central to the execution of gates. Here, we report the design and system characterization of a cryo-CMOS IC for generating both the RF and baseband signals required for full control of a SC QP unit-cell, and show its ability to implement the components of a high-fidelity gate set that is universal for quantum computing.
Scalable quantum computing can become a reality with error correction, provided coherent qubits can be constructed in large arrays. The key premise is that physical errors can remain both small and sufficiently uncorrelated as devices scale, so that logical error rates can be exponentially suppressed. However, energetic impacts from cosmic rays and latent radioactivity violate both of these assumptions. An impinging particle ionizes the substrate, radiating high-energy phonons that induce a burst of quasiparticles, destroying qubit coherence throughout the device. High-energy radiation has been identified as a source of error in pilot superconducting quantum devices, but lacking a measurement technique able to resolve a single event in detail, the effect on large-scale algorithms and error correction in particular remains an open question. Elucidating the physics involved requires operating large numbers of qubits at the same rapid timescales as in error correction, exposing the event's evolution in time and spread in space. Here, we directly observe high-energy rays impacting a large-scale quantum processor. We introduce a rapid space- and time-multiplexed measurement method and identify large bursts of quasiparticles that simultaneously and severely limit the energy coherence of all qubits, causing chip-wide failure. We track the events from their initial localised impact to high error rates across the chip. Our results provide direct insights into the scale and dynamics of these damaging error bursts in large-scale devices, and highlight the necessity of mitigation to enable quantum computing to scale.
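Why correlated bursts are so damaging to error correction can be sketched with a toy Monte Carlo (the numbers are hypothetical, and a majority-vote repetition code stands in for a real QEC code): rare chip-wide events put a floor under the logical error rate that adding qubits cannot remove, unlike independent errors.

```python
import random

def logical_rate(n, p_phys, p_burst, trials=40000, seed=1):
    """Logical error rate of an n-qubit majority-vote repetition code under
    independent flips (prob p_phys per qubit) plus rare chip-wide 'burst'
    events during which every qubit flips with prob 0.5."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        p = 0.5 if rng.random() < p_burst else p_phys
        flips = sum(rng.random() < p for _ in range(n))
        failures += flips > n // 2
    return failures / trials

# With independent errors only, 11 qubits all but eliminate logical errors...
clean = logical_rate(11, p_phys=0.02, p_burst=0.0)
# ...but a 1% burst rate leaves a floor of roughly p_burst / 2, no matter
# how many qubits the code uses.
bursty = logical_rate(11, p_phys=0.02, p_burst=0.01)
```

This is the sense in which bursts "violate both of these assumptions": during a burst the per-qubit error is neither small nor uncorrelated, so the code's majority vote fails chip-wide.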
Realizing the potential of quantum computing will require achieving sufficiently low logical error rates. Many applications call for error rates in the \(10^{-15}\) regime, but state-of-the-art quantum platforms typically have physical error rates near \(10^{-3}\). Quantum error correction (QEC) promises to bridge this divide by distributing quantum logical information across many physical qubits so that errors can be detected and corrected. Logical errors are then exponentially suppressed as the number of physical qubits grows, provided that the physical error rates are below a certain threshold. QEC also requires that the errors are local and that performance is maintained over many rounds of error correction, two major outstanding experimental challenges. Here, we implement 1D repetition codes embedded in a 2D grid of superconducting qubits which demonstrate exponential suppression of bit- or phase-flip errors, reducing logical error per round by more than \(100\times\) when increasing the number of qubits from 5 to 21. Crucially, this error suppression is stable over 50 rounds of error correction. We also introduce a method for analyzing error correlations with high precision, and characterize the locality of errors in a device performing QEC for the first time. Finally, we perform error detection using a small 2D surface code logical qubit on the same device, and show that the results from both 1D and 2D codes agree with numerical simulations using a simple depolarizing error model. These findings demonstrate that superconducting qubits are on a viable path towards fault-tolerant quantum computing.
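The exponential suppression being demonstrated can be illustrated with an idealized model of the bit-flip repetition code (independent flips and a single majority vote; the hardware experiment of course uses repeated stabilizer measurement and a real decoder rather than this caricature):

```python
import random

def repetition_logical_rate(n_qubits, p_phys, trials=50000, seed=0):
    """Monte Carlo logical error rate of an n-qubit bit-flip repetition
    code: the encoded bit is lost when a majority of the qubits flip."""
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        flips = sum(rng.random() < p_phys for _ in range(n_qubits))
        failures += flips > n_qubits // 2
    return failures / trials

# Below threshold, the logical error rate falls rapidly with code size:
# each size increase multiplies the rate down, i.e. exponential suppression.
rates = {n: repetition_logical_rate(n, p_phys=0.05) for n in (1, 5, 9)}
```

The analytic version of the same statement is that the failure probability scales as roughly \(\binom{n}{\lceil n/2\rceil} p^{\lceil n/2\rceil}\), which shrinks exponentially in n whenever p is below threshold.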
We demonstrate the application of the Google Sycamore superconducting qubit quantum processor to combinatorial optimization problems with the quantum approximate optimization algorithm (QAOA). Like past QAOA experiments, we study performance for problems defined on the (planar) connectivity graph of our hardware; however, we also apply the QAOA to the Sherrington-Kirkpatrick model and MaxCut, both high-dimensional graph problems for which the QAOA requires significant compilation. Experimental scans of the QAOA energy landscape show good agreement with theory across even the largest instances studied (23 qubits) and we are able to perform variational optimization successfully. For problems defined on our hardware graph we obtain an approximation ratio that is independent of problem size and observe, for the first time, that performance increases with circuit depth. For problems requiring compilation, performance decreases with problem size but still provides an advantage over random guessing for circuits involving several thousand gates. This behavior highlights the challenge of using near-term quantum computers to optimize problems on graphs differing from hardware connectivity. As these graphs are more representative of real-world instances, our results advocate for more emphasis on such problems in the developing tradition of using the QAOA as a holistic, device-level benchmark of quantum processors.
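For concreteness, here is a brute-force statevector sketch of a depth-1 QAOA energy evaluation on a toy MaxCut instance (a 3-vertex triangle; the graph and angle grid are illustrative choices, not taken from the experiment):

```python
import cmath
import math

def qaoa_maxcut_energy(edges, n, gamma, beta):
    """<C> after a depth-1 QAOA circuit on n qubits, by brute-force
    statevector simulation, where C(z) counts the edges cut by bitstring z."""
    dim = 2 ** n
    def cut(z):
        return sum(((z >> u) & 1) != ((z >> v) & 1) for u, v in edges)
    state = [1 / math.sqrt(dim)] * dim                        # |+>^n
    # Cost layer: diagonal phase exp(-i * gamma * C(z)).
    state = [a * cmath.exp(-1j * gamma * cut(z)) for z, a in enumerate(state)]
    # Mixer layer: RX(2*beta) = exp(-i * beta * X) on every qubit.
    c, s = math.cos(beta), -1j * math.sin(beta)
    for q in range(n):
        new = state[:]
        for z in range(dim):
            if not (z >> q) & 1:
                z1 = z | (1 << q)
                new[z] = c * state[z] + s * state[z1]
                new[z1] = s * state[z] + c * state[z1]
        state = new
    return sum(abs(a) ** 2 * cut(z) for z, a in enumerate(state))

edges = [(0, 1), (1, 2), (0, 2)]  # triangle: max cut = 2, random guessing gives 1.5
flat = qaoa_maxcut_energy(edges, 3, 0.0, 0.7)   # gamma = 0: no cost phase, <C> stays 1.5
best = max(qaoa_maxcut_energy(edges, 3, g / 10, b / 10)
           for g in range(-10, 11) for b in range(1, 10))
```

A coarse scan over the two angles already beats the random-guessing baseline, which is the statevector analogue of the experimental landscape scans described above.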
Interaction in quantum systems can spread initially localized quantum information into the many degrees of freedom of the entire system. Understanding this process, known as quantum scrambling, is the key to resolving various conundrums in physics. Here, by measuring the time-dependent evolution and fluctuation of out-of-time-order correlators, we experimentally investigate the dynamics of quantum scrambling on a 53-qubit quantum processor. We engineer quantum circuits that distinguish the two mechanisms associated with quantum scrambling, operator spreading and operator entanglement, and experimentally observe their respective signatures. We show that while operator spreading is captured by an efficient classical model, operator entanglement requires exponentially scaled computational resources to simulate. These results open the path to studying complex and practically relevant physical observables with near-term quantum processors.
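The out-of-time-order correlator (OTOC) behind these measurements can be written F = Tr(W(t)† V† W(t) V)/d with W(t) = U† W U. A minimal two-qubit numerical sketch (the circuit here is a toy illustration, not the 53-qubit experiment): F = 1 while the evolved W(t) still commutes with the probe V, and it drops once the entangling step spreads W onto the qubit V acts on.

```python
# Two-qubit OTOC toy: F = Tr(W(t)^dag V^dag W(t) V) / d,  W(t) = U^dag W U.
INV_SQRT2 = 1 / 2 ** 0.5
I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]
H = [[INV_SQRT2, INV_SQRT2], [INV_SQRT2, -INV_SQRT2]]
CZ = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, -1]]

def mm(a, b):
    n = len(a)
    return [[sum(a[r][k] * b[k][c] for k in range(n)) for c in range(n)]
            for r in range(n)]

def kron(a, b):
    na, nb = len(a), len(b)
    return [[a[i][j] * b[k][l] for j in range(na) for l in range(nb)]
            for i in range(na) for k in range(nb)]

def dag(a):
    n = len(a)
    return [[a[c][r].conjugate() for c in range(n)] for r in range(n)]

def otoc(u, w, v):
    wt = mm(mm(dag(u), w), u)               # Heisenberg-evolved W(t)
    m = mm(mm(mm(dag(wt), dag(v)), wt), v)
    return sum(m[i][i] for i in range(len(m))) / len(m)

W = kron(X, I2)   # "butterfly" operator on qubit 0
V = kron(I2, Z)   # probe operator on qubit 1

identity4 = kron(I2, I2)
f_trivial = otoc(identity4, W, V)              # no evolution: operators commute, F = 1
f_scrambled = otoc(mm(CZ, kron(H, H)), W, V)   # one entangling step: W(t) = Z(x)X, F = -1
```

Here a single CZ layer already carries W(t) onto qubit 1, where it anticommutes with V; in the experiment this spreading is tracked circuit depth by circuit depth, and the fluctuations of F separate operator spreading from operator entanglement.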