The achievements reported in a soon-to-be-published paper in Nature detail progress on a key challenge that has long constrained the field: quantum error correction (QEC) below threshold. That progress, however, arrives accompanied by caveats, complexities, and the understanding that many years of substantial refinement remain before practical quantum devices become a reality.
Central to Google’s claims is the demonstration of a large-scale surface code memory whose logical error rate falls as the code distance grows. The research team reports that their processor achieves a 101-qubit, distance-7 surface code with a logical error rate of roughly 0.143% per cycle, a result that marks a “logical memory” outperforming its best constituent physical qubits by more than a factor of two. The results are among the first to demonstrate at scale the theoretical principles that underpin fault-tolerant quantum computing.
As the preprint notes, “The logical error rate of our larger quantum memory is suppressed by a factor of Λ = 2.14 ± 0.02,” confirming that more qubits can lead to fewer logical errors under certain conditions.
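As a rough illustration of what that suppression factor implies, the sketch below extrapolates the reported distance-7 error rate to larger code distances, assuming the per-two-steps-of-distance scaling ε_d ≈ ε_7 / Λ^((d−7)/2) continues to hold. The starting values (Λ = 2.14, 0.143% per cycle at distance 7) come from the paper; the projected figures are naive extrapolations, not measured results.

```python
# Naive extrapolation of surface-code logical error rates, assuming the
# scaling epsilon_d ~ epsilon_7 / Lambda**((d - 7) / 2) continues to hold.
LAMBDA = 2.14        # reported error-suppression factor per +2 in distance
EPS_D7 = 0.143e-2    # reported logical error rate per cycle at distance 7

def projected_error_rate(distance: int) -> float:
    """Projected per-cycle logical error rate at an odd code distance."""
    return EPS_D7 / LAMBDA ** ((distance - 7) / 2)

for d in (7, 9, 11, 15, 21):
    print(f"distance {d:2d}: ~{projected_error_rate(d):.2e} per cycle")
```

Even under this optimistic extrapolation, reaching the very low logical error rates discussed later in this piece would require substantially larger codes, which is why the scaling result matters more than any single error-rate number.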
As is often the case, more research is needed. The processor demonstrates below-threshold performance for a specifically selected set of operations and architectures. As Professor Alan Woodward of the University of Surrey notes in the BBC’s coverage, one must “be careful not to compare apples and oranges.” Woodward goes on to say that Google had chosen a benchmark problem “tailor-made for a quantum computer” rather than demonstrating “a universal speeding up when compared to classical computers.”
Time will tell whether the development represents the “best quantum processor built to date,” as Hartmut Neven, who leads Google’s Quantum AI lab, put it, or whether the announcement is a more measured step forward.
Beyond the raw numbers, the Willow processor’s system-level improvements shine a light on what it will take to build practical quantum machines. To achieve the reported performance, Google’s team used a distance-7 surface code memory composed of 49 data qubits, 48 measure qubits, and 4 additional leakage removal qubits, stabilizing the system against a variety of error sources. The researchers note that detection probabilities increase with code size owing to finite-size effects and parasitic qubit couplings. Even though the logical qubit surpasses its physical constituents, the hardware remains far from the error rates required for extensive fault-tolerant computation. Current state-of-the-art gate fidelities, on the order of 99.9% (physical error rates near 10^-3), remain many orders of magnitude short of the below-10^-10 logical error rates envisioned for many advanced quantum algorithms.
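For readers keeping score, that qubit budget follows from the standard surface-code layout: a distance-d patch uses d² data qubits and d² − 1 measure qubits, with the four leakage removal qubits reported by the team sitting on top of the generic formula. A minimal sketch of the arithmetic:

```python
# Qubit budget for a distance-d surface code patch:
# d**2 data qubits plus d**2 - 1 measure (syndrome) qubits.
def surface_code_qubits(d: int) -> tuple[int, int]:
    data = d * d              # 49 for d = 7
    measure = d * d - 1       # 48 for d = 7
    return data, measure

data, measure = surface_code_qubits(7)
leakage_removal = 4           # extra qubits reported for Willow's d=7 memory
print(data, measure, data + measure + leakage_removal)  # 49 48 101
```

The quadratic growth in d is the crux of the scaling challenge: each two-step increase in distance buys roughly a factor of Λ in error suppression but costs dozens more physical qubits.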
Moreover, the study identifies rare but significant error bursts and correlated errors that can cause logical failures roughly once every few billion cycles. Additional challenges include integrating real-time decoding strategies that process error-correction information on millisecond—or even microsecond—timescales. The research team’s paper stresses that “fully fault-tolerant quantum computing requires error rates well below those displayed by Willow.”
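To get a feel for the real-time decoding challenge, the back-of-the-envelope sketch below estimates the raw syndrome throughput a decoder must keep up with. The ~1 µs cycle time is an illustrative assumption broadly in line with superconducting surface-code experiments, not a figure quoted above.

```python
# Back-of-the-envelope decoding load for a distance-7 surface code memory.
# Assumptions (illustrative, not from the paper): ~1 us per QEC cycle,
# one measurement bit per measure qubit per cycle.
CYCLE_TIME_S = 1e-6          # assumed syndrome-extraction cycle time
MEASURE_QUBITS = 48          # distance-7 surface code (d**2 - 1)

cycles_per_second = 1 / CYCLE_TIME_S
bits_per_second = cycles_per_second * MEASURE_QUBITS

print(f"{cycles_per_second:.0e} cycles/s")          # ~1e6 cycles per second
print(f"{bits_per_second / 1e6:.0f} Mbit/s of syndrome data to decode")
```

A decoder that falls behind this stream stalls the logical clock, which is why integrating decoding into the control stack, not just improving qubits, is flagged as a central engineering problem.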