Today, the U.S. Department of Energy (DOE) announced $21.4 million in funding for research in Quantum Information Science (QIS) related to both particle physics and fusion energy sciences.
“QIS holds great promise for tackling challenging questions in a wide range of disciplines,” said Under Secretary for Science Paul Dabbar. “This research will open up important new avenues of investigation in areas like artificial intelligence while helping keep American science on the cutting edge of the growing field of QIS.”
Funding of $12 million will be provided for 21 projects of two to three years’ duration in particle physics. Efforts will range from the development of highly sensitive quantum sensors for the detection of rare particles, to the use of quantum computing to analyze particle physics data, to quantum simulation experiments connecting the cosmos to quantum systems.
Funding of $9.4 million will be provided for six projects of up to three years in duration in fusion energy sciences. Research will examine the application of quantum computing to fusion and plasma science, the use of plasma science techniques for quantum sensing, and the quantum behavior of matter under high-energy-density conditions, among other topics.
Fiscal Year 2019 funding for the two initiatives totals $18.4 million, with out-year funding for the three-year particle physics projects contingent on congressional appropriations.
Quantum Convolutional Neural Networks for High-Energy Physics Data Analysis

Photo caption: (From left to right) Brookhaven computational scientist Shinjae Yoo (principal investigator), Brookhaven physicist Chao Zhang, and Stony Brook University quantum information theorist Tzu-Chieh Wei are developing deep learning techniques to efficiently handle sparse data using quantum computer architectures. Data sparsity is common in high-energy physics experiments.
Over the past few decades, the scale of high-energy physics (HEP) experiments and the size of the data they produce have grown significantly. For example, in 2017, the data archive of the Large Hadron Collider (LHC) at CERN in Europe (the particle collider where the Higgs boson was discovered) surpassed 200 petabytes. For perspective, consider Netflix streaming: a 4K movie stream uses about seven gigabytes per hour, so 200 petabytes is equivalent to more than 3,000 years of continuous 4K streaming. Data generated by future detectors and experiments such as the High-Luminosity LHC, the Deep Underground Neutrino Experiment (DUNE), Belle II, and the Large Synoptic Survey Telescope (LSST) will move into the exabyte range (an exabyte is 1,000 times larger than a petabyte).
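As a quick sanity check, the streaming comparison can be worked out directly. The short Python sketch below uses only the figures quoted above (200 petabytes, roughly seven gigabytes per hour of 4K video) and decimal units; the constant names are illustrative.

ARCHIVE_PB = 200                     # LHC data archive size as of 2017, in petabytes
GB_PER_HOUR = 7                      # approximate data rate of a 4K video stream

archive_gb = ARCHIVE_PB * 1_000_000  # 1 petabyte = 1,000,000 gigabytes (SI units)
hours = archive_gb / GB_PER_HOUR     # hours of streaming the archive represents
years = hours / (24 * 365)           # convert hours to years

print(f"{years:,.0f} years of continuous 4K streaming")  # prints ~3,262 years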
These large data volumes present significant computing challenges for simulating particle collisions, transforming raw data into physical quantities such as particle position, momentum, and energy (a process called event reconstruction), and performing data analysis. As detectors become more sensitive, simulation capabilities improve, and data volumes increase by orders of magnitude, the need for scalable data analytics solutions will only increase.
A viable solution could come from QIS. Quantum computers and algorithms have the potential to solve certain problems exponentially faster than is possible classically. The Quantum Convolutional Neural Networks (CNNs) for HEP Data Analysis project will exploit this “quantum advantage” to develop machine learning techniques for handling data-intensive HEP applications. Neural networks are a class of deep learning algorithms loosely modeled on the architecture of neuron connections in the human brain. One type of neural network is the CNN, which is most commonly used for computer vision tasks, such as facial recognition. CNNs are typically composed of three types of layers: convolution layers (convolution is a linear mathematical operation) that extract meaningful features from an image, pooling layers that reduce the number of parameters and computations, and fully connected layers that map the extracted features to a label, as illustrated in the sketch below.
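To make the three layer types concrete, here is a minimal classical CNN written in PyTorch. This is an illustrative sketch, not the project's quantum-accelerated model; the layer sizes, the 28x28 input, and the two-class output are arbitrary assumptions chosen for brevity.

import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        # Convolution layer: extracts local features from the input image
        self.conv = nn.Conv2d(in_channels=1, out_channels=8, kernel_size=3, padding=1)
        # Pooling layer: downsamples, reducing parameters and computation
        self.pool = nn.MaxPool2d(kernel_size=2)
        # Fully connected layer: maps the extracted features to class scores
        self.fc = nn.Linear(8 * 14 * 14, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.pool(torch.relu(self.conv(x)))  # (N, 8, 14, 14) for a 28x28 input
        x = x.flatten(start_dim=1)               # flatten for the linear layer
        return self.fc(x)                        # raw class scores (logits)

# Example: classify a batch of four 28x28 single-channel "detector images"
model = TinyCNN()
scores = model(torch.randn(4, 1, 28, 28))
print(scores.shape)  # torch.Size([4, 2])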
Image caption: Neutrino interaction events are characterized by extremely sparse data, as can be seen in a 3-D image reconstruction from 2-D measurements.
The scientists on the project will develop a quantum-accelerated CNN algorithm and quantum memory optimized to handle extremely sparse data. Data sparsity is common in HEP experiments, in which exotic and interesting signals are produced with low probability; rare events must therefore be extracted from a much larger volume of data. For example, even though the size of the data from one DUNE event could be on the order of gigabytes, the signals represent one percent or less of those data, as the sketch below illustrates. The team will demonstrate the algorithm on DUNE data challenges, such as classifying images of neutrino interactions and fitting particle trajectories. Because the DUNE particle detectors are currently under construction and will not become operational until the mid-2020s, simulated data will be used initially.
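The storage savings from exploiting that sparsity can be shown with a small NumPy/SciPy sketch. The 1,000 x 1,000 toy "event image" and the one-percent occupancy below are illustrative numbers based on the figures in the text, not actual DUNE data.

import numpy as np
from scipy import sparse

rng = np.random.default_rng(seed=0)

# Toy detector readout: a 1000x1000 image where only ~1% of pixels record a hit
dense = np.zeros((1000, 1000), dtype=np.float32)
hit_mask = rng.random(dense.shape) < 0.01             # ~1% occupancy
dense[hit_mask] = rng.random(hit_mask.sum()).astype(np.float32)

# Coordinate (COO) format keeps only (row, column, value) triples for nonzero hits
coo = sparse.coo_matrix(dense)

dense_mb = dense.nbytes / 1e6
coo_mb = (coo.row.nbytes + coo.col.nbytes + coo.data.nbytes) / 1e6
print(f"occupancy: {coo.nnz / dense.size:.1%}")       # roughly 1.0%
print(f"dense: {dense_mb:.2f} MB, sparse (COO): {coo_mb:.2f} MB")  # ~4 MB vs ~0.12 MB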
“Customizing a CNN to work efficiently on sparse data with a quantum computer architecture will not only benefit DUNE but also other HEP experiments,” said principal investigator Shinjae Yoo, a computational scientist in the Computer Science and Mathematics Department of Brookhaven Lab’s Computational Science Initiative (CSI).
Brookhaven National Laboratory
bnl.gov