
That’s a lot of power.
That level of electricity is roughly equivalent to the output of 15 nuclear power plants or 6,000 wind turbines, or the draw of 1.5 billion LED bulbs. See Figure 1 for a sense of scale.
The rapid growth of AI is placing unprecedented demands on power infrastructure. AI models, particularly large language models (LLMs), require immense computational resources and energy. As we look to benefit from these technological advances, decrease electricity consumption sourced from fossil fuels, and lessen the impact of climate change simultaneously, we’re facing a paradox in power electronics that demands attention.
As AI’s energy requirements continue to rise, data centers must adopt efficient power electronics designs, including LLC (inductor–inductor–capacitor) converters and advanced capacitor technologies, to minimize energy losses and conserve space.

Figure 1.
Core capacitor roles in power infrastructure
Regardless of application, capacitors are workhorse components that address voltage stability, noise suppression, and efficiency. These versatile core functions make them indispensable at every level of power delivery in modern data centers.
Energy storage and release is a critical capability for bridging power gaps and maintaining continuity during transient conditions. The ability to provide immediate energy at the right moment is foundational to keeping data centers running without interruption.
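To make this concrete, the sketch below estimates how long a bulk capacitor bank could bridge a power gap for a given load. The component values and load are illustrative assumptions, not figures from any specific design.

```python
# Rough sketch: how long can a bulk capacitor bank bridge a power gap?
# All values below are illustrative assumptions.

def bridge_time_s(capacitance_f, v_start, v_min, load_w):
    """Time the capacitor can carry the load while discharging from v_start to v_min."""
    usable_energy_j = 0.5 * capacitance_f * (v_start**2 - v_min**2)
    return usable_energy_j / load_w

# Example: 10 mF of bulk capacitance on a 400 V bus, allowed to sag to 350 V,
# feeding a 5 kW load during a transient.
t = bridge_time_s(10e-3, 400.0, 350.0, 5_000.0)
print(f"Bridging time: {t*1e3:.1f} ms")   # ~37.5 ms
```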
Voltage stabilization is essential for supporting sensitive equipment. Hardware like power supply units (PSUs) and DC/DC converters rely on capacitors to smooth ripples and maintain a steady output, preventing performance degradation caused by voltage fluctuations.
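As a rough illustration of that smoothing role, the following sketch applies the standard first-order ripple estimate for a buck-style DC/DC output stage; the switching frequency, capacitance, and ESR values are assumptions chosen only to show the arithmetic.

```python
# Illustrative estimate of output voltage ripple for a buck-style DC/DC stage.
# Assumed values; capacitive charging term plus ESR term (first-order estimate).

def output_ripple_v(delta_i_l, f_sw, c_out, esr):
    """Peak-to-peak output ripple from the output capacitor and its ESR."""
    return delta_i_l / (8 * f_sw * c_out) + delta_i_l * esr

ripple = output_ripple_v(delta_i_l=10.0,   # 10 A inductor ripple current
                         f_sw=500e3,       # 500 kHz switching frequency
                         c_out=470e-6,     # 470 uF output capacitance
                         esr=2e-3)         # 2 mOhm equivalent series resistance
print(f"Estimated ripple: {ripple*1e3:.1f} mV peak-to-peak")   # ~25 mV
```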
By filtering out high-frequency noise and suppressing electromagnetic interference (EMI), capacitors help ensure clean and reliable power delivery, which is particularly important in the presence of switching components. As data centers push the boundaries of computational density, this filtering and suppression role only becomes more vital.
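A minimal sketch of the filtering idea: the corner frequency of a simple second-order LC filter stage, computed for assumed component values, sits well below a typical switching frequency so that switching noise is attenuated.

```python
import math

# Illustrative sizing check for a simple LC EMI filter stage (assumed values).
def lc_cutoff_hz(l_h, c_f):
    """Corner frequency of a second-order LC low-pass filter."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_h * c_f))

fc = lc_cutoff_hz(l_h=10e-6, c_f=10e-6)    # 10 uH + 10 uF
print(f"Filter corner: {fc/1e3:.1f} kHz")  # ~15.9 kHz, well below a 500 kHz switching frequency
```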
The journey of power from grid to chip in AI data centers
The journey of power from the electrical grid through a data center is more than a mechanism for power delivery; it's an opportunity to set a foundation for precise, stable, and efficient operations (Figure 2).

Figure 2. A simplified overview illustrating the journey of power from the grid to the chip within an AI server. ATS = Automatic Transfer Switch. LV PD = Low-Voltage Power Distribution. 'Low voltage' in this context means 400V DC. UPS = Uninterruptible Power Supply. PDU = Power Distribution Unit. PSU = Power Supply Unit. POL = Point of Load (e.g., at the AI chip, GPU, etc.).
At the utility level, power delivery begins with electricity flowing from the grid at hundreds of kilovolts, often supported by multiple independent feeds to ensure availability. At the building level, power management begins with receiving medium-voltage power (e.g., tens of kilovolts) from the utility feed and converting it into usable forms (e.g., hundreds of volts) for downstream systems, then conditioning and distributing that power for delivery to the racks. Determining when and where to convert to DC is pivotal; while design trends continue to shift, today it's common to see DC distribution at about 400V.
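Because each of these stages converts power with some loss, the end-to-end efficiency is roughly the product of the per-stage efficiencies. The sketch below uses assumed, illustrative per-stage numbers to show how quickly losses compound from grid to chip.

```python
# Illustrative end-to-end efficiency of the grid-to-chip conversion chain.
# Per-stage efficiencies are assumptions for the sketch, not measured values.

stages = {
    "MV/LV transformer": 0.99,
    "UPS (double conversion)": 0.96,
    "Rack PSU (AC/DC + DC/DC)": 0.96,
    "Intermediate bus converter": 0.97,
    "Point-of-load converter": 0.92,
}

overall = 1.0
for name, eta in stages.items():
    overall *= eta

print(f"End-to-end efficiency: {overall:.1%}")          # ~81%
loss_kw_per_mw = (1.0 - overall) * 1_000
print(f"~{loss_kw_per_mw:.0f} kW of every MW drawn from the grid is lost in conversion")
```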
At the backup level, uninterrupted power delivery, via uninterruptible power supplies (UPS) and fuel-driven generators, safeguards the continuity of data center operations. Capacitors and passive components are essential to this architecture, supporting rapid energy release via supercapacitors in UPS systems, regulating current flow through inductors in inverters, and offering surge protection with capacitors and passive filters. Ensuring a seamless transition to backup power is critical, as even milliseconds of downtime can disrupt sensitive workloads.
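A back-of-envelope sizing sketch for the ride-through role described above: how much supercapacitance would be needed to carry a load across the gap before a generator picks up. All values are assumptions for illustration.

```python
# Illustrative sizing of a supercapacitor bank to ride through a transfer to generator.
# All numbers are assumptions for the sketch.

def required_capacitance_f(load_w, ride_through_s, v_max, v_min):
    """Capacitance needed to supply load_w for ride_through_s while the bank
    discharges from v_max to v_min (usable energy = 0.5*C*(v_max^2 - v_min^2))."""
    energy_j = load_w * ride_through_s
    return 2.0 * energy_j / (v_max**2 - v_min**2)

c = required_capacitance_f(load_w=100e3, ride_through_s=10.0, v_max=400.0, v_min=300.0)
print(f"Required capacitance: {c:.1f} F")   # ~28.6 F at the bus level
```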
At the rack level, power distribution units (PDUs) and power supply units (PSUs) work together to convert and distribute power to servers, storage devices, and GPUs/TPUs, all while operating within the physical constraints of the rack. Thermal management must account for high power density within those physical constraints.
On the board, where power transitions from hundreds of volts to tens of volts, power delivery becomes more refined. Depending on the server’s design, this is the realm of Intermediate Bus Converters (IBCs). To ensure clean and reliable power delivery within tight tolerances, capacitors and other passive components support voltage stabilization, current smoothing, and noise suppression via bulk capacitors, decoupling capacitors, inductors, and EMI filters.
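One common way to reason about board-level decoupling is target impedance: the power rail's impedance must stay below the allowed voltage deviation divided by the expected current step. The sketch below works through that arithmetic with assumed values.

```python
import math

# Illustrative target-impedance sizing for board-level decoupling (assumed values).
def target_impedance_ohm(v_rail, ripple_fraction, transient_current_a):
    """Allowed power-rail impedance so a current step stays within the ripple budget."""
    return (v_rail * ripple_fraction) / transient_current_a

def decoupling_capacitance_f(z_target_ohm, f_low_hz):
    """Capacitance whose impedance drops below Z_target at and above f_low_hz."""
    return 1.0 / (2.0 * math.pi * f_low_hz * z_target_ohm)

z_t = target_impedance_ohm(v_rail=12.0, ripple_fraction=0.03, transient_current_a=60.0)
c = decoupling_capacitance_f(z_t, f_low_hz=10e3)
print(f"Target impedance: {z_t*1e3:.1f} mOhm, bulk decoupling: {c*1e6:.0f} uF")
```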
At the chip level, modern AI processors like GPUs and TPUs operate at extremely low voltages (often ~1V) with exceptionally high currents. Within the board, point-of-load (POL) DC/DC converters step down power to supply these ultra-low voltages, often managing three to five distinct voltage levels simultaneously.
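Some illustrative arithmetic shows why ~1V rails are so demanding: power divided by voltage yields enormous currents, and even fractions of a milliohm in the distribution path dissipate real power. The numbers below are assumptions, not vendor specifications.

```python
# Why ~1 V core rails are demanding: illustrative numbers only.

chip_power_w = 700.0        # assumed accelerator power
core_voltage_v = 1.0
current_a = chip_power_w / core_voltage_v
print(f"Core current: {current_a:.0f} A")           # 700 A at 1 V

# Loss in the last fraction of a milliohm between the POL converter and the die:
path_resistance_ohm = 0.2e-3                        # 0.2 mOhm assumed
loss_w = current_a**2 * path_resistance_ohm
print(f"I^2*R loss in the path: {loss_w:.0f} W")    # ~98 W if unmanaged
```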
Advanced capacitor technologies present efficiency gains at the rack level
While there are opportunities for efficiency gains across the entire journey from grid to chip, the rack level serves as a critical intermediary between building-level power distribution and individual servers and devices.
PSUs are tasked with converting the power distributed to the server rack into power for individual server blades. Depending on the data center's power distribution strategy, the PSU may need to perform both AC/DC and DC/DC conversion or only the DC/DC stage.
Power electronics engineers play a pivotal role in orchestrating this seamless transformation through precise voltage, current, and power adjustments. Completing the transformation efficiently ensures that data centers operate with minimal energy waste. Far from being a simple component, the capacitor is a cornerstone of control, enabling precise management of how voltage, current, and power evolve over time.
Within the functional blocks of a PSU, there are significant opportunities to increase efficiency, reduce energy loss, improve power density, and ensure more space on the rack is dedicated to compute resources.
LLC resonant converters are commonly used for DC/DC blocks, particularly in applications requiring high efficiency, low noise, and compact designs. Within an LLC converter, a resonant capacitor tunes the series and parallel resonances of the LLC tank, which operates in the hundreds of kilohertz. This circuit achieves zero-voltage switching (ZVS) and zero-current switching (ZCS) across the full operating range, enabling higher switching frequencies, a smaller component footprint, and reduced electromagnetic interference (EMI).
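For illustration, the sketch below computes the two characteristic resonant frequencies of an LLC tank, the series resonance of the resonant inductor and capacitor and the lower resonance that includes the magnetizing inductance, using assumed component values.

```python
import math

# Illustrative LLC tank resonances (component values are assumptions).
L_r = 12e-6    # resonant (series) inductance, H
C_r = 30e-9    # resonant capacitance, F
L_m = 60e-6    # magnetizing inductance, H

f_r1 = 1.0 / (2.0 * math.pi * math.sqrt(L_r * C_r))           # series resonance
f_r2 = 1.0 / (2.0 * math.pi * math.sqrt((L_r + L_m) * C_r))   # lower resonance incl. L_m

print(f"Series resonance: {f_r1/1e3:.0f} kHz")   # ~265 kHz
print(f"Lower resonance:  {f_r2/1e3:.0f} kHz")   # ~108 kHz
```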
Another key functional block within the LLC resonant converter is the resonant inverter, which uses a switching network to convert the DC input voltage into a square wave suitable for the LLC circuit. The LLC tank filters out higher-order harmonics, absorbing maximum power at the square wave's resonant-frequency component and passing a near-sinusoidal voltage to the transformer. That waveform is scaled up or down by the transformer, rectified, and filtered into its final form: a converted DC output voltage.
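A common way to analyze this behavior is the first-harmonic approximation (FHA), which models only the fundamental component of the square wave. The sketch below evaluates a widely used FHA form of the LLC voltage gain with assumed normalized parameters; at the resonant frequency the gain is unity.

```python
import math

# First-harmonic-approximation (FHA) gain of an LLC tank, a common textbook form.
# All parameter values are illustrative assumptions.

def llc_gain(fn, ln, q):
    """Voltage gain vs. normalized frequency fn = f_sw/f_r,
    with ln = L_m/L_r and q = sqrt(L_r/C_r)/R_ac."""
    real = 1.0 + 1.0/ln - 1.0/(ln * fn**2)
    imag = q * (fn - 1.0/fn)
    return 1.0 / math.sqrt(real**2 + imag**2)

for fn in (0.7, 0.9, 1.0, 1.2):
    print(f"fn = {fn:.1f}: gain = {llc_gain(fn, ln=5.0, q=0.4):.2f}")
# Below resonance the tank boosts, above resonance it bucks; at fn = 1.0 the gain is 1.0.
```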
The efficiency of this process is closely tied to the Q factor of the resonator, which depends on the equivalent series resistance (ESR) and on how stable the capacitance remains across operating conditions. Low-loss capacitors (i.e., capacitors with low ESR and high Q) have already demonstrated significant efficiency gains in LLC resonant converters for electric vehicles, and they are poised to deliver similar benefits in data centers. In electric vehicles, these capacitors minimize internal losses, reduce heat dissipation, and support higher switching frequencies. These factors are equally essential for the performance and reliability of power conversion systems in data centers. With the right power electronics involved, we can make more efficient use of the power going into data centers, further sustainability goals, and close the energy gap in AI.
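To illustrate why ESR matters, the sketch below relates a resonant capacitor's ESR to its Q factor and its dissipation at an assumed operating point; the component values and current are illustrative.

```python
import math

# Illustrative link between a resonant capacitor's ESR, its Q, and its dissipation.
# Component values and operating point are assumptions.

f_sw = 250e3        # switching frequency, Hz
c_r = 30e-9         # resonant capacitance, F
i_rms = 8.0         # RMS resonant current through the capacitor, A

x_c = 1.0 / (2.0 * math.pi * f_sw * c_r)   # capacitive reactance at f_sw

for esr in (50e-3, 10e-3, 2e-3):           # three capacitor grades
    q = x_c / esr                          # Q = X_C / ESR
    loss_w = i_rms**2 * esr                # dissipation in the capacitor
    print(f"ESR = {esr*1e3:4.0f} mOhm -> Q = {q:6.0f}, dissipation = {loss_w:.2f} W")
```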
Peter Matthews is technical director of Knowles Precision Devices.