Researchers Collaborate With Intel, Google on “Energy Processing Unit”
Data centers use 90 billion kilowatt-hours of electricity a year. Now, a new unit can deliver more power to servers—without sacrificing speed or efficiency.
Data centers play a central role in modern IT infrastructure, providing the high-performance computing needed for enterprise data management and storage. As data center computers have advanced and workload demands have increased, so has the amount of power they consume. Remarkably, the power density (power delivered per unit area) of modern chips is approaching levels seen in nuclear reactors.
Power density of various Intel processors. Image courtesy of Intel
This drastic increase in power density can be understood through Moore’s Law. Over the past several decades, the number of transistors on a single chip has doubled approximately every two years as process nodes have shrunk. However, with the end of Dennard scaling, the power consumed by individual transistors has not decreased proportionally with transistor size, and consequently, the power density of chips has increased dramatically.
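The arithmetic behind this trend can be sketched in a few lines. The numbers below are purely illustrative assumptions (a doubling of transistor density every two years, with per-transistor power shrinking by only 0.7x per doubling rather than the 0.5x needed to hold power density flat), not figures from the article:

```python
# Illustrative sketch with hypothetical scaling factors: transistor density
# doubles every ~2 years, but per-transistor power does not halve in step,
# so power per unit area (power density) keeps climbing.

def power_density_growth(years, density_doubling_period=2.0,
                         per_transistor_power_factor=0.7):
    """Relative power density after `years`, assuming transistor density
    doubles every `density_doubling_period` years while per-transistor
    power shrinks by only `per_transistor_power_factor` per doubling
    (it would need to halve to keep power density constant)."""
    doublings = years / density_doubling_period
    transistor_density = 2.0 ** doublings          # x32 after a decade
    per_transistor_power = per_transistor_power_factor ** doublings
    return transistor_density * per_transistor_power

# After a decade: density x32, per-transistor power x0.7^5 -> ~5.4x the
# power per unit area.
print(round(power_density_growth(10), 2))
```

Under these assumed factors, a chip a decade newer dissipates roughly five times more power per unit area, which is the squeeze that makes efficient power delivery so important.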
New Power Delivery Device for Data Centers
To deliver power to small, densely packed integrated circuits, efficiency is key. If power is delivered to a chip inefficiently, overheating can occur. Overheating is a serious concern for processors and can ultimately cause them to operate improperly or stop working altogether. More efficient power delivery also lowers operating costs, since less waste heat must be removed by the cooling system.
Targeting maximum power delivery efficiency, researchers at Princeton and Dartmouth, in collaboration with Intel and Google, have created an “energy processing unit” that can supply thousands of data center CPUs and hard drives at very high efficiency.
The energy processing unit is small in scale but high in power, reported to increase power delivery to high-speed computers 10-fold. Image used courtesy of Princeton University
The Vertically Stacked Architecture Behind the Unit
The energy processing unit is based on the LEGO-PoL CPU voltage regulator design. This 48 V–1 V voltage regulator design is targeted at data center applications and can handle up to 780 A of current with a peak efficiency of approximately 91%.
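The figures quoted above (48 V input, 1 V output, up to 780 A, roughly 91% peak efficiency) allow a quick back-of-the-envelope power budget. This is only a sketch from the article's stated numbers, not data from the paper itself:

```python
# Back-of-the-envelope power budget for a 48 V -> 1 V regulator delivering
# up to 780 A at ~91% peak efficiency (figures quoted in the article).

V_OUT = 1.0        # output voltage (V)
I_OUT = 780.0      # maximum output current (A)
EFFICIENCY = 0.91  # approximate peak efficiency

p_out = V_OUT * I_OUT        # power delivered to the load: 780 W
p_in = p_out / EFFICIENCY    # power drawn from the 48 V bus
p_loss = p_in - p_out        # power dissipated as heat in the regulator

print(f"output: {p_out:.0f} W, input: {p_in:.1f} W, loss: {p_loss:.1f} W")
```

Even at 91% efficiency, the regulator itself dissipates on the order of 77 W at full load, which illustrates why every extra point of conversion efficiency matters at data center scale.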
The LEGO-PoL architecture uses switched capacitors in a two-stage design. Image courtesy of IEEE
According to the team's published research, the LEGO-PoL regulator architecture makes use of two stages: a switched capacitor unit and a multiphase buck unit. The switched capacitor unit, also referred to as the unregulated stage, efficiently reduces the input voltage from 48 V to a lower bus voltage, such as 12 V. The next stage, also referred to as the regulated stage, regulates the output voltage with a high control bandwidth.
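The division of labor between the two stages can be sketched numerically. Assuming a fixed 4:1 switched-capacitor first stage (48 V to 12 V, the example bus voltage given above) and an ideal buck second stage, the buck's duty cycle follows from D = Vout/Vbus:

```python
# Sketch of the two-stage conversion described above, assuming a fixed
# 4:1 switched-capacitor first stage (48 V -> 12 V) and an ideal
# multiphase buck second stage regulating down to 1 V.

V_IN = 48.0    # input voltage (V)
SC_RATIO = 4   # assumed fixed ratio of the unregulated SC stage
V_OUT = 1.0    # regulated output voltage (V)

v_bus = V_IN / SC_RATIO        # intermediate bus voltage: 12 V
duty_cycle = V_OUT / v_bus     # ideal buck duty cycle D = Vout/Vbus

# Without the SC stage, a buck straight from 48 V would need D = 1/48,
# an extreme duty cycle that is hard to run efficiently.
duty_cycle_direct = V_OUT / V_IN

print(f"bus: {v_bus:.0f} V, staged D: {duty_cycle:.3f}, "
      f"direct D: {duty_cycle_direct:.3f}")
```

Stepping down to an intermediate bus first lets the regulated buck stage operate at a far more practical duty cycle (about 1/12 instead of 1/48), which is one common motivation for two-stage designs like this.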
The LEGO-PoL architecture also vertically stacks the switched capacitor and buck units.
Vertical stacking of LEGO-PoL regulators. Image courtesy of IEEE
The key advantage of vertically stacking the units is the increased current density of the regulator. According to the same paper, vertical stacking presents the added challenge of increasing the overall height and weight of the system, which can be mitigated with more advanced capacitor and packaging technologies.
Power Efficiency: A Data Center Priority
As the demand for data and faster connectivity grows with each passing year, semiconductor corporations are focusing on developing more energy-efficient processors. According to AMD CTO Mark Papermaster, AMD has achieved a 6.79x energy efficiency improvement in its 2022 computing technology versus its 2020 baseline.
Still, processor efficiency improvements are not enough; power must be delivered from the grid to the processors themselves efficiently. Therefore, novel architectures like LEGO-PoL could play an increasingly important role in power delivery in future data centers.