News

Samsung, Micron, and SK Hynix Lead the Charge on HBM3E DRAM

March 13, 2024 by Duane Benson

Three companies, Samsung, Micron, and SK Hynix, have announced moves in the HBM3E high-bandwidth memory market.

According to a recent article published by TrendForce, Samsung, Micron, and SK Hynix are leading the HBM3E race. All three have major initiatives in the arena and are vying for strong returns on their investments in the technology.

 

What Is HBM3E DRAM?

High-bandwidth memory, version 3E (HBM3E), is the latest generation of HBM SDRAM, used in the highest-performance installations such as data center processors, graphics accelerators, and AI accelerators. HBM3E uses the stacked-die format common in on-package configurations, with the CPU and the HBM memory stack sitting on the same interposer/package substrate.

 

Typical processor architecture with HBM. Image used courtesy of Shmuel Csaba Otto Traian via Wikimedia Commons (CC BY-SA 4.0)

HBM was born out of attempts to improve the performance of graphics accelerators by packing as much memory as possible as close to the processor die as possible. HBM is sometimes referred to as a 2.5D/3D architecture: “3D” refers to the three-dimensionally stacked memory, while “2.5D” describes the interconnection with the processing unit. Data travels through through-silicon vias (TSVs) and across a 1,024-bit-wide data path. These 1,024 data lines, plus nearly 700 command, address, and utility connections, are etched into a silicon interposer (the “.5” in “2.5D”) on which both the memory stack and the processor sit.
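
As a rough illustration of what that wide bus buys (the per-pin data rate below is an assumed nominal HBM3E figure, not a spec; actual rates vary by vendor and speed grade), peak bandwidth per stack is simply bus width times per-pin rate:

peak stack bandwidth = bus width × per-pin data rate ÷ 8 bits/byte
≈ 1,024 bits × 9.6 Gb/s ÷ 8 ≈ 1.2 TB/s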

As with most computing advances, the primary improvements from HBM3 to HBM3E are reduced power draw and higher speed. Device capacity has not increased with the 3-to-3E bump: it remains at a maximum of 16 dies per stack for 32 GB, with the ability to pair two stacks on a single processor interposer.
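
That maximum works out as follows, assuming 16 Gb (2 GB) DRAM dies (an assumption for illustration; actual die densities vary by product):

16 dies × 2 GB per die = 32 GB per stack, or 64 GB with two stacks paired on one interposer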

 

Samsung Offers Highest-Capacity HBM to Date

Samsung has developed a 12-layer (12H) HBM3E stack with a total capacity of 36 gigabytes (GB), purportedly the largest-capacity HBM3E product at this time. Samsung’s HBM3E 12H delivers 1.28 TB/s of memory bandwidth. Both the capacity and the speed are 50% higher than those of Samsung’s prior 8-layer (8H) HBM3 product.
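
Those figures are consistent with 24 Gb (3 GB) dies running at 10 Gb/s per pin; these are back-of-the-envelope assumptions for illustration, not specifications confirmed in the announcement:

12 dies × 3 GB per die = 36 GB
1,024 bits × 10 Gb/s per pin ÷ 8 bits/byte = 1.28 TB/s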

 

Samsung HBM3E 12H (12 stacked die). Image used courtesy of Samsung

The 12H uses a thinner nonconductive thermal interconnect layer, allowing it to maintain the same height as the prior 8H stacked product, so the higher-capacity part fits within the same packaging dimensions as the earlier 8H parts. That layer, a thermal compression non-conductive film (TC NCF), is a critical element of the product: it dictates the overall height and carries heat out from the inner layers.
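
Holding the stack height constant while adding layers implies each layer must get thinner. As a simple sketch (the heights here are illustrative, not Samsung’s figures): if an 8H stack of height H averages H/8 per layer (die plus film), fitting 12 layers into the same H requires H/12 per layer:

(H/12) ÷ (H/8) = 8/12 ≈ 0.67, i.e., each layer must shrink by roughly a third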

 

Micron’s HBM3E Offers 30% Lower Power Consumption

Micron recently announced that its HBM3E memory product promises greater than 1.2 TB/s of bandwidth and 30% lower power consumption than similar competing offerings, with capacities of up to 24 GB. The Micron HBM3E launch product, an 8-layer, 24 GB stack, will ship in Q2 2024 with the new Nvidia H200 Tensor Core GPU.
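
Assuming the same 24 Gb (3 GB) die density discussed above (an assumption for illustration; Micron’s die density is not stated here), the launch capacity works out to:

8 dies × 3 GB per die = 24 GB per stack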

 

Micron HBM3E stack. Image used courtesy of Micron

Micron believes that the power efficiency of its HBM3E stacks will lower data center operating costs as demand for high-performance AI computing grows. Micron trails Samsung and SK Hynix in market share and is already looking beyond HBM3E to the next revision, HBM4. It hopes its HBM3E installations and early HBM4 work will give it a path to a greater share of the HBM market.

 

SK Hynix: First to Market With HBM3E?

SK Hynix has reportedly promised the first deliveries of its HBM3E memory stacks this month (March 2024), with the parts slated for Nvidia.

 

SK Hynix HBM3E memory chips. Image used courtesy of SK Hynix

While SK Hynix has not revealed the details of its deliveries or plans, it has reported in financial statements that it expects strong growth in its HBM shipments, up to 100 million units by 2030. Some of that scaling will take place in South Korea, and there are also plans for an HBM plant in Indiana in the U.S. to produce HBM stacks for Nvidia. SK Hynix has also created a dedicated HBM division to enable greater focus on the high-value market.

 

What’s Next in HBM DRAM?

HBM moves RAM closer to the processing unit, a key driver of its adoption. The performance gains it provides are critical for both server-based generative AI and edge-based AI inference.

To that end, Samsung is developing processing-in-memory (PIM), which offloads some of the processing burden into the memory itself by putting an AI engine within each memory bank. This arrangement enables parallel processing and reduces data movement between memory and processor. Meanwhile, Micron is working toward the next industry standard, HBM4, which should be finalized in 2025, with the first parts shipping in 2026. SK Hynix, for its part, is solidifying its HBM management structure and ramping up factory expansion.

With these major initiatives well underway, all three players are hoping for strong growth and continued innovation in the DRAM market.