Rambus Spins DDR5 DIMM Interface ICs for Data Center Servers

July 19, 2022 by Jeff Child

Adding SPD (serial presence detect) hub and temperature sensor ICs to its DDR5 portfolio, Rambus now has a complete chipset offering for DDR5 DIMM interfacing.

Opening the door for PCs and data center servers to better take advantage of DDR5 DRAM, yesterday Rambus announced SPD (serial presence detect) hub and temperature sensor chips to be used as part of a DDR5 memory module chipset.

The announcement aligns with the company’s 3rd annual Rambus Design Summit 2022 virtual event, which is live today.

In this article, we dive into the details of the new chips, examine the key differences between DDR4 and DDR5 DRAM, and share insights from our interview with John Eble, vice president of product marketing for memory interface chips at Rambus.


Hub and Sensors Complete the DDR5 Interface Chipset

The two new chips, the SPD Hub and the Temperature Sensor, complement Rambus’ DDR5 Registering Clock Driver (RCD) chip, and together the three devices form an interface chipset for DDR5 memory modules. The DDR5 RCD IC was announced back in October 2021.


For this DDR5 memory module, the interface chipset consists of an SPD Hub, an RCD, and two Temperature Sensors. 


According to Rambus, the SPD Hub and Temperature Sensor are key devices on a DDR5 memory module, sensing and reporting important data for system configuration and thermal management back to the host processor.

The SPD Hub is designed to be used in both server and client memory modules, including SODIMMs, UDIMMs, and RDIMMs. Meanwhile, the Temperature Sensor is designed for use in server RDIMMs.


Server Demands and the DDR5 Standard

Because DRAM standards tend to stick around for many years, new standards have to be aligned with both current and future computing trends. DDR5’s predecessor DDR4, for example, has been around for almost 10 years, says Eble, and is expected to be around for at least 10 more.

Memory capacity and transfer bandwidth, of course, remain the most fundamental requirements in DDR standards, says Eble. On the processor side, we’re past the era of CPU vendors battling on clock speeds, IPC (instructions per cycle), and threading. Now it’s all about scaling up the number of processor cores, says Eble.


"The megatrend now is all about the number of cores you can pack into a processor socket. Now the memory bandwidth capacity has to scale proportionally to the number of cores. And that's really what I think is driving the new DDR5 standard."


Another tricky requirement is to enable servers to maintain the same memory access granularity as the previous standard. That means doing memory transactions on a cache line basis at 64 bytes.
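The arithmetic behind that granularity can be sketched as follows: DDR5 halves the channel width but doubles the burst length, so one burst still delivers a 64-byte cache line. The channel widths and burst lengths below are the standard JEDEC figures rather than numbers from the article.

```python
# Sketch: how DDR4 and DDR5 both deliver a 64-byte cache line per burst.

def burst_bytes(channel_width_bits: int, burst_length: int) -> int:
    """Bytes transferred by one burst on one channel."""
    return channel_width_bits // 8 * burst_length

# DDR4: one 64-bit channel per DIMM, burst length 8
ddr4 = burst_bytes(channel_width_bits=64, burst_length=8)

# DDR5: two independent 32-bit channels per DIMM, burst length 16 each
ddr5 = burst_bytes(channel_width_bits=32, burst_length=16)

print(ddr4, ddr5)  # both 64: the cache-line granularity is preserved
```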

Another demand, according to Eble, is that you need to have a very low error rate with your DRAM modules, and you need to be able to either recover from the error or know when there's an error.

That means support for both single-error correction and dual-error detection (SECDED) is a must, so a module can survive one of its DRAMs going down. Being able to recover from that failure is also a requirement, says Eble.
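The SECDED mechanism can be illustrated with a toy example: a Hamming(7,4) codeword plus one overall parity bit corrects any single-bit error and flags any double-bit error. Server ECC operates on much wider words and real on-module schemes differ, so this is an illustrative sketch of the principle, not the DDR5 implementation.

```python
# Toy SECDED: Hamming(7,4) plus an overall parity bit (an (8,4) code).

def secded_encode(d):
    """d: 4 data bits -> 8-bit codeword [p1, p2, d0, p4, d1, d2, d3, p0]."""
    p1 = d[0] ^ d[1] ^ d[3]
    p2 = d[0] ^ d[2] ^ d[3]
    p4 = d[1] ^ d[2] ^ d[3]
    code = [p1, p2, d[0], p4, d[1], d[2], d[3]]
    p0 = 0
    for bit in code:       # overall parity over the Hamming codeword
        p0 ^= bit
    return code + [p0]

def secded_decode(c):
    """Return (data_bits, status); status is 'ok', 'corrected', or 'double'."""
    c = list(c)
    s = 0                           # Hamming syndrome: XOR of 1-indexed
    for pos in range(1, 8):         # positions holding a set bit
        if c[pos - 1]:
            s ^= pos
    overall = 0
    for bit in c:
        overall ^= bit
    if s == 0 and overall == 0:
        status = "ok"
    elif overall == 1:              # odd parity: exactly one bit flipped
        if s != 0:
            c[s - 1] ^= 1           # correct the erroneous Hamming bit
        else:
            c[7] ^= 1               # the overall parity bit itself flipped
        status = "corrected"
    else:
        status = "double"           # two errors: detectable, not correctable
    return [c[2], c[4], c[5], c[6]], status
```

A single flipped bit anywhere in the 8-bit word decodes back to the original data with status "corrected"; two flipped bits yield "double", which is the signal a host needs to know an uncorrectable error occurred.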


The DDR5 standard was crafted to keep pace with these demanding requirements from today’s server designs.



As processors race ever faster, heat dissipation is an ongoing challenge. Data center server designs need to remove heat and stay within their power envelope. DIMMs are typically designed to a budget of about 15 W each, and that continues to be the target in DDR5-based designs, says Eble.

“We need to increase efficiency so that, as we go to higher memory bandwidths, the power stays more or less constant,” he says. Temperature sensors help facilitate that. Using such sensors isn’t new in DDR systems, but in the DDR5 era they take on new importance.
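That constraint can be made concrete with some quick arithmetic: at a fixed 15 W budget (the target Eble cites), doubling the data rate halves the energy available per transferred bit. The DDR4-3200 and DDR5-6400 data rates below are illustrative choices, not figures from the article.

```python
# Energy per bit at a fixed DIMM power budget: why per-bit efficiency
# must improve as bandwidth scales.

def picojoules_per_bit(power_watts: float, data_rate_mtps: float,
                       bus_bits: int = 64) -> float:
    """pJ available per bit for a given power budget and data rate (MT/s)."""
    bits_per_second = data_rate_mtps * 1e6 * bus_bits
    return power_watts / bits_per_second * 1e12

ddr4 = picojoules_per_bit(15.0, 3200)  # ~73.2 pJ/bit
ddr5 = picojoules_per_bit(15.0, 6400)  # ~36.6 pJ/bit: half the budget per bit
print(round(ddr4, 1), round(ddr5, 1))
```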

According to Eble, another key demand you see with these advanced memory interfaces is that they require a lot of “training.” Training in this context means that there’s a lot of setup required when the system boots up, and those steps need to be done in a reasonable amount of time. That’s a key reason why DDR5 moved from I2C to the faster I3C as its signal interface, says Eble.
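A rough back-of-envelope sketch shows why the faster sideband bus matters at boot: the time to read a DIMM's SPD contents drops by over an order of magnitude. The 1024-byte SPD size, the 400 kHz I2C and 12.5 MHz I3C clock rates, and the 9-clocks-per-byte figure are assumptions for illustration; real transfers add protocol overhead (addressing, ACKs, turnarounds) that this ignores.

```python
# Approximate time to stream a DIMM's SPD contents over I2C vs. I3C.

def read_time_ms(num_bytes: int, bus_hz: float,
                 clocks_per_byte: int = 9) -> float:
    """Idealized transfer time in ms (8 data bits + ~1 ACK/parity clock per byte)."""
    return num_bytes * clocks_per_byte / bus_hz * 1e3

i2c = read_time_ms(1024, 400e3)   # I2C Fast-mode: ~23 ms per DIMM
i3c = read_time_ms(1024, 12.5e6)  # I3C SDR max rate: well under 1 ms
print(round(i2c, 2), round(i3c, 2))
```

Multiplied across the dozens of DIMMs in a large server, that difference is a meaningful slice of the boot-time training budget.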


Differences Between DDR4 and DDR5

Aside from supporting faster data and clock rates, the DDR5 standard differs from its predecessor DDR4 in several key ways, described in the table below. Importantly, while DDR4 uses a single channel per DIMM, DDR5 implements two channels per DIMM. That extra channel gives the memory controller a degree of parallelism, says Eble. “Efficiency and the ability to handle high loads are important. So in a loaded situation, you should see better latency from DDR5,” he says.


DDR5 offers a number of advantages over DDR4 in performance, capacity, and low power.


The changes to the CA (command/address) bus in DDR5 are a big deal, according to Eble. A 10-bit, double-data-rate CA bus per channel is implemented using a packetized protocol. This provides enough CA bandwidth to enable two channels with a smaller number of DIMM pins.

Eble says the real capacity increases for DDR5 come from raising the maximum size of a single-die DRAM package from 16 Gb to 64 Gb. That means that, for a dual-rank DIMM built from x4 devices, the maximum DIMM capacity moves from 64 GB to 256 GB. And if you need more than that, Eble points out there are DRAM suppliers building stacked configurations of DRAM; the addressing scheme for stacked die moves from a maximum of eight-high to 16-high in DDR5, he says.
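Those capacity figures can be checked with simple arithmetic: a 64-bit data path built from x4 devices needs 16 devices per rank, so two ranks of 64 Gb die yield 256 GB. This sketch excludes the extra ECC devices a server RDIMM carries.

```python
# Verifying the DIMM capacity math for dual-rank x4 configurations.

def dimm_capacity_gb(die_gbit: int, ranks: int = 2,
                     data_bits: int = 64, device_width: int = 4) -> int:
    """Data capacity in GB (ECC devices excluded for simplicity)."""
    devices_per_rank = data_bits // device_width  # 16 x4 devices per rank
    total_gbit = ranks * devices_per_rank * die_gbit
    return total_gbit // 8                        # gigabits -> gigabytes

print(dimm_capacity_gb(16), dimm_capacity_gb(64))  # 64 GB -> 256 GB
```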

Power efficiency is also improved in DDR5, where the I/O voltages are reduced from 1.2 V to 1.1 V. Meanwhile, the DDR5 CA bus has changed from SSTL-style signaling to PODL (pseudo open drain logic), the same scheme as the data bus. “That has the nice feature of, when all the bits are parked high, you draw no static power from the transmission line,” says Eble.


Details: SPD Hub and Temperature Sensor ICs

According to Eble, the new chipset—consisting of the new SPD Hub and Temperature Sensor, plus the previously announced RCD IC—represents a new approach to memory module interfacing for Rambus. Up until now, he says, the company’s memory interface business focused on signal integrity chips that helped increase the bandwidth of the memory module. This new chipset takes a more system-level approach to improve performance and support the DDR standard, in this case, DDR5.

For its part, the SPD Hub is a device that JEDEC has defined for DDR5. According to Eble, the SPD Hub meets or exceeds all JEDEC DDR5 SPD Hub operational requirements (per JEDEC spec JESD300-5A).

Features of the new SPD Hub (SPD5118) include:

  • I2C and I3C bus serial interface support
  • Expanded NVM (non-volatile memory) space for customer-specific applications
  • Low latency to maximize I3C bus rates
  • Integrated temperature sensor

“SPD, which stands for ‘serial presence detect,’ has been on previous JEDEC DDR standards,” says Eble, “But this new device is a bit of a Swiss Army knife and acts as a kind of a gateway to the module.” Like past SPD devices, it contains non-volatile configuration information that defines the key parameters of the DIMM.

The SPD Hub enables the system to discover what type of memory is present, its capacity, and what the timings are. That data is shared over an I3C-based system management bus; previous DDR standards used I2C. “The SPD Hub also functions as an I3C bidirectional redriver,” says Eble, “So it sort of hides the loads on the DIMM from the system, allowing it to run faster.” The SPD Hub also integrates its own high-accuracy temperature sensor, which enables the system to check the temperature in that area of the DIMM.


On a DDR5 DIMM, the SPD Hub communicates with the Temperature Sensors, RCD, and PMIC using the I3C bus.


The other newly announced chip is the Temperature Sensor (TS5110). This device is likewise defined by JEDEC for the DDR5 standard. It is designed to meet or exceed all JEDEC DDR5 Temperature Sensor operational requirements (per JEDEC spec JESD302-1.01), says Eble.

The Temperature Sensor is a standalone I3C sensor device. These devices can be placed at spatially distinct locations on the DIMM, says Eble. Using two of those sensors (as shown in the diagram above), plus the SPD Hub, you can get up to three different measurements on the DIMM.

According to Eble, a data center server’s processor can use the temperature sensor data in many ways. It can regulate fan speed and noise, which also helps reliability. It can raise the refresh rate of the DRAM to avoid refresh errors at high temperatures, or lower it to save power. And, as a last resort, the CPU can decide to throttle the memory bandwidth until the temperature goes down, says Eble.
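Those escalating responses can be sketched as a simple policy, applied in the order a host might escalate them. The temperature thresholds here are hypothetical placeholders; real platforms implement these policies in firmware with vendor-specific trip points.

```python
# Sketch of an escalating DIMM thermal-management policy.

def thermal_policy(dimm_temp_c: float) -> list:
    """Return the list of actions a host might take at this DIMM temperature."""
    actions = []
    if dimm_temp_c > 50:
        actions.append("raise fan speed")
    if dimm_temp_c > 85:
        actions.append("double DRAM refresh rate")   # guard against refresh errors when hot
    if dimm_temp_c > 95:
        actions.append("throttle memory bandwidth")  # last resort
    return actions

print(thermal_policy(90))  # ['raise fan speed', 'double DRAM refresh rate']
```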

Product briefs for both the SPD Hub (SPD5118) and Temperature Sensor (TS5110) are available for download.


DDR5: Built for a New Era of Computing

Just like its predecessor DDR standards, DDR5 had to be crafted to be well aligned with not just current-day processor and memory technology trends but also those of the future. Such care is vital to enable DDR5 to last for decades, like its predecessors. That means an architecture that can feed processors with an ever-increasing number of cores, along with other considerations. These new Rambus chips seem to put the goodness of DDR5 into action, and hopefully, designers of data center servers and advanced PCs can take advantage of this. 


All images used courtesy of Rambus