Rambus Unveils Next-Gen HBM4 Memory Controller: A Leap Forward for AI and Data Centers

Rambus has unveiled more details about its upcoming HBM4 memory controller, promising significant upgrades over current HBM3 and HBM3E memory controllers. While JEDEC, the organization that sets industry standards, is still finalizing the HBM4 memory specifications, Rambus is already teasing its next-generation controller designed to meet the demands of the rapidly evolving AI and data center markets. This new controller is poised to push the boundaries of existing HBM DRAM designs.

Rambus’ HBM4 controller boasts impressive speed, achieving over 6.4Gb/s per pin, faster than first-generation HBM3. Just as important, HBM4 doubles the per-stack interface width from the 1024 bits of HBM3 and HBM3E to 2048 bits, so even at HBM3-class pin speeds it delivers more bandwidth than the faster HBM3E memory, while keeping the same 16-Hi stack configuration and 64GB maximum capacity. The starting bandwidth for HBM4 is a staggering 1,638GB/s (1.64TB/s) per stack, a 33% improvement over HBM3E and a 2x leap over HBM3. HBM3E currently operates at 9.6Gb/s with up to 1.229TB/s of memory bandwidth per stack, but HBM4 will push the limits even further, with speeds of up to 10Gb/s and a massive 2.56TB/s of bandwidth per HBM interface, roughly double the newly released HBM3E. However, the full potential of HBM4 memory won’t be realized for some time: NVIDIA’s upcoming Rubin R100, slated for release in 2026, will be the first GPU to utilize HBM4 technology.
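Those headline numbers follow directly from pin speed and interface width: per-stack bandwidth is simply the per-pin data rate multiplied by the interface width in bits, divided by eight. A quick back-of-the-envelope check, assuming HBM4's 2048-bit per-stack interface (double the 1024 bits of HBM3/HBM3E), reproduces the figures above:

```python
# Per-stack HBM bandwidth: pin speed (Gb/s) x interface width (bits) / 8 = GB/s.
# HBM3 and HBM3E use a 1024-bit per-stack interface; HBM4 doubles it to 2048 bits.

def stack_bandwidth_gb_s(pin_speed_gb_s: float, width_bits: int) -> float:
    """Per-stack bandwidth in GB/s from per-pin data rate and interface width."""
    return pin_speed_gb_s * width_bits / 8

configs = [
    ("HBM3,  6.4 Gb/s x 1024-bit", 6.4, 1024),   # ->  819.2 GB/s
    ("HBM3E, 9.6 Gb/s x 1024-bit", 9.6, 1024),   # -> 1228.8 GB/s (~1.23 TB/s)
    ("HBM4,  6.4 Gb/s x 2048-bit", 6.4, 2048),   # -> 1638.4 GB/s: 2x HBM3, ~33% over HBM3E
    ("HBM4,  10  Gb/s x 2048-bit", 10.0, 2048),  # -> 2560.0 GB/s (2.56 TB/s), ~2x HBM3E
]

for name, speed, width in configs:
    print(f"{name}: {stack_bandwidth_gb_s(speed, width):7.1f} GB/s")
```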

Rambus has also highlighted other key features of HBM4, including ECC (error correction code), RMW (Read-Modify-Write), and Error Scrubbing, all of which contribute to improved data integrity and reliability.
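To see why RMW support matters alongside ECC: a write that is narrower than an ECC codeword can't simply be stored as-is, because the stored check bits would no longer match. The controller instead reads the full codeword, verifies (and if necessary corrects) it, merges in the new bytes, recomputes the check bits, and writes everything back. The sketch below illustrates that flow with a deliberately simple XOR-parity check; real HBM controllers use much stronger SECDED-class codes, and nothing here reflects Rambus's actual implementation.

```python
# Hypothetical illustration of an ECC read-modify-write (RMW) cycle.
# The XOR "check byte" is a toy stand-in for a real SECDED code.

def check_byte(data: bytes) -> int:
    """Toy check value: XOR of all payload bytes (detects single-byte flips)."""
    c = 0
    for b in data:
        c ^= b
    return c

class EccWord:
    """One protected codeword: data payload plus its stored check byte."""
    def __init__(self, data: bytes):
        self.data = bytearray(data)
        self.check = check_byte(self.data)

def partial_write(word: EccWord, offset: int, new_bytes: bytes) -> None:
    """RMW: merge a sub-codeword write, keeping the check bits consistent."""
    # 1. Read and verify the existing codeword (a real SECDED code would
    #    correct a single-bit error here instead of just detecting it).
    if check_byte(word.data) != word.check:
        raise RuntimeError("error detected on read; cannot safely merge")
    # 2. Merge the new data into the payload.
    word.data[offset:offset + len(new_bytes)] = new_bytes
    # 3. Re-encode and write the whole codeword back.
    word.check = check_byte(word.data)

word = EccWord(b"\x11\x22\x33\x44\x55\x66\x77\x88")
partial_write(word, 2, b"\xAA\xBB")         # updates only 2 of the 8 bytes
assert check_byte(word.data) == word.check  # codeword is consistent again
```

Error scrubbing applies the same read-and-verify step as a periodic background sweep over all of memory, so latent single-bit errors are corrected before a second error can turn a word uncorrectable.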

Looking ahead, the industry is actively preparing for the HBM4 era. SK hynix, a leading South Korean memory manufacturer, is currently mass-producing 12-layer HBM3E memory with capacities of up to 36GB and speeds of 9.6Gb/s, and is expected to begin tape-out of next-gen HBM4 memory next month. Meanwhile, Samsung is gearing up for mass production of HBM4 by the end of 2025, with tape-out expected in Q4 2024.

These advancements in memory technology are closely tied to the development of next-generation AI GPUs. NVIDIA’s Rubin R100, specifically, is expected to feature a 4x reticle design (compared to the 3.3x reticle design of the Blackwell generation) and leverage TSMC’s cutting-edge CoWoS-L packaging technology on the new N3 process node. TSMC recently discussed plans for chips exceeding 5.5x reticle size by 2026, utilizing a 100 x 100mm substrate capable of handling 12 HBM sites, compared to the 8 HBM sites on current-gen 80 x 80mm packages. TSMC is also exploring a new SoIC design that would enable even larger chip sizes exceeding 8x reticle size on a larger 120 x 120mm package configuration. However, these plans are still in the early stages, so the Rubin R100 AI GPUs are likely to utilize a 4x reticle size.
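For a sense of scale, the reticle multiples above can be converted to silicon area using the conventional single-exposure reticle limit of roughly 26 x 33mm (858mm²). The quick calculation below is illustrative only; the reticle-limit figure is an industry convention, not a number from TSMC's announcements.

```python
# Rough area math for the reticle multiples mentioned above, assuming the
# conventional ~26 x 33 mm (858 mm^2) single-exposure reticle limit.

RETICLE_MM2 = 26 * 33  # 858 mm^2

designs = [
    ("Blackwell generation", 3.3),  # -> ~2,831 mm^2
    ("Rubin R100",           4.0),  # -> ~3,432 mm^2
    ("TSMC 2026 plan",       5.5),  # -> ~4,719 mm^2
    ("Future SoIC designs",  8.0),  # -> ~6,864 mm^2
]

for name, multiple in designs:
    print(f"{name}: {multiple}x reticle ~= {multiple * RETICLE_MM2:,.0f} mm^2")

# For the packages carrying that silicon: an 80 x 80 mm substrate offers
# 6,400 mm^2 (8 HBM sites today), a 100 x 100 mm substrate 10,000 mm^2
# (12 HBM sites), and a 120 x 120 mm substrate 14,400 mm^2.
```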

The combined advancements in memory technology and chip design represent a significant leap forward for AI and data center applications. As the industry embraces these innovations, we can expect to see further breakthroughs in performance, capacity, and efficiency.
