Rambus has revealed details about its upcoming HBM4 memory controller, promising significant upgrades over current HBM3 and HBM3E technologies. The new controller will offer faster speeds, greater bandwidth, and improved capabilities, pushing the boundaries of high-performance computing for AI and data center applications.
Rambus has introduced the industry’s first HBM4 controller IP, designed to accelerate next-generation AI workloads. This innovative controller enables cutting-edge AI accelerators, graphics, and HPC applications to leverage the capabilities of HBM memory, unlocking significant performance gains.
Samsung, the world’s largest memory chipmaker, is collaborating with TSMC, the global leader in contract chip manufacturing, to develop bufferless HBM4 memory for future AI chips. This strategic partnership aims to solidify their dominance in the evolving AI chip market.
SK hynix is set to begin mass production of its advanced 12-layer HBM3E memory chips for AI applications by the end of September, with shipments expected to start in Q4 2024. The company is also pushing forward with its next-generation HBM4 memory, slated for release in the second half of 2025, designed to be compatible with NVIDIA’s upcoming Rubin R100 AI GPU.
SK hynix is nearing the completion of its HBM4 memory, crucial for NVIDIA’s next-gen Rubin R100 AI GPUs. HBM4 offers significant performance and efficiency improvements, promising a leap forward in AI capabilities.
Samsung is gearing up to become a major player in the high-bandwidth memory (HBM) market, with plans to tape out its next-generation HBM4 memory in Q4 2024 and begin mass production by the end of 2025. This move comes as the AI industry experiences a surge in demand for HBM, driven by companies like NVIDIA and AMD. Samsung's 4nm process node will be key to manufacturing these high-performance chips, offering a competitive edge in the race for AI dominance.