The AI industry’s relentless growth is fueling an unprecedented demand for high-bandwidth memory (HBM), and major players like SK hynix, Samsung, and Micron are working tirelessly to meet this need. With SK hynix already sold out of HBM3 and HBM3E for 2024 and 2025, the race is on to become the leading supplier of the next generation: HBM4.
Samsung has announced plans to tape out its HBM4 memory in the fourth quarter of 2024, with mass production slated for the end of 2025. This move puts Samsung in direct competition with SK hynix, which has already secured a major partnership with TSMC for HBM4 production. Samsung’s ambition is fueled by growing demand from AI GPU makers like NVIDIA and AMD, both eagerly anticipating the release of HBM4. NVIDIA’s next-generation Rubin R100 AI GPU, expected to launch in Q4 2025, is specifically designed to leverage HBM4 memory.
Samsung’s strategy centers on its in-house 4nm process node, which will be used to manufacture the logic dies at the heart of its HBM4 chips. This advanced node, already employed in the Exynos 2400 chipset powering Samsung’s Galaxy S24 smartphones, delivers strong performance with yields exceeding 70%. The choice of 4nm over the previously expected 7nm or 8nm process underlines Samsung’s commitment to delivering cutting-edge HBM technology.
The competition for HBM dominance is fierce, with SK hynix currently leading the market as NVIDIA’s primary HBM supplier. However, Samsung’s strategic investments in advanced manufacturing, coupled with its established position in the semiconductor industry, make it a formidable contender in the race for AI supremacy. The future of AI hinges on delivering increasingly powerful and efficient memory technologies, and Samsung’s HBM4 is poised to play a critical role in shaping that future.