Micron Technology, a global leader in memory and storage, has begun volume production of its HBM3E (High Bandwidth Memory 3E) solution. The 24GB 8H HBM3E will be integrated into NVIDIA's H200 Tensor Core GPUs, which are set to ship in the second calendar quarter of 2024. This milestone positions Micron at the forefront of AI memory, with industry-leading performance and energy efficiency.

As AI demand continues to grow, memory solutions must meet the expanding requirements of advanced workloads. Micron’s HBM3E addresses this need through three key attributes: superior performance, exceptional efficiency, and seamless scalability. The memory delivers pin speeds exceeding 9.2 gigabits per second (Gb/s), providing over 1.2 terabytes per second (TB/s) of memory bandwidth for AI accelerators, supercomputers, and data centers. Its power consumption is approximately 30% lower than that of competing offerings, sustaining maximum throughput with minimal energy use. With 24GB of capacity per stack, HBM3E enables data centers to scale their AI applications efficiently, supporting tasks such as training large neural networks and accelerating inference.
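As a rough sanity check, the quoted bandwidth follows from the quoted pin speed. The sketch below assumes the standard 1024-bit-wide HBM interface per stack; the bus width is our assumption, since the announcement quotes only pin speed and aggregate bandwidth.

```python
def hbm_bandwidth_tbs(pin_speed_gbps: float, bus_width_bits: int = 1024) -> float:
    """Aggregate per-stack bandwidth in TB/s from per-pin speed in Gb/s.

    bus_width_bits: assumed 1024-bit HBM data interface per stack.
    """
    # pins * Gb/s per pin -> Gb/s total; /8 -> GB/s; /1000 -> TB/s
    return pin_speed_gbps * bus_width_bits / 8 / 1000

print(hbm_bandwidth_tbs(9.2))  # ~1.18 TB/s at exactly 9.2 Gb/s
print(hbm_bandwidth_tbs(9.6))  # ~1.23 TB/s, consistent with "over 1.2 TB/s"
```

At exactly 9.2 Gb/s the result is roughly 1.18 TB/s, so the "over 1.2 TB/s" figure corresponds to pin speeds somewhat above that floor.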
Micron achieved this milestone by leveraging its 1-beta process technology, advanced through-silicon vias (TSVs), and other innovations. These advancements highlight Micron's expertise in 2.5D/3D stacking and advanced packaging technologies. As a member of TSMC’s 3DFabric Alliance, Micron plays a pivotal role in shaping the future of semiconductor and system innovations.
The company is further extending its leadership with the sampling of a 36GB 12-high HBM3E, scheduled for March 2024. This next-generation part will likewise deliver more than 1.2 TB/s of bandwidth, with superior energy efficiency relative to competing products. Micron will also showcase its AI memory portfolio and roadmaps at NVIDIA GTC, the global AI conference beginning March 18, underscoring its commitment to driving innovation in memory solutions for AI-driven technologies.