AMD has made a significant stride into the fast-growing generative AI and HPC market by launching the Instinct MI300X AI accelerator and the Instinct MI300A data center APU at its Advancing AI event in San Jose, California. The launch underscores AMD's intent to capitalize on surging demand for advanced AI hardware.
With the Instinct MI300X, AMD introduces a chiplet-based design of unusual ambition. The accelerator features 304 compute units, 192GB of HBM3 capacity, and 5.3 TB/s of memory bandwidth, with a 750W power rating.
The MI300X integrates eight 12Hi stacks of HBM3 memory and eight 3D-stacked 5nm CDNA 3 GPU chiplets atop four 6nm I/O dies, combining several cutting-edge packaging technologies in a single device.
- Up to 10.4 petaflops
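The memory figures quoted above are internally consistent, as a quick back-of-the-envelope check shows. The per-stack capacity below is an assumption (24GB per 12Hi stack, implied by 192GB across eight stacks), and the per-stack bandwidth is derived from the article's aggregate figure rather than taken from an official spec sheet:

```python
# Sanity-check the MI300X memory specs quoted in the article.
HBM3_STACKS = 8             # eight 12Hi stacks, per the article
GB_PER_STACK = 24           # assumed: 192 GB total / 8 stacks
TOTAL_BANDWIDTH_TBPS = 5.3  # aggregate bandwidth, per the article

total_capacity_gb = HBM3_STACKS * GB_PER_STACK
per_stack_bw_gbps = TOTAL_BANDWIDTH_TBPS * 1000 / HBM3_STACKS

print(total_capacity_gb)              # 192, matching the quoted capacity
print(round(per_stack_bw_gbps, 1))    # ~662.5 GB/s per stack (derived)
```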
AMD designed the MI300X accelerator for its generative AI platform with easy integration in mind. The accelerators scale across a node over an Infinity Fabric interconnect, delivering 896 GB/s of throughput between GPUs.
The system also adheres to the Open Compute Project (OCP) Universal Baseboard (UBB) design standard, which simplifies adoption, especially for hyperscalers.
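One plausible way the 896 GB/s GPU-to-GPU figure decomposes is shown below. The per-link count and per-link rate are illustrative assumptions, not figures from the article:

```python
# Hedged sketch: one breakdown that reproduces the 896 GB/s figure.
LINKS_PER_GPU = 7    # assumed number of Infinity Fabric links per GPU
GBPS_PER_LINK = 128  # assumed bandwidth per link

aggregate_gbps = LINKS_PER_GPU * GBPS_PER_LINK
print(aggregate_gbps)  # 896, matching the quoted GPU-to-GPU throughput
```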
Instinct MI300A for Data Center Computing
Meanwhile, the Instinct MI300A marks a milestone of its own as the world's first data center APU. It incorporates thirteen chiplets, several of which use 3D stacking technology.
The APU combines twenty-four Zen 4 CPU cores with a CDNA 3 graphics engine and eight stacks of HBM3. At 153 billion transistors, it is the largest chip AMD has ever produced.
- 13 chiplets, many 3D-stacked, comprising 24 Zen 4 CPU cores
- Up to 4X more performance than Nvidia's H100 GPUs in certain workloads
- 192GB of HBM3
- Twice the performance per watt of Nvidia's competing products
AMD's strategy for the MI300A centers on this unified CPU-GPU design, which the company says yields significant performance improvements across diverse workloads and sets a new bar for data center computing.
AMD also shared performance metrics for the MI300A and MI300X. According to the company's figures, the MI300X surpasses Nvidia's H100 in AI inference and matches it in training, positioning the MI300X as a credible alternative to Nvidia's GPUs.