The new AMD Instinct MI325X accelerator is expected to be available in Q4 2024, while the new AMD Instinct MI350 series accelerators, based on the AMD CDNA 4 architecture, are expected to be available in 2025

AMD showcased the growing momentum of the AMD Instinct accelerator family at the just-concluded Computex 2024, which gathered around 1,500 participating companies from 36 countries.

During the opening keynote by Chair and CEO Dr. Lisa Su, AMD unveiled a multiyear, expanded AMD Instinct accelerator roadmap which will bring an annual cadence of leadership AI performance and memory capabilities at every generation.

In a press release, AMD shared that the updated roadmap starts with the new AMD Instinct MI325X accelerator, which will be available in Q4 2024. Following that, the AMD Instinct MI350 series, powered by the new AMD CDNA 4 architecture, is expected to be available in 2025, bringing up to a 35x increase in AI inference performance compared to the AMD Instinct MI300 Series with AMD CDNA 3 architecture.

The AMD Instinct MI400 series (IMAGE CREDIT: https://wccftech.com/)

Expected to arrive in 2026, the AMD Instinct MI400 series is based on the AMD CDNA “Next” architecture.

“The AMD Instinct MI300X accelerators continue their strong adoption from numerous partners and customers including Microsoft Azure, Meta, Dell Technologies, HPE, Lenovo and others, a direct result of the AMD Instinct MI300X accelerator’s exceptional performance and value proposition,” said Brad McCredie, corporate vice president, Data Center Accelerated Compute, AMD.

“With our updated annual cadence of products, we are relentless in our pace of innovation, providing the leadership capabilities and performance the AI industry and our customers expect to drive the next evolution of data center AI training and inference,” he added.

AMD AI Software Ecosystem Matures

The AMD ROCm 6 open software stack continues to mature, enabling AMD Instinct MI300X accelerators to drive impressive performance for some of the most popular LLMs. On a server using eight AMD Instinct MI300X accelerators and ROCm 6 running Meta Llama-3 70B, customers can get 1.3x better inference performance and token generation compared to the competition.

On a single AMD Instinct MI300X accelerator with ROCm 6, customers can get 1.2x better inference performance and token generation throughput on Mistral-7B compared to the competition.

AMD also highlighted that Hugging Face, the largest and most popular repository for AI models, is now testing 700,000 of their most popular models nightly to ensure they work out of the box on AMD Instinct MI300X accelerators. In addition, AMD is continuing its upstream work into popular AI frameworks like PyTorch, TensorFlow and JAX.


AMD Previews New Accelerators and Reveals Annual Cadence Roadmap

During the keynote, AMD revealed an updated annual cadence for the AMD Instinct accelerator roadmap to meet the growing demand for more AI computing. This will help ensure that AMD Instinct accelerators propel the development of next-generation frontier AI models. The updated AMD Instinct annual roadmap highlighted:

  • The new AMD Instinct MI325X accelerator, which will bring 288GB of HBM3E memory and 6 terabytes per second of memory bandwidth, uses the same industry standard Universal Baseboard server design used by the AMD Instinct MI300 series, and will generally be available in Q4 2024. The accelerator will have industry-leading memory capacity and bandwidth, 2x and 1.3x better than the competition respectively, and 1.3x better compute performance than the competition.
  • The first product in the AMD Instinct MI350 Series, the AMD Instinct MI350X accelerator, is based on the AMD CDNA 4 architecture and is expected to be available in 2025. It will use the same industry standard Universal Baseboard server design as other MI300 Series accelerators and will be built using advanced 3nm process technology, support the FP4 and FP6 AI datatypes and have up to 288 GB of HBM3E memory.  
  • AMD CDNA “Next” architecture, which will power the AMD Instinct MI400 Series accelerators, is expected to be available in 2026 providing the latest features and capabilities that will help unlock additional performance and efficiency for inference and large-scale AI training.

Finally, AMD highlighted the demand for AMD Instinct MI300X accelerators continues to grow with numerous partners and customers using the accelerators to power their demanding AI workloads, including:

  • Microsoft Azure using the accelerators for Azure OpenAI services and the new Azure ND MI300X V5 virtual machines.
  • Dell Technologies using MI300X accelerators in the PowerEdge XE9680 for enterprise AI workloads.
  • Supermicro providing multiple solutions with AMD Instinct accelerators.
  • Lenovo powering Hybrid AI innovation with the ThinkSystem SR685a V3.
  • HPE using the accelerators in the HPE Cray XD675 to accelerate AI workloads.

Read more AMD AI announcements from Computex and watch a video replay of the keynote on the AMD YouTube page.


By Ralph Fajardo

Ralph is a dynamic writer and marketing communications expert with over 15 years of experience shaping the narratives of numerous brands. His journey through the realms of PR, advertising, news writing, as well as media and marketing communications has equipped him with a versatile skill set and a keen understanding of the industry. Discover more about Ralph's professional journey on his LinkedIn profile.