AMD has announced updates across its AI computing portfolio, including EPYC processors that deliver higher performance and broad workload compatibility. AMD says these chips outperform Nvidia's Grace Superchip in key enterprise AI tasks, with up to 2.75x better power efficiency in dual-socket systems and 2.17x higher performance in database workloads. The Ryzen AI MAX+ 395, which combines "Zen 5" CPU cores, a 50 TOPS XDNA 2 NPU, and a large integrated GPU, targets premium thin-and-light devices. The Versal AI Edge adaptive SoC, meanwhile, has earned Class B spaceflight qualification, clearing it for critical space applications.
AMD EPYC CPUs
AMD EPYC CPUs are built for versatility in the data centre, supporting both AI and traditional workloads while minimising operational costs. Based on the x86 architecture, AMD EPYC processors offer better performance, efficiency, and workload compatibility than Arm-based systems, and outperform Nvidia's Grace Superchip in critical tasks such as general-purpose computing, database transactions, AI inference, and high-performance computing (HPC).
These CPUs offer exceptional x86 core density, with up to 192 cores per socket, enabling high-performance execution of AI inference and general compute operations. They also provide industry-leading CPU memory capacity and bandwidth, with support for terabytes of DDR5 memory, which is crucial for scaling traditional workloads and AI models that work on enormous datasets. The x86 architecture enables smooth AI adoption without extensive code rewrites or costly software porting. AMD claims EPYC exceeds the Nvidia Grace CPU Superchip in power efficiency by up to 2.75x, allowing companies to run AI on their existing x86 compute infrastructure while deploying GPU-accelerated workloads as needed.
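To see why terabytes of memory matter, consider a back-of-the-envelope sketch: a model's resident weight footprint is roughly its parameter count multiplied by bytes per parameter. The figures below are illustrative arithmetic, not AMD benchmarks.

```python
# Back-of-the-envelope estimate of the memory needed to hold LLM weights
# entirely in system RAM. Parameter counts and precisions are illustrative;
# real deployments also need room for the KV cache and activations.

BYTES_PER_PARAM = {"fp32": 4, "bf16": 2, "int8": 1, "int4": 0.5}

def weights_gib(num_params: float, precision: str) -> float:
    """Approximate size of the weight tensors alone, in GiB."""
    return num_params * BYTES_PER_PARAM[precision] / 2**30

for params in (7e9, 70e9, 405e9):  # 7B, 70B, and 405B-parameter models
    for prec in ("bf16", "int8"):
        print(f"{params / 1e9:>5.0f}B @ {prec}: {weights_gib(params, prec):8.1f} GiB")
```

Even at INT8, a 405B-parameter model needs hundreds of gigabytes for its weights alone, which is why per-socket memory capacity is a gating factor for keeping large models resident in RAM.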
Fifth-generation AMD EPYC processors are positioned as the best choice for maximising GPU-enabled cluster performance, with up to 20% higher throughput than competing x86 solutions. The processors reach clock speeds of up to 5 GHz, 16% higher than Intel's highest-turbo-frequency part and well above the Nvidia Grace Superchip's 3.1 GHz base frequency. The extra clock speed enables faster data transfer, task orchestration, and GPU feeding, all of which matter for high-volume, low-latency AI training and inference.
Class-leading memory support for AI workloads lets full models and datasets sit in system memory, reducing storage read/write operations, which is vital for real-time AI systems that need quick data access. AMD EPYC processors support up to 160 PCIe Gen5 lanes in dual-socket configurations, enabling fast transfers across GPUs, storage, and networking infrastructure over industry-standard interconnects.
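Some rough arithmetic shows what those lanes buy, assuming the commonly quoted ~3.9 GB/s of usable per-lane, per-direction bandwidth for PCIe Gen5 (32 GT/s with 128b/130b encoding); real-world throughput is lower once protocol overhead is counted, and the checkpoint size below is a hypothetical example.

```python
# Rough PCIe Gen5 bandwidth arithmetic. 32 GT/s per lane with 128b/130b
# encoding gives ~3.94 GB/s of raw payload per lane, per direction; actual
# throughput is lower due to packet and protocol overhead.

GEN5_GBS_PER_LANE = 32 * 128 / 130 / 8  # ~3.94 GB/s per lane, per direction

def link_bandwidth_gbs(lanes: int) -> float:
    return lanes * GEN5_GBS_PER_LANE

x16 = link_bandwidth_gbs(16)     # one GPU attached at x16
total = link_bandwidth_gbs(160)  # every lane in a dual-socket EPYC system

checkpoint_gb = 140  # hypothetical: a 70B-parameter model at bf16
print(f"x16 link:  {x16:6.1f} GB/s -> {checkpoint_gb / x16:4.1f} s to stream {checkpoint_gb} GB")
print(f"160 lanes: {total:6.1f} GB/s aggregate")
```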
AMD Ryzen AI MAX+ 395
The AMD Ryzen AI MAX+ 395, codenamed "Strix Halo," is billed as the most powerful x86 APU on the market, with substantial performance advantages over competitors. It pairs 16 "Zen 5" CPU cores with an XDNA 2 NPU delivering 50+ peak AI TOPS and a large integrated GPU built from 40 AMD RDNA 3.5 compute units. The chip supports 32GB to 128GB of system memory, of which up to 96GB can be assigned as VRAM via AMD Variable Graphics Memory. It excels at consumer AI workloads such as LM Studio, letting users run the latest language models without technical expertise, boosting creativity and productivity.
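For readers who want to try this, LM Studio can expose whatever model it has loaded through an OpenAI-compatible local server (port 1234 by default). The minimal sketch below assumes that server is running with a Gemma-class model loaded; the model identifier shown is a placeholder you would replace with the one reported by GET /v1/models.

```python
# Minimal client for LM Studio's OpenAI-compatible local server.
# Assumes LM Studio is running with its local server enabled on the
# default port (1234) and a model already loaded.
import json
import urllib.request

URL = "http://localhost:1234/v1/chat/completions"

payload = {
    "model": "gemma-3-12b-it",  # placeholder; check GET /v1/models for the real id
    "messages": [{"role": "user", "content": "Summarise PCIe Gen5 in one sentence."}],
    "temperature": 0.7,
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    reply = json.load(resp)

print(reply["choices"][0]["message"]["content"])
```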
AMD says the Ryzen AI MAX+ 395 outpaces competing processors by up to 7x in IBM Granite Vision 3.2 3b, 4.6x in Google Gemma 3 4b, and 6x in Google Gemma 3 12b. With 64GB of memory, the ASUS ROG Flow Z13 can even run the cutting-edge Google Gemma 3 27B Vision model.
AMD Versal AI Edge adaptive SoC
The AMD Versal AI Edge XQRVE2302 is the second radiation-tolerant device in the space-grade (XQR) Versal adaptive SoC range to receive Class B spaceflight qualification. The devices bring rapid AI inferencing to space via built-in AMD AI Engines optimised for machine learning workloads (AIE-ML). These compute engines improve support for the data types commonly used in AI inference, delivering 2x the INT8 and 16x the BFLOAT16 performance of first-generation AI Engines at lower latency.
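To make the INT8 figure concrete, here is a generic symmetric INT8 weight-quantisation sketch in NumPy, showing the kind of reduced-precision data type that AIE-ML accelerates; it illustrates the general technique, not AMD's AIE-ML toolchain or APIs.

```python
# Generic symmetric INT8 quantisation of a weight tensor: the kind of
# reduced-precision representation AI inference engines accelerate.
# Illustrative only; not AMD's AIE-ML tooling.
import numpy as np

def quantize_int8(w: np.ndarray) -> tuple[np.ndarray, float]:
    """Map float weights onto int8 with a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0, 0.02, size=(256, 256)).astype(np.float32)
q, scale = quantize_int8(w)

err = np.abs(dequantize(q, scale) - w).max()
print(f"scale={scale:.6f}, max abs rounding error={err:.6f}")
# int8 storage needs a quarter of the memory of fp32, at a small accuracy cost.
```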
The XQRVE2302 devices deliver powerful computing in a compact 23mm x 23mm package, making them the first adaptive SoC for space applications available at such a small size. Each device integrates a dual-core Arm Cortex-A72 application processor and a dual-core Arm Cortex-R5F real-time processor alongside the AIE-ML array, DSP blocks, and FPGA programmable logic.
Telemetry anomaly detection, wildfire detection, vegetation and crop identification, and cloud detection are all examples of space-based AI inference applications. Alpha Data, a market leader in FPGA-based acceleration boards, has just unveiled a radiation-tolerant reference design for the AMD Versal AI Edge XQRVE2302 adaptive SoCs, allowing for fast, cost-effective development and prototyping for space applications.
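As a flavour of what telemetry anomaly detection involves, here is a generic rolling z-score detector run over a synthetic telemetry channel. It is purely illustrative and unrelated to AMD's or Alpha Data's reference designs; an on-orbit system would instead run a trained model on the AIE-ML engines.

```python
# Toy telemetry anomaly detector: flag samples that deviate from a rolling
# mean by more than k rolling standard deviations. Purely illustrative.
import numpy as np

def rolling_zscore_anomalies(x: np.ndarray, window: int = 50, k: float = 4.0) -> np.ndarray:
    """Return indices of samples more than k rolling std-devs from the rolling mean."""
    flagged = []
    for i in range(window, len(x)):
        ref = x[i - window:i]
        mu, sigma = ref.mean(), ref.std() + 1e-9
        if abs(x[i] - mu) > k * sigma:
            flagged.append(i)
    return np.array(flagged)

rng = np.random.default_rng(1)
telemetry = 28.0 + 0.1 * rng.standard_normal(1000)  # e.g. a bus-voltage channel
telemetry[700] += 2.0                               # injected fault

print(rolling_zscore_anomalies(telemetry))  # expected to flag index 700
```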