The CPU is the Heart of AI Infrastructure

For enterprise architects building AI infrastructure, the CPU may be the biggest blind spot. As enterprises race to adopt AI, the spotlight often falls on GPUs and accelerators, but the CPU remains the unsung hero. In this conversation, AMD’s Ravi Kuppuswamy explains why he believes AMD EPYC CPUs are foundational to modern AI infrastructure: they deliver cost-effective inference and coordinate workloads across edge, cloud, and data center.

Webinar Recording

Additional Notes

Key Moments

  • Model-size strategy (5:32)
  • CPU roles in inference (7:28)
  • Venice + Helios architecture (13:03)

Jump to what matters to you:

| Timestamp | Title | Description |
|-----------|-------|-------------|
| 01:27 | AI Adoption Speed & Impact | Enterprises are moving fast; why agile infrastructure matters. |
| 02:27 | Why the CPU Still Matters in AI | The CPU remains central to compute coordination and adaptability. |
| 04:18 | What Makes EPYC Unique | Performance, core count, and bandwidth: a breakdown of why EPYC leads. |
| 05:32 | Agentic AI & the Morphing Definition of AI | Ravi explains how AI’s changing shape affects hardware strategy. |
| 07:28 | Inference vs. Training: The CPU’s Dual Role | A technical look at how CPUs support small and large model tasks. |
| 09:00 | Core Count, Memory, and I/O for AI Workloads | Tuning infrastructure to workload diversity. |
| 11:05 | Scalability Through Chiplet Design | AMD’s chiplet architecture enables right-sizing across use cases. |
| 13:03 | What’s New: Venice and the Helios Platform | The next-gen Zen cores and a total AI solution architecture. |
| 15:13 | Openness as a Cultural Advantage | AMD’s philosophy of open standards and collaborative innovation. |

Want to discuss AMD EPYC for your AI infrastructure?