AI Infrastructure Is Not One-Size-Fits-All, and Why That Matters
AI infrastructure isn’t one-size-fits-all, and assuming it is can cost you performance, security, and time to market. AMD’s Ravi Kuppuswamy shares how enterprises are balancing on-prem and cloud deployments, securing massive data flows, and adapting to smaller, distributed AI models. Learn why flexibility, open standards, and data-center CPUs are critical to AI performance and time to market at scale.
- Webinar Recording
- Presentation Materials
- Additional Notes
Key Moments
- Flex architecture (02:32)
- Security + standardization (03:35)
- Agentic AI and CPU fit (08:58)
Jump to what matters to you:
| Timestamp | Title | Description |
| --- | --- | --- |
| 01:25 | AI Workloads: It’s Not Just LLMs | Small models, edge AI, and why the “one size” myth is dangerous. |
| 02:32 | Flex Out, Flex In: Hybrid Deployment Models | Why enterprises are reconsidering where AI runs. |
| 03:35 | Data Integrity & Security in AI | How to secure massive data flows across environments. |
| 04:59 | Standardization + Kubernetes | Replicating environments across cloud and on-prem. |
| 06:52 | Modernizing the Data Center | Replacing legacy CPUs with EPYC: sustainability and scale benefits. |
| 08:58 | Agentic AI + CPU Efficiency | Why CPUs are well-suited for smaller, distributed models. |
| 10:28 | Scaling with Open Standards (UAL) | Open accelerator links and AMD’s multi-element architecture. |
| 11:55 | The Helios Platform | How AMD combines CPU, GPU, NPU, and networking into a total AI solution. |