

Chipmakers such as Advanced Micro Devices Inc. are moving rapidly to supply processors for the edge computing market.
The company announced two new chip lineups this week designed to power edge computing devices, such as subsystems in smart automobiles. The latest AMD processors have circuits built for specific tasks, including running artificial intelligence models. AMD’s new chips are based on field-programmable gate array technology, further evidence that the chip industry will continue to offer alternatives to GPU processing as AI use cases expand.
“There’s a lot of buzz around AI being run on large language models, that generative AI needs to be processed in a GPU,” said Suresh Andani (pictured), senior director of cloud product management at AMD. “But the reality is there’s a set of workloads where the dataset sizes are not so large that you literally need offload accelerators. A lot of that can be run very efficiently on CPUs using CPU inferencing, especially for edge use cases.”
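Andani’s point about CPU inferencing is easy to picture in code. The sketch below is a minimal, purely illustrative example that runs a pre-exported ONNX model entirely on the host CPU via ONNX Runtime’s CPU execution provider; the model file name and tensor shapes are assumptions, not details from AMD.

```python
# Minimal sketch of CPU inferencing: the whole forward pass stays on the
# host CPU, with no GPU or offload accelerator in the loop.
# "model.onnx" and the 1x3x224x224 input shape are illustrative placeholders.
import numpy as np
import onnxruntime as ort

# CPUExecutionProvider pins all inference work to the CPU.
session = ort.InferenceSession("model.onnx", providers=["CPUExecutionProvider"])

input_name = session.get_inputs()[0].name
batch = np.random.rand(1, 3, 224, 224).astype(np.float32)  # stand-in input tensor

outputs = session.run(None, {input_name: batch})
print(outputs[0].shape)
```

For the edge use cases Andani describes, the appeal is that the general-purpose cores already running a device’s other workloads can absorb this inference step as well.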
Andani spoke with theCUBE Research analysts Rob Strechay and Rebecca Knight at Google Cloud Next 2024, during an exclusive broadcast on theCUBE, SiliconANGLE Media’s livestreaming studio. They discussed key trends AMD is seeing as it deploys processors for the AI market and its ongoing work with major hyperscalers such as Google Cloud. (* Disclosure below.)
As data volumes have grown significantly, moving information to where it can be processed by AI models has become more challenging. AMD has seen customers gravitate toward applying AI directly to their existing data stores, which is fueling CPU usage as well.
“It’s much easier to bring AI to the data through the same CPU,” Andani said. “Bringing AI to the data is a motion we are seeing because the data is already in the cloud … you’re already using the same CPU-based hardware for running your foundational workloads. Just infuse AI into it.”
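That “bring AI to the data” motion can be sketched just as simply: score records in place on the same CPU that hosts the data store, instead of exporting them to a separate GPU cluster. Everything below, including the SQLite database, its schema and the sentiment model, is a hypothetical stand-in rather than anything AMD described.

```python
# Minimal sketch of bringing AI to the data: a CPU-bound model scores rows
# where they already live. The "reviews" table and its "sentiment" column
# are hypothetical.
import sqlite3
from transformers import pipeline

# device=-1 keeps the Hugging Face pipeline on the CPU.
classifier = pipeline("sentiment-analysis", device=-1)

conn = sqlite3.connect("reviews.db")  # hypothetical local data store
rows = conn.execute("SELECT id, body FROM reviews LIMIT 100").fetchall()

# Infuse AI into the existing data path: no data leaves the machine.
for row_id, body in rows:
    label = classifier(body)[0]["label"]
    conn.execute("UPDATE reviews SET sentiment = ? WHERE id = ?", (label, row_id))

conn.commit()
```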
AMD has played a key role with hyperscalers such as Google Cloud by supplying recent generations of its EPYC processors, including Milan and Genoa, to drive virtual machines. The goal is to provide high-performance computing tailored to a wide range of workloads.
“As we innovate as AMD and bring more and more compute power to the cloud, what you can do with that data is a lot more than ever before,” Andani said. “New use cases are being enabled with advancements in compute power, and you want to do it at very low latency. The number of cores we pack in and the amount of I/O and memory bandwidth we provide make it possible for smart engineers in these customer companies to leverage that.”
AMD has also been active in the open-source community, aiming to give users greater self-sufficiency in the software layer that drives its GPUs. Earlier this month, the company confirmed that it would open-source its Micro Engine Scheduler firmware for Radeon GPUs.
“Open sourcing our software is a huge, huge advantage, at least in what we believe and what we are hearing from our top tier-one customers, because it’s impossible to have all the AI experts under one roof,” Andani said. “That’s a huge tailwind for us that our customers are really appreciating.”
Here’s the complete video interview, part of SiliconANGLE’s and theCUBE Research’s coverage of Google Cloud Next 2024:
(* Disclosure: Advanced Micro Devices Inc. sponsored this segment of theCUBE. Neither AMD nor other sponsors have editorial control over content on theCUBE or SiliconANGLE.)