Here's What AMD Is Up Against in the A.I. Market

Advanced Micro Devices (NASDAQ: AMD) made headlines on Monday with the introduction of Radeon Instinct, a line of GPU accelerators aimed specifically at deep learning systems. The company has offered products targeting data centers and high-performance computing for years with its FirePro graphics cards. But Radeon Instinct, along with the MIOpen open-source software that AMD announced alongside the new products, marks a shift in the company's strategy.

Image source: AMD.

Radeon Instinct will come in three flavors. The MI6 is based on AMD's Polaris GPU architecture, which powers the mainstream gaming graphics cards the company launched earlier this year. This card is aimed at neural network inference, where an already-trained neural network is put to work in an application. The MI8 is based on AMD's older Fiji architecture, offering greater processing power and memory bandwidth than the MI6 but a quarter of the memory. This card is also likely aimed at inference.

The MI25 is the big daddy of the bunch. Built around AMD's upcoming Vega GPU architecture, which will power the company's high-end gaming graphics cards next year, the MI25 offers a whopping 25 TFLOPS of half-precision floating-point performance. This card is aimed at neural network training, a far more computationally intensive process than inference. For comparison, NVIDIA's (NASDAQ: NVDA) Tesla P100 GPU tops out at 21.2 TFLOPS of half-precision performance in the best-case scenario, using the company's homegrown NVLink technology instead of standard PCIe.
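For readers curious why training demands so much more horsepower than inference, here is a minimal sketch in plain NumPy. It is purely illustrative and not drawn from AMD's or NVIDIA's software: inference is a single forward pass through a network, while training repeats a forward pass, a backward (gradient) pass, and a weight update over many batches. The tiny layer size, loss function, and learning rate below are assumptions chosen for brevity.

```python
# Illustrative only: contrasts inference (one forward pass) with training
# (forward + backward + update, repeated). Not any vendor's actual code.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 10)) * 0.01   # weights of a single dense layer (assumed sizes)
x = rng.standard_normal((32, 128))          # a batch of 32 input vectors
y = rng.integers(0, 10, size=32)            # integer class labels

def forward(x, W):
    """Inference: one matrix multiply followed by a softmax."""
    logits = x @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Inference: a single forward pass per batch.
probs = forward(x, W)

# Training: forward pass, gradient (backward) pass, and weight update,
# repeated over many batches -- hence the much higher compute cost.
for step in range(100):
    probs = forward(x, W)
    grad_logits = probs.copy()
    grad_logits[np.arange(len(y)), y] -= 1.0   # gradient of cross-entropy w.r.t. logits
    grad_W = x.T @ grad_logits / len(y)        # backpropagate to the weights
    W -= 0.1 * grad_W                          # gradient-descent update
```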

Radeon Instinct products are expected to begin shipping during the first half of 2017. This news comes about one month after AMD scored a big data center win, with Alphabet's (NASDAQ: GOOG)(NASDAQ: GOOGL) Google announcing that it would integrate the company's FirePro graphics cards into its cloud platform. Despite this apparent momentum, AMD remains far behind NVIDIA in the data center market generally and the A.I. acceleration market specifically. With additional competition from Intel (NASDAQ: INTC) and even Google itself, it won't be smooth sailing for AMD.

NVIDIA's dominance

NVIDIA has been talking about deep learning for years. The company's data center business is booming thanks to that early focus, closing in on a $1 billion annual run rate after tripling revenue year-over-year during the third quarter.

Image source: NVIDIA.

NVIDIA's Tesla GPUs are already offered by most of the major cloud platforms, including Amazon Web Services, Microsoft Azure, and IBM Cloud. AMD's GPUs are absent from all three. Google will be adding NVIDIA's GPUs to its cloud platform along with AMD's, so while that announcement was a big win for AMD, it was also a win for NVIDIA.

According to an analyst at Nomura, NVIDIA enjoys an 80% share of the accelerator market. A vast amount of software has been ported to run on NVIDIA's GPUs, ranging from computational finance to scientific simulations, creating switching costs for those already using its products. The market for A.I. acceleration is still in its infancy, so AMD still has a chance to catch up. But NVIDIA has a clear lead.

Intel and Google

Intel has been trying to break into the accelerator market for years with its Xeon Phi line of products. The most recent version, Knights Landing, has the advantage of also acting as the main processor, offering customers potential cost savings. Intel stated earlier this year that it was aiming to move 100,000 units in 2016, a number that would make the product a major force in the accelerator market.

Intel is planning to launch a version of Xeon Phi, Knights Mill, aimed specifically at deep learning. Like Knights Landing, Knights Mill will be able to act as the main processor when it launches next year, giving NVIDIA serious competition in the A.I. acceleration market on top of AMD's upcoming products.

Image source: Intel.

Google may pose a threat to all three companies in the long run. The search giant announced earlier this year that it had designed an application-specific integrated circuit (ASIC), the Tensor Processing Unit (TPU), in an effort to accelerate deep learning tasks. Google is reportedly using more than 1,000 TPUs in its data centers already.

In the same way that a GPU is more efficient than a CPU at graphics processing because it's designed at the hardware level for the kind of math involved, Google's TPU is more efficient than a GPU at inference because it's designed at the hardware level for that task. Google claims that its TPUs provide an order-of-magnitude improvement in performance per watt compared to GPUs.

Late to a party that could end early

AMD is playing catch-up when it comes to deep learning. Like NVIDIA, AMD is betting that GPUs are the future of A.I. acceleration. But ASICs like Google's TPU could upend both companies' long-term plans. If other cloud computing vendors move to design their own custom chips for deep learning, the GPU vendors could be left out in the cold.

AMD's Radeon Instinct is the company's first real attempt at going after NVIDIA in the A.I. acceleration market. If the company can price the cards competitively, it could be a very profitable endeavor. But the threat of Intel is looming, and the potential for custom chips to eventually displace GPUs is very real. AMD investors should be happy that the company is taking deep learning seriously. But they also shouldn't get ahead of themselves.


Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Timothy Green has no position in any stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and Nvidia. The Motley Fool recommends Intel. We Fools may not all hold the same opinions, but we all believe that considering a diverse range of insights makes us better investors. The Motley Fool has a disclosure policy.