Qualcomm Takes Another Crack at the Data Center with an AI Chip

Mobile chip giant Qualcomm (NASDAQ: QCOM) has been trying to insert itself into the data center market for quite a while. The company unveiled its Centriq family of ARM-based server CPUs back in 2017, aimed at stealing market share from Intel, which dominates that market. But it underestimated how difficult it would be to sell customers on a chip built on a different architecture from the industry standard. Qualcomm hasn't entirely abandoned the effort, but its ambitions have been greatly scaled back.

Qualcomm's server CPUs may never find much success, but the growth of artificial intelligence has created a vast market for specialized chips tailor-made for accelerating AI workloads. Graphics chip company NVIDIA (NASDAQ: NVDA), for example, has grown its data center segment into a business with around $3 billion in annual revenue. NVIDIA's GPUs are used in a wide variety of other applications, but AI has been the company's focus in recent years.

NVIDIA isn't the only company building AI chips. Alphabet's Google is already working on the third generation of its Tensor Processing Unit, an AI chip that it uses in its own data centers. Intel is working on various initiatives, including graphics cards and more exotic processors. And various start-ups are working on AI chips of their own. NVIDIA was an early mover, but the AI chip market is turning into a full-blown gold rush.

You can now add Qualcomm to the list of companies pursuing this opportunity. On April 9, it announced the Qualcomm Cloud AI 100, a chip designed to accelerate AI inference processing in the cloud. The chip will start sampling to customers in the second half of this year, with a launch likely sometime in 2020.

Big promises

Qualcomm is targeting inference, which is the process of using an already-trained system on new data. For example, an AI system for identifying objects in images first needs to be trained on a large set of tagged images. That training process is computationally intensive and involves a large amount of data. Once training is complete, the system can be used to identify objects in brand-new images that weren't part of the training set. This inference process is still computationally intensive, but less so than training.
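To make the distinction concrete, here's a minimal sketch of the inference step in PyTorch, one of the frameworks the AI 100 is slated to support. A model that was trained elsewhere is simply loaded and run on a new image; no weights are updated. The specific model and file name are illustrative, not anything Qualcomm has published.

```python
# Inference with an already-trained model: load it, run it on new data.
import torch
from torchvision import models, transforms
from PIL import Image

# A model pretrained on a large set of tagged images (ImageNet).
model = models.resnet50(pretrained=True)
model.eval()  # inference mode: no training, no weight updates

# Standard preprocessing for a brand-new image the model has never seen.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("new_image.jpg").convert("RGB")  # hypothetical input file
batch = preprocess(image).unsqueeze(0)

with torch.no_grad():  # gradients are only needed for training
    logits = model(batch)

print(logits.argmax(dim=1).item())  # index of the predicted object class
```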

Because inference is computationally intensive, specialized chips designed at the hardware level for the task can be drastically more efficient than general-purpose processors like CPUs or even GPUs. Qualcomm claims that its AI 100 chip will provide a tenfold improvement in performance per watt compared to "the industry's most advanced AI inference solutions deployed today." The company is likely referring to GPUs when making that comparison.

The AI 100 will be built on a 7nm process node, likely from TSMC, and it will support popular machine learning frameworks and tools, including PyTorch, Glow, TensorFlow, Keras, and ONNX.
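ONNX support in particular is what would let a chip like this slot into existing workflows: a model trained in any of those frameworks can be exported to the ONNX interchange format, which an accelerator's toolchain can then compile for its own hardware. Qualcomm hasn't published its toolchain details, so the sketch below only shows the standard export step from PyTorch, with illustrative names.

```python
# Export a trained PyTorch model to ONNX so an inference accelerator's
# toolchain (hypothetically, the AI 100's) could compile and run it.
import torch
from torchvision import models

model = models.resnet50(pretrained=True)
model.eval()

# ONNX export traces the model with an example input of the right shape.
dummy_input = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy_input, "resnet50.onnx",
                  input_names=["input"], output_names=["logits"])
```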

A threat to NVIDIA

NVIDIA has had a lot of success selling GPUs for AI acceleration, but GPUs are still more general-purpose than chips like the AI 100 that are designed for a single task. The performance claims Qualcomm is making aren't outlandish, given that the AI 100 is likely similar to Google's TPU. Qualcomm's expertise lies in power-sipping mobile processors, so it could have an advantage when it comes to power efficiency.

Of course, Qualcomm still needs to get this chip launched and convince data center customers to choose it over the alternatives. NVIDIA is the market leader, and GPUs have become the standard solution. There are switching costs for customers who've already deployed a bunch of NVIDIA GPUs, so winning market share will take time.

But NVIDIA should certainly be worried about an onslaught of competition, especially given that sales in its data center segment are already slowing. GPUs aren't the be-all and end-all of AI acceleration, so the company's dominance may not last.

Given the meager details Qualcomm has provided so far, it's impossible to predict whether the AI 100 will be a success. But if the company can deliver on its performance promises, it could carve out a meaningful chunk of the AI acceleration market for itself.


Suzanne Frey, an executive at Alphabet, is a member of The Motley Fool's board of directors. Timothy Green has no position in any of the stocks mentioned. The Motley Fool owns shares of and recommends Alphabet (A shares), Alphabet (C shares), and NVIDIA. The Motley Fool owns shares of Qualcomm. The Motley Fool recommends Taiwan Semiconductor Manufacturing. The Motley Fool has a disclosure policy.