Intel Corp. on Tuesday said Facebook Inc. is providing technical input for a coming chip specially designed for artificial intelligence, as the semiconductor giant moves to capitalize on a fast-growing market and aims a direct shot at rival Nvidia Corp.
Intel's new chip will be among the first of a new breed of processors designed from the ground up to accelerate the popular AI technique known as deep learning, which enables computers to recognize objects in photos, words in spoken statements, and other features that otherwise would require human judgment.
Called the Nervana Neural Network Processor, the chip is the fruit of Intel's acquisition last year of startup Nervana Systems. Intel expects to ship the initial version on a limited basis later this year and make it widely available next year through Intel Nervana Cloud, a cloud-computing service, and as an appliance that customers can install in their own data centers.
"We are thrilled to have Facebook in collaboration sharing their technical insights as we bring this new generation of AI hardware to market," said Intel CEO Brian Krzanich in a statement.
Intel said it is working with a select group of companies to fine-tune the chip. The company said the technology could contribute to advances in medical diagnoses, financial fraud detection, weather prediction, self-driving cars and other areas.
Estimates vary widely on the potential size of the market for AI-specific hardware in data centers. Karl Freund, an analyst at Moor Insights & Strategy, estimates the market is worth at least $500 million this year and could grow to as much as $9 billion by 2020.
Nvidia serves that market virtually single-handedly. Its chips were designed to process graphics but proved more efficient in some deep-learning tasks than Intel's conventional processors, such as its Xeon line.
Deep learning has emerged as an effective way for computers to find useful information in the floods of data washing over the internet and corporate networks, especially imagery, sounds, documents, and other data that isn't in strictly organized formats, such as spreadsheets and databases. However, it requires huge quantities of computing power to process immense stores of data.
With deep learning, computers study large volumes of test data for patterns, in a phase called training, and then apply what they've learned to make decisions about new data.
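The two phases described above can be sketched in a few lines of code. The model, data and parameters below are illustrative assumptions, not anything Intel or Facebook has disclosed; real deep-learning models have millions of parameters and train on vastly larger data sets.

```python
# Training data: inputs x paired with labels that follow y = 2x + 1.
data = [(0.0, 1.0), (1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]

w, b = 0.0, 0.0            # model parameters, learned during training
lr = 0.01                  # learning rate (step size for each update)

# Training phase: repeatedly nudge the parameters to reduce the error
# between the model's predictions and the labeled examples.
for _ in range(5000):
    for x, y in data:
        pred = w * x + b
        err = pred - y
        w -= lr * err * x  # gradient of squared error with respect to w
        b -= lr * err      # gradient of squared error with respect to b

# Inference phase: apply the learned parameters to data the model
# has never seen.
prediction = w * 10.0 + b
```

After training, the parameters settle near w = 2 and b = 1, so the model generalizes the pattern to new inputs.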
The Nervana NNP is designed to speed up the training phase by taking shortcuts specific to neural networks, the software structures that drive deep learning. For instance, training calculations can occur at low precision, saving processing power for further calculations. The chip is also designed to be ganged, so large numbers of NNPs can work together on a single task.
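The low-precision idea can be illustrated with a toy example. The sketch below quantizes a dot product, the basic operation inside neural networks, to signed 8-bit integers; the scheme, names and values are the editor's assumptions for illustration, not Intel's actual number format.

```python
def quantize(values, scale=127.0):
    """Map floats in [-1, 1] to signed 8-bit integers."""
    return [round(v * scale) for v in values]

def low_precision_dot(a, b, scale=127.0):
    """Dot product computed on 8-bit operands, rescaled back to float."""
    qa, qb = quantize(a, scale), quantize(b, scale)
    return sum(x * y for x, y in zip(qa, qb)) / (scale * scale)

a = [0.5, -0.25, 0.125]
b = [0.75, 0.5, -0.5]

exact = sum(x * y for x, y in zip(a, b))   # full-precision result
approx = low_precision_dot(a, b)           # reduced-precision result
```

The reduced-precision result tracks the exact one to within a fraction of a percent while the operands occupy a quarter of the storage of 32-bit floats, which is the trade-off that lets an accelerator spend its transistor budget on throughput rather than exactness.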
Intel declined to provide metrics for evaluating the new chip's performance. Late last year, the company announced a goal of achieving 100 times the training speed of graphics processors. Since then, however, Nvidia has multiplied the speed of its own chips.
"Intel will be competitive but is unlikely to have a huge advantage," Mr. Freund said.
Several other companies are working on chips designed to accelerate AI tasks. For example, Alphabet Inc.'s Google division has introduced two generations of AI chips it calls Tensor Processing Units, or TPUs, for use in Google's own data centers.
Write to Ted Greenwald at Ted.Greenwald@wsj.com
(END) Dow Jones Newswires
October 17, 2017 14:14 ET (18:14 GMT)