Facebook, Microsoft Roll Out New AI Servers

To accommodate the immense processing needs of their artificial intelligence projects, Facebook and Microsoft are upgrading their data centers with more powerful GPUs.

Facebook on Wednesday announced a top-to-bottom refresh of the hardware that powers its worldwide data centers. The company's new AI processing platform, nicknamed Big Basin, can train neural network models that are 30 percent larger than those its Big Sur predecessor could handle.

At the heart of each Big Basin server are eight Nvidia Tesla P100 GPU accelerators, each built on Nvidia's latest Pascal architecture and packing 16GB of memory. Compared with Big Sur's Maxwell-based GPUs and their 12GB of memory, the spec boost lets Facebook engineers wring more performance out of every watt of energy consumed.

Facebook is obsessed with increasing the power efficiency of its data centers, which are typically located in cool, dry climates and recycle the server exhaust to regulate the temperature of their buildings.

Meanwhile, each of Microsoft's new AI servers also includes eight Tesla P100 GPUs, which the company is rolling out as part of its Project Olympus data center overhaul.

Both the Microsoft and Facebook servers are part of the Open Compute Project, so other companies can copy and modify their designs.

As the world's largest social network, Facebook also has immense storage needs, so in addition to the new AI machines, it is rolling out upgraded storage servers that the company designed in-house. They're "like a tub," according to Facebook engineering manager Eran Tal, who explained that their vertical drive configuration makes them more thermally efficient.

This article originally appeared on PCMag.com.