With a systems approach to chips, Microsoft aims to tailor everything ‘from silicon to service’ to meet AI demand


Tucked away on Microsoft’s Redmond campus is a lab full of machines probing the basic building block of the digital age: silicon. There, a multi-step process meticulously tests the silicon, using a method Microsoft engineers have been refining in secret for years.

Today at Microsoft Ignite, the company unveiled two custom-designed chips and integrated systems that resulted from that journey: the Microsoft Azure Maia AI Accelerator, optimized for artificial intelligence (AI) tasks and generative AI, and the Microsoft Azure Cobalt CPU, an Arm-based processor tailored to run general-purpose compute workloads on the Microsoft Cloud.

The chips represent the last piece of the puzzle for Microsoft to deliver infrastructure systems – everything from silicon choices, software and servers to racks and cooling systems – that have been designed from top to bottom and can be optimized with internal and customer workloads in mind.

The chips will start to roll out to Microsoft’s datacenters early next year, initially powering the company’s services such as Microsoft Copilot and Azure OpenAI Service. They will join an expanding range of products from industry partners to help meet exploding demand for efficient, scalable and sustainable compute power, and the needs of customers eager to take advantage of the latest cloud and AI breakthroughs.