xAI’s Colossus, the World’s Most Powerful AI Supercomputer, Grows 10 Times Stronger

The Financial Times reports that Elon Musk’s AI startup xAI plans to expand its supercomputer, Colossus, tenfold to exceed one million GPUs, aiming to close the gap with competitors like Google, OpenAI, and Anthropic.

Colossus, built in just three months earlier this year, currently comprises 100,000 Nvidia AI accelerators, making it the world’s largest supercomputer by this metric. It is used to train the large language models (LLMs) behind xAI’s chatbot Grok, which currently has fewer users than OpenAI’s ChatGPT and Google’s Gemini.

The project to expand the Memphis, Tennessee facility tenfold is already underway, with support from Nvidia, Dell, and Supermicro. A “dedicated task force” has been formed to oversee the initiative.

The race to secure AI accelerators, or access to the data centers that house them, has intensified as training and operating LLMs demand immense computing power. For example, OpenAI has received over $14 billion in funding from Microsoft, while Anthropic, the creator of the Claude chatbot, has secured $8 billion from Amazon. The e-commerce giant has also committed to giving its partner access to a new cluster featuring more than 100,000 of its proprietary AI accelerators.

In contrast, Musk opted not to partner with established companies but to build xAI from scratch. Recently, the company raised an additional $5 billion, valuing it at $45 billion. This positions xAI as a rival to OpenAI, which Musk co-founded in 2015 but later sued to prevent its transformation into a for-profit enterprise.

Nvidia CEO Jensen Huang previously noted that facilities like Colossus typically take three years to build, not three months.

“We’re not just leading; we’re accelerating progress at unprecedented speeds while ensuring power grid stability with Megapack technology,” said Brent Mayo, xAI’s Senior Manager for Construction and Infrastructure, responding to allegations of permit manipulation.
