Elon Musk's xAI Corp. has assembled an AI training system named Colossus, built on 100,000 Nvidia graphics cards. Musk announced the milestone in a recent X post, stating that Colossus is now online and calling it the "most powerful AI training system in the world."
The project follows Musk's founding of xAI last year to compete with OpenAI, amid an ongoing legal dispute over alleged contract breaches. In May, xAI raised $6 billion at a $24 billion valuation to fund its AI development efforts, which center on its Grok line of large language models. Colossus purportedly surpasses the U.S. Energy Department's Aurora system, currently the top-ranked AI supercomputer.
Colossus is equipped with Nvidia's H100 GPUs, which ranked as the chipmaker's most powerful AI processor until the H200's debut. Musk plans to double the machine's chip count to 200,000, including 50,000 H200s, which offer faster data transfer and thereby speed up AI model training. That added capacity could support language models considerably more advanced than Grok-2, which was trained on only 15,000 GPUs. Some of Colossus' components may have originally been earmarked for Tesla: Musk reportedly redirected Nvidia hardware from the carmaker to bolster xAI. With the new infrastructure in place, xAI aims to release a successor to Grok-2 by the end of the year.