OpenAI, the organization behind ChatGPT, is considering developing its own artificial intelligence (AI) chips and may even acquire a company to accelerate the effort. According to sources cited by Reuters, internal discussions suggest that OpenAI aims to strengthen its hardware supply chain amid surging global demand for the chips essential to training AI models.
Currently, OpenAI relies heavily on Nvidia for its chip supply, using Nvidia's powerful processors to run its operations. Since 2020, these chips have powered the Microsoft supercomputer on which OpenAI trains its advanced AI models. Nvidia dominates the AI chip market: it shipped roughly 900 tons of its flagship H100 units in the second quarter alone, according to research from Omdia.
The urgency for OpenAI to explore alternatives has been underscored by CEO Sam Altman's public concerns about GPU availability. In a now-deleted blog post recounting a discussion with Raza Habib, CEO of London-based AI firm Humanloop, Altman attributed the reliability and speed issues of OpenAI's API to "GPU constraints." The scarcity of GPUs not only limits access but also drives up operational costs: OpenAI is reported to spend hundreds of thousands of dollars daily to keep ChatGPT running.
While OpenAI has not officially confirmed any chip-development plans, the potential move reflects a strategic effort to mitigate its current supply chain challenges.
In a related development, Microsoft has been working on its own custom chips, codenamed Athena, with a dedicated team of approximately 300 engineers. These chips are expected to be used by both Microsoft and OpenAI as early as next year. Rival Meta is similarly building custom silicon for its internal AI models, such as Llama 2, with its MTIA chips optimized specifically for Meta's own workloads.
As the landscape for AI technology evolves, companies are racing to enhance their infrastructure capabilities, paving the way for future advancements in AI applications.