A new report claims that Microsoft is developing its own AI chips that can be used to train large language models, thereby reducing reliance on NVIDIA chips.
According to the report, published by The Information, Microsoft has been developing the chips in secret since 2019, and they are now available to a select group of Microsoft and OpenAI employees, who are testing their performance on the latest large language models, such as GPT-4.
NVIDIA is the leading supplier of AI server chips that companies are racing to buy, and OpenAI is estimated to need 30,000 NVIDIA A100 GPUs to commercialize ChatGPT.
Nvidia’s latest H100 GPUs are selling for over $40,000 on eBay, demonstrating the demand for cutting-edge chips that can help popularize AI software.
While NVIDIA rushes to produce as many chips as possible to meet demand, Microsoft is reportedly accelerating work on a project codenamed Athena to build its own AI chips, which it hopes will help it save money in its push to adopt AI across its services.
While it is unknown whether Microsoft intends to supply the chips to customers of its Azure cloud service, the US software giant is said to be planning to make its AI chips more widely available within Microsoft and OpenAI early next year.
According to the report, Microsoft also has a chip roadmap that spans several future generations.