
Following OpenAI, another AI company is expected to embark on the path of developing its own chips.

Date: 2026.04.13
Recent media reports indicate that the artificial intelligence company Anthropic is evaluating the possibility of developing its own AI chips in order to ease the current tight supply and high cost of AI chips. Although the plan is still at an early stage, it reflects a broader industry trend: from Google and Amazon to Meta, Microsoft, and even OpenAI and Anthropic, major tech companies are waging an "arms race" centered on independent computing power. Developing chips in-house is no longer just a technological gamble; it is a strategic necessity for reducing dependence, optimizing costs, and building long-term barriers.

Computing power demand has surged, and Anthropic may develop its own AI chips

Anthropic is an American startup specializing in artificial intelligence safety and research. It was founded in 2021 and is headquartered in San Francisco. Its main products are the Claude model family. Through partnerships with platforms such as Amazon AWS and Google Cloud, Anthropic provides AI model integration services to enterprises in healthcare, finance, and technology. In September 2025 the company closed a 13 billion US dollar Series F round at a valuation of 183 billion US dollars; in February 2026 it closed a Series G round at a valuation of 380 billion US dollars.
As commercialization of the Claude model accelerates, Anthropic's demand for computing power has risen sharply. The company recently reached a 3.5-gigawatt computing power cooperation agreement with Google and Broadcom, but the stability of its chip supply still faces challenges.
Insiders disclose that Anthropic's self-developed chip project is in an early evaluation stage and no dedicated team has been formed; the company may ultimately choose to continue purchasing external chips.

Industry experts believe that if the self-development plan goes ahead, Anthropic may adopt a model similar to Broadcom's, customizing a TPU-style chip architecture around the characteristics of the Claude model, with the goal of achieving an energy efficiency ratio 3-5 times that of a general-purpose GPU. The technical route would rely on the contract manufacturing systems of Google or Amazon and would not, for the time being, involve an independent manufacturing process.
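To give a sense of scale, the toy calculation below shows how a 3-5x energy efficiency (throughput-per-watt) advantage would translate into electricity cost at fleet scale. Every figure is invented for illustration; none come from Anthropic, Google, or Broadcom.

```python
# Back-of-envelope: annual electricity cost of serving a fixed inference workload.
# Every number below is a hypothetical assumption, not a real figure.

def annual_power_cost(tokens_per_sec_per_watt, workload_tokens_per_sec,
                      usd_per_kwh=0.08):
    """Electricity cost per year to sustain a given token throughput."""
    watts = workload_tokens_per_sec / tokens_per_sec_per_watt
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * usd_per_kwh

workload = 5_000_000           # tokens/s served fleet-wide (assumed)
gpu_eff = 20.0                 # tokens/s per watt on a general-purpose GPU (assumed)
custom_eff = gpu_eff * 4       # midpoint of the claimed 3-5x efficiency range

gpu_cost = annual_power_cost(gpu_eff, workload)
custom_cost = annual_power_cost(custom_eff, workload)
print(f"GPU:    ${gpu_cost:,.0f}/year in electricity")
print(f"Custom: ${custom_cost:,.0f}/year in electricity")
```

Under these assumptions a 4x efficiency gain cuts the power bill by 75 percent; at gigawatt scale the same ratio applies to a far larger base.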

Google, Amazon, Meta, Microsoft, OpenAI and others have already adopted self-developed chips

In terms of technical routes, each company has its own focus. Google has been developing the TPU series for over a decade, and the latest TPU v7p has been adapted for multimodal training of the Gemini large model. Amazon's Trainium series focuses on AI training, and the upcoming v3 version will integrate HBM memory, doubling bandwidth. Meta's MTIA series mainly targets inference scenarios: v2 is in mass production and v3 is expected to launch in 2026. Microsoft's Maia series has been delayed but is still under active development. OpenAI has joined forces with Broadcom and TSMC, planning to deploy its first 3-nanometer inference chip in the second half of 2026, with a single chip capable of supporting 10 gigawatts of computing power.
It is clear that both cloud providers and AI companies are trying to move beyond sole reliance on Nvidia's general-purpose GPUs, turning instead to customized chips for better energy efficiency and a more controllable supply chain.

Three factors drive the tech giants' rush toward self-developed chips

From Anthropic's exploratory move, to OpenAI's 3-nanometer custom chips, to the continuous iteration by Google and Amazon, this battle for computing autonomy may look like each company acting independently, but a common logic lies behind it. Why have the tech giants all abandoned simply buying chips and instead invested heavily in developing their own? Industry experts point to three factors.
The first is cost. Over the past few years, Nvidia GPU prices have soared amid a supply shortage, and availability has been extremely unstable. For AI companies that handle massive volumes of inference requests every day, the marginal cost of each query is crucial to the business model. Self-developed chips can strip out the redundant functions of general-purpose chips and optimize fully for a company's own models.
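The marginal-cost argument can be sketched with a toy amortization model. The chip prices, lifetimes, and throughput figures below are invented purely for illustration and do not describe any real product.

```python
# Toy model: hardware cost amortized over each inference query served.
# Chip prices, lifetimes, and throughputs are invented for illustration.

def cost_per_query(chip_price_usd, lifetime_years, queries_per_sec):
    """Chip purchase price spread across every query it serves in its lifetime."""
    seconds = lifetime_years * 365 * 24 * 3600
    return chip_price_usd / (queries_per_sec * seconds)

# Hypothetical comparison: a pricey general-purpose GPU vs. a cheaper
# custom ASIC that drops unused features and is tuned to one model family.
gpu_cost = cost_per_query(chip_price_usd=30_000, lifetime_years=3, queries_per_sec=50)
asic_cost = cost_per_query(chip_price_usd=12_000, lifetime_years=3, queries_per_sec=80)

print(f"GPU:  ${gpu_cost:.7f} per query")
print(f"ASIC: ${asic_cost:.7f} per query")
```

Even fractions of a cent per query compound across billions of daily requests, which is why a chip tuned to one model family can shift the economics of an entire service.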
The second is supply chain security. Changes in the international situation have increased the uncertainty of chip supply; for a technology giant, having key chips "cut off" could affect the entire business system. Although self-developed chips do not mean complete self-production, they at least provide alternative options and bargaining power at the design level.
The third, deeper reason is that chips are becoming the technological high ground in the AI competition.
Deep software-hardware co-design is needed to unlock the full potential of algorithms. The adaptation of Google's TPUs to Gemini, and OpenAI's chip designs optimized around its own models, show that whoever integrates algorithms and chips best can open a gap in inference speed and energy efficiency. Self-developed chips are thus not only a hardware project but a crucial step toward building technological barriers, creating differentiated experiences, and leading the future ecosystem.
In conclusion, the tech giants' self-developed chips are not a sudden impulse but the product of cost pressure, supply chain risk, and technological ambition. When a unicorn like Anthropic also starts seriously considering making its own chips, this wave of computing power autonomy may be just getting started.