Google has strengthened its position in the global artificial intelligence infrastructure race after signing a multibillion-dollar cloud computing deal with Thinking Machines Lab, the AI startup founded by former OpenAI CTO Mira Murati.
The agreement will see Thinking Machines Lab rely on Google Cloud to power its AI systems, using advanced infrastructure built around Nvidia’s latest GB300 chips. The deal highlights the increasing demand for high-performance computing capacity as AI companies race to develop and deploy more powerful models at scale. (techcrunch.com)
Thinking Machines Lab, which has quickly become one of the most closely watched new players in the AI space, focuses on building advanced AI systems aimed at improving reasoning, reliability, and general intelligence capabilities. The partnership gives the company access to massive computing power, which is essential for training and running frontier AI models.
For Google Cloud, the deal is another major win in its effort to compete with Amazon Web Services and Microsoft Azure in the high-stakes cloud computing market. By securing large-scale AI startups as customers, Google strengthens its position as a key infrastructure provider for next-generation AI development.

The use of Nvidia’s GB300 chips is also significant. These chips are among the most advanced AI accelerators available, designed specifically for large-scale model training and inference workloads. Their integration into Google Cloud’s infrastructure underscores the deep dependency between AI companies and specialised chipmakers like Nvidia, which remains dominant in the AI hardware market.
The deal also reflects a broader trend in the industry: AI startups are no longer just software companies; they are becoming heavily infrastructure-dependent. Training cutting-edge models requires enormous computing resources, often costing hundreds of millions of dollars in cloud and hardware usage.
This has led to a growing ecosystem where cloud providers, chip manufacturers, and AI developers are increasingly interconnected. Companies like Google, Microsoft, and Amazon are not only competing to build AI tools but also to provide the underlying infrastructure that powers them.
For Thinking Machines Lab, the partnership provides both scale and stability. Access to Google Cloud’s global infrastructure allows the startup to focus on research and model development without being constrained by hardware limitations.

Mira Murati’s involvement also adds weight to the deal. As a former OpenAI executive, she brings significant experience in building large-scale AI systems, and her new venture is widely seen as a potential competitor in the next wave of foundational AI models.
Industry analysts say the agreement reflects how competitive the AI infrastructure market has become. Rather than building their own computing systems, many AI companies are choosing to partner with cloud providers that can deliver immediate access to massive GPU clusters.
At the same time, the deal reinforces Nvidia’s central role in the AI ecosystem. Even as cloud providers compete for dominance, they all rely on Nvidia’s chips to power their AI workloads, giving the company a strategic position at the heart of the industry.
The partnership also signals that the next phase of AI development will be defined not just by model innovation, but by access to computing power. Companies that can secure large-scale infrastructure deals are likely to gain a significant advantage in building more advanced and capable AI systems.
In this context, Google’s agreement with Thinking Machines Lab is more than just a cloud deal. It is a strategic move in the ongoing global competition to control the infrastructure layer of artificial intelligence.