Google splits its AI chip in two, and Nvidia hears the door click
Analysis

Google used its Cloud Next event on April 22 to announce two new custom AI chips: TPU 8t for training models and TPU 8i for running them (inference, the cost of actually serving a model to a user). The numbers are the point: 2.8 times the training performance of last year's Ironwood chip at the same price, and 80 percent better performance per dollar on inference. Splitting training and serving into separate silicon is the move a company makes when inference has become the real bill. Combine this with Anthropic's expanded multi-gigawatt TPU order this week, and Google has quietly built the only credible second source for frontier compute. Nvidia still owns the category. It no longer owns the roadmap.