AI Field Notes by Michael Nemtsev

Issue #5

Towering compute infrastructure looms over displaced workers below, their scattered tools suggesting labor displacement while safety structures remain visibly fragile beneath the system's scale.

The AI industry is openly admitting what used to stay buried: the real money is in building the infrastructure underneath, not the models on top. Google is splitting its custom chips to undercut Nvidia on inference costs, Anthropic locked in a decade of power and silicon from Google, and Bezos is betting $10 billion on physical-world AI because language models are becoming a cost war. China is already shipping more humanoid robots than the US because it has the factories and policy support to manufacture at scale, while US companies are still filming gig workers doing household chores to generate training data. The gap between what is technically possible and what is actually deployable is collapsing fast, and the winners are the ones with chips, power grids, factories, and capital to play for a decade. If you have been putting off a conversation about which parts of your job an AI tool could do next year, this is the week to stop deferring.

Google splits its AI chip in two, and Nvidia hears the door click

Google used its Cloud Next event on April 22 to announce two new custom AI chips, TPU 8t for training models and TPU 8i for running them (inference, the cost of actually serving a model to a user). The numbers are the point: 2.8 times the training performance of last year's Ironwood chip at the same price, and 80 percent better performance per dollar on inference. Splitting training and serving into separate silicon is the move a company makes when inference has become the real bill. Combine this with Anthropic's expanded multi-gigawatt TPU order this week, and Google has quietly built the only credible second source for frontier compute. Nvidia still owns the category. It no longer owns the roadmap.

Industry · Anthropic

Anthropic bets its next model on someone else's chips

Anthropic (the AI lab behind Claude) confirmed this week that its expanded deal with Google and Broadcom (the chip designer behind Google's TPUs) will deliver multiple gigawatts of custom tensor processing unit capacity starting in 2027, with 3.5 gigawatts already penciled in. For context, a gigawatt is roughly what a mid-size US city pulls at peak. A lab that sold itself on safety is now locking in a decade of electricity and silicon from a rival cloud. Nvidia hardware stays in the mix, which is the tell. Frontier labs no longer trust any single supplier with their training runs, and the capex required to keep up has moved past what equity rounds alone can float. Compute is the constraint. Everything else is commentary.

Agents · Industry · TechCrunch

Musk buys an option on the coding layer

Cursor, the AI coding startup, was hours from closing a $2 billion round at a $50 billion valuation when SpaceX walked in with a different deal. Per filings disclosed April 22, SpaceX paid a $10 billion collaboration fee now and took an option to buy the company outright for $60 billion later this year. SpaceX already owns xAI (Elon Musk's AI lab, maker of the Grok chatbot), so the rocket company is now the holding structure for a coding agent, a foundation model lab, and a satellite network. Call it vertical integration, or call it a founder using one cash machine to bankroll the next. Either way, the AI coding tool that roughly three million developers use at work may soon report to the same person as Starlink. That is a governance question dressed as a term sheet.

Agents · 9to5Mac

OpenAI reinvents Microsoft Recall and hopes nobody notices

On April 21 OpenAI quietly added Chronicle to its Codex desktop app, a background feature that takes continuous screenshots of the user's computer, extracts context from them, and builds persistent memory so the coding agent understands what you are working on. Microsoft tried this in 2024 with Recall and got flayed by security researchers until it pulled the feature. OpenAI's framing is gentler (the captures live on device, the memories are opt-in) and the backlash has been milder, partly because developers volunteered for it. The broader shift is worth naming. Agents are moving from chat windows to ambient observers, watching what you do so they can do more of it for you. Useful when it works. A subpoena magnet when it does not. Enterprise legal teams have not caught up to what a log of every screen on every laptop looks like in discovery.

Industry · Evals · Above the Law

Courts stop laughing

On April 21, Sullivan & Cromwell, one of the oldest white-shoe law firms in the country, filed an emergency letter begging a judge not to sanction it for AI-hallucinated citations in a recent brief. Hallucination here means the AI tool invented case law that does not exist. The filing joins a rising pile. US courts imposed at least $145,000 in sanctions on lawyers for AI fabrications in the first quarter of 2026, including a $96,000 personal fine against a San Diego attorney in early April, the largest such penalty recorded. Judges have moved from embarrassment to misconduct in under a year. The quiet shift: a major firm now sees public filing of an emergency "please don't sanction us" letter as the safer play than quietly editing and hoping. That is a compliance regime forming in real time, one fine at a time.

Industry · Models · Bloomberg

Bezos is buying a physics simulator, basically

Jeff Bezos is close to finalizing a $10 billion round for Project Prometheus, his stealth AI lab, at a $38 billion valuation. JPMorgan and BlackRock are in. The pitch is physical-world AI: models that understand manufacturing, aerospace, robotics, and drug discovery rather than generating another chatbot. That puts Prometheus on a collision course with Nvidia's Cosmos (a physics simulator for training robots), Google DeepMind's robotics push, and the dozens of Chinese humanoid startups that shipped more units than US rivals in Q1. A $38 billion valuation before a single public product is also a tell. Capital is crowding into the next frontier because the language-model frontier is starting to look like a cost war. If the next decade of AI is about physical work, the companies that win it will need both a foundation model and a factory. Bezos owns pieces of both.

Industry · CNBC

China is quietly winning the robot race

China shipped more humanoid robots than the US in Q1 2026, according to a CNBC brief published April 21. There are over 100 humanoid startups in China, and AI2 Robotics alone just hit a $2.93 billion valuation with backing from Chinese, Singaporean, and Middle Eastern investors. No US money. Chinese firms have something US rivals lack: a domestic manufacturing base that can produce thousands of robot bodies at industrial prices, and policy tailwinds that treat humanoids as a strategic sector the way semiconductors were treated a decade ago. On April 19 twenty Chinese humanoids ran a half-marathon in Beijing as a showcase. Meanwhile US humanoid programs remain mostly demo videos and series B pitches. The gap is less about model capability than about who can actually build and deploy at scale. That asymmetry is getting larger, not smaller.

Industry · Agents · MIT Technology Review

The gig economy now includes teaching a robot to microwave lunch

MIT Technology Review reported on April 21 that a new class of gig apps pays people in cryptocurrency to film themselves doing household tasks (putting food in a bowl, microwaving it, taking it out) to generate training data for humanoid robots. Robotics requires orders of magnitude more physical-world data than chatbots ever needed in text, and no scraped corpus exists. So companies are paying gig workers a few dollars per clip. The analogy writes itself. Mechanical Turk labeled the images that trained today's language models. This is Mechanical Turk with a kitchen. Workers generating the data that will automate their own eventual jobs is a pattern with precedent, but the humanoid case is more direct than most. Each clip of you loading a dishwasher trains the thing that may load the next one.

Industry · HR Executive

UKG actually says the quiet part

On April 15, UKG (the HR software giant formed from the 2020 merger of Ultimate Software and Kronos, used to process payroll for tens of millions of US workers) laid off 950 people, about 6 percent of staff. The memo from UKG's senior director of global public relations cited "changes in technology driven by AI" as the reason. Most companies doing the same thing dress it up as "optimizing for efficiency" or "AI-native transformation." UKG wrote it plainly, which is unusual enough to be notable. The deeper irony: UKG sells software that helps other employers manage their workforces, now including tools for AI-driven scheduling and performance tracking. The company is both the supplier and the case study. Of the nearly 80,000 tech jobs cut in Q1, trackers attribute roughly half to AI. UKG's memo is what the other half sounds like before the PR edit.

Models · Industry · NVIDIA

Nvidia plants a flag in quantum before it is real

On April 14 Nvidia released Ising, the first family of open AI models built specifically to run quantum computers. The models handle calibration (tuning fragile quantum hardware) and error correction (the bigger problem, because quantum bits lose their state constantly and need software to keep them honest). Quantum stocks rallied all week. The models are free. That last part is the move. Nvidia's strategy across AI, robotics, and now quantum is to give away the software layer, make it the default everyone trains on, and keep selling the chips underneath. Ising means any quantum startup working on hardware now has a reason to optimize for Nvidia's tooling before their machines exist. Classic platform play, running a decade early. Useful quantum computing is still years out. Nvidia wants to own the on-ramp before the road opens.

Subscribe for full archive access

Every past issue, weekly deep dives, and the full back catalogue — delivered free.

