For three years, Apple looked like it was losing the AI race. The company that pioneered Siri watched ChatGPT displace voice assistants overnight. It built no flagship frontier model, made no $500 billion compute commitment, and stayed on the sidelines while competitors burned capital at rates that would unsettle sovereign wealth funds.
But according to analysis from Álvaro del Rocha, that apparent loss may have positioned Apple for an unexpected win—not because Apple's strategy was prescient, but because the entire premise of the AI race has inverted.
The commoditization of intelligence is arriving faster than the leading labs anticipated. Gemma4, Google's open-weight model designed to run on phones, scores 85.2 percent on MMLU Pro and matches Claude Sonnet 4.5 Thinking on the Arena leaderboard. It logged 2 million downloads in its first week. Models that would have been state-of-the-art eighteen months ago now run on laptops. The gap between frontier models, runners-up, and open-source alternatives is collapsing.
Meanwhile, the companies that bet everything on owning intelligence and infrastructure are facing severe pressure. OpenAI raised at a $300 billion valuation, then shut down Sora—the video product it had positioned as a creative industry flagship—because the product was running at roughly $15 million per day in costs against $2.1 million in daily revenue. Disney had signed a three-year licensing deal for Sora content and was finalizing a $1 billion equity stake. When Sora died, so did the billion-dollar investment.
On infrastructure, OpenAI signed non-binding letters of intent with Samsung and SK Hynix for up to 900,000 DRAM wafers per month—roughly 40 percent of global output. Micron, reading the demand signal, dismantled its 29-year-old Crucial consumer memory brand to redirect all capacity toward AI customers. Then Stargate Texas was cancelled after OpenAI and Oracle could not agree on terms. The demand that had justified Micron's entire strategic pivot vanished, and Micron's stock crashed.
As raw model capability becomes abundant, companies that locked themselves into massive fixed costs face a different problem: the models alone cannot hold a moat. Anthropic recognized this early. The company is aggressively releasing tools designed to capture users at the workflow level—Claude Code for developers, Claude Cowork for teams, Claude Managed Sessions for agent orchestration. The strategy is clear: if the model will not hold the moat, lock users in at the usage layer and make switching painful.
But this approach still demands heavy subsidy. One analysis found a max-plan subscriber consuming $27,000 worth of compute against a $200 monthly subscription. The labs are subsidizing the very demand they cite to justify their burn rates—for now.
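To make the subsidy math concrete, the figures cited above imply the following cost-to-revenue multiples. This is a back-of-the-envelope sketch using only the numbers in the article; the function name is illustrative:

```python
def subsidy_multiple(cost: float, revenue: float) -> float:
    """Dollars of cost incurred per dollar of revenue earned."""
    return cost / revenue

# Sora: roughly $15 million per day in costs against $2.1 million
# in daily revenue (figures as cited in the article).
sora = subsidy_multiple(15_000_000, 2_100_000)

# A max-plan subscriber: ~$27,000 of compute against a $200 monthly fee.
subscriber = subsidy_multiple(27_000, 200)

print(f"Sora burned about {sora:.1f}x its revenue")          # ~7.1x
print(f"Heavy subscriber consumed {subscriber:.0f}x the fee")  # 135x
```

Every dollar of Sora revenue cost about seven dollars of compute; the heaviest subscribers consumed more than a hundred times what they paid.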
Apple, by contrast, has spent almost nothing on AI infrastructure and has not subsidized user token burn. The company has accumulated so much undeployed cash that it has increased its stock buybacks. While competitors locked themselves into billion-dollar monthly burn rates, Apple maintained optionality.
In a race where intelligence is becoming a commodity and context is becoming scarce, the company that spent the least on the wrong bets may have maneuvered itself into the strongest position. Not because Apple won the AI race. But because the race itself may be ending.