Investors fearing that the trillion-dollar artificial intelligence buildout echoes the turn-of-the-century dot-com crash are looking at the wrong metrics. Gavin Baker, managing partner at Atreides Management, argued that unlike the telecommunications glut of 2000, the current demand for AI compute is tangible, immediate, and arguably under-supplied.
Speaking with Andreessen Horowitz general partner David George, Baker drew a sharp contrast between today’s data center expansion and the “dark fiber” phenomenon that defined the earlier crash. In the late 1990s, telecommunications companies laid thousands of miles of fiber-optic cable that never saw a single photon of data. At the peak of that bubble, 97% of the fiber buried in the ground remained unlit and useless.
Today’s infrastructure faces the opposite problem.
“There are no dark GPUs.”
— Gavin Baker, Managing Partner at Atreides Management
The demand for processors is so intense that the primary bottleneck is technical failure, not a lack of customers. Baker noted that one of the biggest issues in current training runs is simply that graphics processing units are melting under the workload.
Scale of Investment
The scale of investment is admittedly historic. George pointed out that the U.S. has built more data center capacity in the last three years—measured in inflation-adjusted dollars—than the entire interstate highway system, which took four decades to complete. Yet, despite this capital intensity, the return on invested capital for the largest tech companies has actually increased by roughly 10 percentage points during this spending spree.
3 years
to build more data center capacity than four decades of interstate highways
Market observers have raised concerns about “round-tripping,” where tech giants invest in AI startups that immediately use that capital to buy cloud services back from the investor. Baker dismissed this as a sideshow. The real driver of the spending is a brutal competitive dynamic, particularly between Nvidia and Google.
The Nvidia-Google Proxy War
Google represents the only true alternative to Nvidia’s dominance today. Its custom Tensor Processing Units (TPUs) allow the search giant to train frontier models without relying entirely on external hardware. This forces Nvidia to act strategically, supporting competing labs like OpenAI to ensure Google does not monopolize the field.
A proxy war fought with silicon and venture capital.
This competition will force a restructuring of business models across the software industry. The era of 90% gross margins for software-as-a-service (SaaS) companies may be ending. Baker suggested that AI applications require massive, ongoing compute power to function, which will naturally compress margins.
Context
Investors should not view margin compression as a failure. Baker compared the shift to the transition from on-premise software to the cloud: margins dropped, but the utility and scale of the products increased. If an AI product maintains high legacy margins, it likely isn’t doing the heavy lifting for the customer. Real utility costs money to run.
The Application Layer Under Siege
The application layer faces its own “Pearl Harbor” moment. Baker described the launch of ChatGPT as a surprise attack on Google’s core search monopoly. While incumbents hold the distribution advantage, the shift toward “agentic” workflows—where an AI books a flight rather than just displaying a link—threatens the traditional advertising model.
The industry now sits in a paradox. The infrastructure buildout is rational and profitable, yet the downstream business models are in flux. But for now, the hardware is not sitting idle in the dark.
