Jim Chanos, a prominent short seller, is doubling down on his bet against legacy and modern data centers, arguing that their low margins, high capital costs, and rapid GPU depreciation make them poor investments, especially as profits from AI are expected to flow from chip production rather than data center infrastructure. He warns of a potential market contraction similar to the dot-com bust and highlights risks such as unprofitable AI companies and high debt levels among hyperscalers.
The article discusses potential threats to Nvidia's dominance in AI hardware, highlighting how major companies like Alphabet, Amazon, and Microsoft are developing their own AI chips and infrastructure, which could reduce Nvidia's market share in the future.
Michael Burry accuses major tech companies and hyperscalers of artificially inflating earnings by understating depreciation expenses on AI infrastructure, which he argues could significantly overstate reported profits from 2026 to 2028, amid ongoing concerns about accounting practices in the tech industry.
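The mechanics behind Burry's claim are simple depreciation arithmetic: the longer the assumed useful life of an asset, the smaller the annual depreciation charge, and the higher the reported profit. A minimal straight-line sketch with purely hypothetical figures (the fleet cost and lifespans below are illustrative, not from the article):

```python
def annual_straight_line_depreciation(cost: float, useful_life_years: int) -> float:
    """Straight-line depreciation: spread an asset's cost evenly over its useful life."""
    return cost / useful_life_years

# Hypothetical example: a $10B GPU fleet.
fleet_cost = 10_000_000_000

# Depreciating over 3 years (a fast hardware refresh cycle) versus
# 6 years (a stretched schedule) changes the annual expense:
expense_3yr = annual_straight_line_depreciation(fleet_cost, 3)
expense_6yr = annual_straight_line_depreciation(fleet_cost, 6)

# The difference in annual expense flows directly into reported operating profit.
overstatement_per_year = expense_3yr - expense_6yr
print(f"Annual expense over 3 years: ${expense_3yr:,.0f}")
print(f"Annual expense over 6 years: ${expense_6yr:,.0f}")
print(f"Annual profit boost from the longer schedule: ${overstatement_per_year:,.0f}")
```

Doubling the assumed lifespan halves the annual charge, so every extra year added to a depreciation schedule boosts reported earnings without any change in cash flows, which is exactly the pattern Burry says to watch for.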
Oracle's stock surged 41% after strong Q1 results and optimistic cloud growth forecasts, positioning it as a rising competitor among hyperscalers like Amazon, Microsoft, and Google, driven by demand for AI infrastructure and a large backlog of contracted revenue, though concerns about funding and sustainability remain.
Increased capital expenditures on data centers by major hyperscale cloud companies signal strong growth prospects for Nvidia and other tech suppliers, with analysts expecting Nvidia to report positive second-quarter earnings and maintaining a buy rating, supported by rising AI-related infrastructure spending.
Microsoft is developing a new AI chip, Athena, designed to handle large language model training, which could be available for use within the company and OpenAI as early as next year. Experts say the move won't threaten Nvidia, but it does signal hyperscalers' growing need to develop their own custom silicon. That need extends beyond training to chips that accelerate machine learning inference, where hyperscalers see customized silicon as a way to serve their customers' inference workloads.