News In Brief Technology and Gadgets

Google DeepMind CEO Demis Hassabis Flags AI Memory Shortage Risk

21 Feb 2026

News Synopsis

Google’s latest release, Gemini 3.1 Pro, represents a major step forward in generative AI performance. The model has demonstrated superior results in certain benchmarks compared to Claude Opus 4.6 from Anthropic, highlighting the intensifying competition among leading AI labs.

As AI firms push toward increasingly larger and more capable models, computational demands have surged dramatically. Training advanced AI systems now requires thousands of high-performance GPUs and massive data center capacity.
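To make the scale concrete, here is a rough back-of-envelope sketch of why memory becomes the constraint. All figures below (parameter counts, bytes per parameter, the optimizer-state multiplier, and the 80 GB of HBM per GPU) are illustrative assumptions for a generic large model, not numbers reported in this article.

```python
import math

# Hypothetical estimate of memory needed just to HOLD a large model
# during training. All constants are illustrative assumptions.

def training_memory_gb(params_billions: float,
                       bytes_per_param: int = 2,        # bf16 weights (assumed)
                       optimizer_multiplier: float = 6.0) -> float:
    """Rough GB for weights + gradients + optimizer state.

    A ~6x multiplier on top of the bf16 weights is a common rule of
    thumb covering fp32 master weights, Adam moment buffers, and
    gradients; real systems vary widely.
    """
    bytes_total = params_billions * 1e9 * bytes_per_param * (1 + optimizer_multiplier)
    return bytes_total / 1e9

def gpus_needed(params_billions: float, hbm_per_gpu_gb: float = 80.0) -> int:
    """Minimum GPU count whose combined HBM can store the training state."""
    return math.ceil(training_memory_gb(params_billions) / hbm_per_gpu_gb)

# A hypothetical 1-trillion-parameter model:
print(training_memory_gb(1000))  # 14000.0 GB of training state
print(gpus_needed(1000))         # 175 GPUs just to store it, before any batch data
```

Note that this counts only storage of the model state; activations, data batches, and the redundancy needed for parallel training multiply the real hardware requirement well beyond this floor, which is why training clusters run to thousands of GPUs.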

The Growing Memory Chip Shortage

Rising Demand, Limited Supply

In recent weeks, discussions about a memory chip shortage have intensified. AI data centers require vast quantities of GPUs and high-bandwidth memory to train and deploy models effectively. This surge in demand has tightened global supply chains, contributing to higher prices for various electronic devices, including smartphones.

Google DeepMind chief Demis Hassabis expressed concern that this shortage could significantly hinder AI innovation.

“You need a lot of chips to be able to experiment on new ideas at a big enough scale that you can actually see if they're going to work.”

He described this situation as a potential “choke point,” suggesting that insufficient access to memory chips may restrict experimentation and slow technological breakthroughs.

Industry-Wide Pressure on Chip Supply

AI Companies Competing for Resources

The race to build larger and more capable AI systems has intensified demand for advanced hardware. Previously, Mark Zuckerberg, CEO of Meta, noted that AI researchers want "the most chips possible."

This industry-wide competition for GPUs and memory chips is creating structural supply challenges. As companies strive to expand training clusters and inference capacity, hardware bottlenecks are emerging as a limiting factor.

Even Google Is Not Immune

Proprietary TPUs Offer Partial Relief

Google designs and manufactures its own Tensor Processing Units (TPUs), giving it some insulation from reliance on third-party suppliers like Nvidia. However, this vertical integration does not fully eliminate supply vulnerabilities.

Hassabis acknowledged:

“It still, in the end, actually comes down to a few suppliers of a few key components.”

Despite in-house chip design capabilities, Google remains dependent on specialized memory manufacturers for critical hardware components.

Demand Outpacing Supply

Hassabis also revealed that Google has at times been unable to meet demand for its Gemini models due to these constraints. This underscores the scale of global AI adoption and the strain it places on hardware infrastructure.

In response to the memory crunch, companies including Google and Microsoft have reportedly dispatched executives to South Korea to secure additional supply agreements.

The Key Players in Memory Chip Production

The global memory chip market is dominated by three major manufacturers:

  • Samsung

  • Micron

  • SK Hynix

These firms produce high-bandwidth memory (HBM) and DRAM chips essential for AI workloads.

Micron has announced plans to phase out certain chip production lines for personal electronics to prioritize AI-focused chips. This shift highlights how AI demand is reshaping the semiconductor industry.

Massive Capital Investment in AI Infrastructure

Google’s $175–$185 Billion Spending Plan

Industry forecasts suggest that chip supply constraints may persist for the foreseeable future. To prepare for sustained AI growth, Google recently disclosed plans for substantial capital expenditures on AI infrastructure.

The company projects spending between $175 billion and $185 billion for 2026 to expand its data center capacity, enhance computing infrastructure, and secure hardware resources.

This scale of investment underscores how critical infrastructure has become to maintaining leadership in AI development.

Why This ‘Choke Point’ Matters

Innovation Depends on Experimentation

AI breakthroughs often depend on large-scale experimentation. Researchers require extensive computational resources to test new architectures, refine models, and push performance boundaries.

If access to memory chips becomes constrained, innovation cycles could slow significantly. The potential bottleneck highlights a key paradox in AI’s growth trajectory: software advancements are accelerating, but hardware supply may struggle to keep pace.

Conclusion

While AI capabilities in 2026 have reached new heights—with models like Gemini 3.1 Pro surpassing competitors in certain benchmarks—the industry now faces a pressing infrastructure challenge.

Demis Hassabis’s warning about a potential “choke point” underscores the growing dependence of AI progress on memory chip supply. Despite proprietary chip development and massive capital investment, even tech giants like Google remain exposed to supply chain constraints.

As competition intensifies and demand for AI systems expands across industries, resolving hardware bottlenecks may prove just as critical as advancing algorithms. The next phase of AI innovation will depend not only on smarter models but also on the global semiconductor ecosystem’s ability to scale.