What the AI demand data says that the market doesn’t

Editor’s note: This article was previously published in March 2026 under the title “$1 Trillion in Demand for AI, and the Market Is Looking the Other Way.” It has since been updated to include the most relevant information available.
Something doesn’t add up.
AI stocks have been volatile – whipsawed by macroeconomic headlines, geopolitical tensions, and shifting risk sentiment.
But underneath that, something completely different is going on.
The companies fueling the AI boom have posted some of the strongest numbers — and issued some of the boldest forward guidance — we’ve seen so far this cycle.
Historically, these types of gaps between price action and underlying fundamentals do not last long.
Because they can’t both be right.
What will remain when the smoke clears?
A group of AI names whose tailwinds are still intact, still accelerating – and now trading at a discount after a fear-driven correction.
AI infrastructure data tells a different story
So let’s talk about those fundamentals. I have five key transcripts from the last few weeks in front of me – Broadcom (AVGO), Marvell (MRVL), Oracle (ORCL), Micron (MU), and Nvidia (NVDA) CEO Jensen Huang’s keynote at GTC – and they all point to the same conclusion: the AI infrastructure supercycle is only intensifying.
We’ll start with the companies’ guidance revisions.
Expectations are moving upward quickly
Back in September 2025, Marvell told investors that fiscal 2027 revenue would be about $9.5 billion. By December, that had been revised up to $10 billion. Last week, it hit $11 billion – with fiscal 2028 now targeted at $15 billion. That is a 30%-plus upward revision to forward revenue projections, all in six months. Marvell’s expected growth rate for fiscal 2027 is roughly double what it told the Street at its Investor Day in September.
A revision of this magnitude in six months would be a headline in any other environment.
Here, it’s part of a broader pattern across the stack.
Across the AI supply chain, companies are not only reporting strong demand – they are revising forecasts higher as that demand materializes faster than planned.
The scope is expanding across the AI infrastructure stack
Broadcom’s latest results reflect the same shift, but on a different scale. The company posted $8.4 billion in AI semiconductor revenue in a single quarter, up 106% year-over-year, and is on track to reach $10.7 billion next quarter — implying roughly 140% year-over-year growth.
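As a quick sanity check, the year-ago bases implied by those growth rates are consistent – note that the two base figures below are back-calculated, not numbers Broadcom reported:

```latex
% Implied year-ago quarters, back-calculated from the reported growth rates
\frac{\$8.4\text{B}}{1 + 1.06} \approx \$4.1\text{B}
\qquad
\frac{\$10.7\text{B}}{1 + 1.40} \approx \$4.5\text{B}
```

In other words, $10.7 billion next quarter is measured against a year-ago base of roughly $4.5 billion, which is what produces the ~140% figure.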
Then CEO Hock Tan added a long-term data point that’s hard to ignore: Broadcom now has visibility into more than $100 billion in AI chip revenue by 2027. Not total revenue. Just chips.
If Broadcom highlights the scale of what is being built, Oracle offers a look at the extent to which customers are already committed.
Oracle’s remaining performance obligations (RPO) – essentially a backlog of contracted revenue that has yet to be delivered – now stand at $553 billion. AI infrastructure revenue grew 243% year-over-year, while multicloud database revenue grew 531%.
The limitations on supply are already showing
In some parts of the group, demand is already facing supply constraints.
Micron reported the largest sequential revenue increase in its history and forecast that next quarter’s revenue will exceed its total annual revenue for every year through fiscal 2024 — with gross margins jumping from 75% to 81% in a single quarter.
These margins reflect how tight the supply is.
Step back, and all of these data points start to line up with what Nvidia sees at the system level.
At GTC in San Jose in March, Jensen Huang laid it out: a year ago, he saw high-confidence demand of $500 billion through 2026. Today, he sees at least $1 trillion through 2027. Then, in case anyone was getting comfortable, he added: “We’ll be short.”
Why is the demand for AI multiplying at an exponential rate?
Individually, these numbers are impressive. Together, they describe a demand curve that is bending upward.
Jensen Huang explained what is driving this shift at GTC.
Over the last two years, demand for computing has increased by roughly one million times. That is the product of two separate multipliers:
- First, the computation required per session has increased by roughly 10,000-fold as AI has evolved from simple chatbots to reasoning models (o1, o3) and, increasingly, to agentic systems.
- Second, usage itself has increased by almost 100-fold.
Multiply these drivers, and you get a million-fold increase in demand.
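Using Huang’s round numbers, the compounding is straightforward:

```latex
% Two independent multipliers compound multiplicatively
\underbrace{\sim 10^{4}}_{\text{compute per session}}
\times
\underbrace{\sim 10^{2}}_{\text{usage}}
\;=\;
\sim 10^{6}
\quad \text{(a million-fold increase in total compute demand)}
```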
The shift from training to inference is driving demand for AI infrastructure
AI no longer just responds. It acts. The crucial development Huang highlighted at GTC is the inference inflection. During the first two years of the generative AI era, most computing demand came from training. Now, with reasoning models that think before responding — and agentic systems like Claude Code that can autonomously read files, write code, run tests, and iterate — inference has become the dominant and fastest-growing workload.
Every action requires tokens. Every token requires inference, and every inference requires compute, memory, bandwidth, and power. The demand driver has effectively shifted from a one-time training cost to a permanent inference tax on every action the AI takes.
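A minimal back-of-envelope sketch makes the “inference tax” concrete. It uses the standard ~2 FLOPs-per-parameter-per-token approximation for a dense transformer forward pass; the model size and token counts are illustrative assumptions, not figures from this article:

```python
# Back-of-envelope: how agentic workloads multiply inference compute per request.
# The 2 * params FLOPs-per-token rule of thumb for a dense transformer forward
# pass is a common approximation; model size and token counts are assumptions.

PARAMS = 70e9                  # assumed dense model size: 70B parameters
FLOPS_PER_TOKEN = 2 * PARAMS   # ~2 FLOPs per parameter per generated token

def inference_flops(tokens: float) -> float:
    """Approximate forward-pass FLOPs to generate `tokens` tokens."""
    return tokens * FLOPS_PER_TOKEN

# One short chatbot reply vs. an agentic coding session that reads files,
# writes code, runs tests, and iterates across many model calls.
chat_tokens = 500              # a single short answer
agent_tokens = 50 * 4_000      # assume ~50 agent steps at ~4k tokens each

print(f"chat reply:    {inference_flops(chat_tokens):.2e} FLOPs")
print(f"agent session: {inference_flops(agent_tokens):.2e} FLOPs")
print(f"multiplier:    {agent_tokens / chat_tokens:.0f}x per request")
```

Multiply a per-request factor like that 400x by usage that is itself growing, and the demand curve described above follows.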
This is a structural change, and it explains why every company in this group is not just growing, but growing faster than it was six months ago.
AI bottlenecks are changing, and so are the opportunities
When demand compounds like this, something has to give.
In AI infrastructure, that “something” shows up as bottlenecks – and they don’t stay in one place for long.
GPUs and other accelerators were the first constraint, and that part of the market is now in a phase of sustained hypergrowth.
From compute to connectivity
From there, the pressure moved to interconnects – the systems that link all that compute together.
Marvell’s results illustrate this shift. Its interconnect business, previously expected to grow in line with overall capital spending, is now growing at more than 50% – much closer to the pace of the accelerators themselves.
Now the bottleneck has moved again.
The bottleneck in AI infrastructure has shifted to memory
Memory is the current constraint, and Micron’s numbers show just how tight things are.
The company can meet only about 50% to 66% of customer demand – meaning orders are running at roughly 1.5 to 2 times available supply – as AI workloads and traditional server demand compete for limited DRAM and NAND capacity.
This imbalance will not be resolved any time soon.
High-bandwidth memory (HBM4) is just starting to ship, the next generation (HBM4E) won’t launch until 2027, and building new manufacturing capacity takes years.
Meanwhile, pricing power is adjusting.
Micron’s gross margins jumped from 75% to 81% in one quarter – an unusually sharp move that reflects how constrained supply is relative to demand. Its chief financial officer, Mark Murphy, was clear: This is not a cycle. Memory has been recast as a defining strategic asset in the age of artificial intelligence.
Capacity is being locked in early
With supply tight, customers can’t afford to wait.
They are committing early – and at scale – to secure the capacity they will need.
We can see this shift clearly in Oracle’s numbers. That $553 billion RPO figure may be the most underappreciated number in technology right now.
Three years ago, Oracle was one of the legacy database vendors struggling to stay relevant. Today it is the infrastructure of choice for large-scale AI training and inference workloads. Nvidia confirmed this at GTC, citing Oracle as its first AI customer and naming Cohere, Core, Fireworks, and OpenAI as tenants. Oracle’s “bring your own device” model — $29 billion in new contracts since its last earnings call — allows it to grow without a commensurate drag on free cash flow.
Demand is accelerating. Bottlenecks are shifting. Capacity is being locked up.
Now, what is being built is itself starting to change.
The rise of custom silicon in AI infrastructure
Both Broadcom and Marvell are seeing the same transformation from different angles: hyperscalers are increasingly building their own custom AI chips.
Broadcom is directly exposed to this trend.
The company now serves six XPU customers: Alphabet (Google), Anthropic, Meta (META), ByteDance, Fujitsu, and OpenAI. Importantly, these are multi-year partnerships tied to each customer’s long-term AI roadmap.
OpenAI alone has signed a 10 GW agreement through 2029 and plans to deploy more than 1 GW of first-generation XPUs in 2027.
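To put a gigawatt in context, here is an illustrative conversion – the ~1.5 kW of facility power per accelerator (chip plus cooling and networking overhead) is an assumed figure, not one from the article:

```latex
% Illustrative scale only; per-accelerator power draw is an assumption
1\,\text{GW} \;\approx\; \frac{10^{9}\,\text{W}}{1.5 \times 10^{3}\,\text{W per accelerator}} \;\approx\; 6.7 \times 10^{5}\ \text{accelerators}
```

On those assumptions, a 10 GW agreement implies capacity on the order of several million accelerators over the life of the contract.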
The reason for this shift is clear and straightforward.
As AI models become more specialized—whether for reasoning, heuristics, or sparse architectures—general-purpose GPUs cannot always deliver the same efficiency as chips designed for specific workloads.
This is where Broadcom has an advantage. Its decades of experience in custom silicon design, combined with advanced packaging and manufacturing scale, make it one of the few companies able to deliver these chips in large quantities.
Marvell sits in a different position but benefits from the same trend.
Every XPU deployed still needs networking, memory expansion, and high-speed connectivity. Marvell’s portfolio – network interface cards (NICs), CXL-based memory expansion, and switching – supplies that layer of the architecture.
As more custom chips are deployed, the connectivity market around them grows as well.
Marvell expects this part of its business to reach nearly $1 billion by fiscal 2027, with a path to more than $2 billion by 2029 in networking and memory-related products alone.
Marvell doesn’t need to design the winning chip.
It provides the infrastructure that connects and supports all of them.
Short-term noise versus long-term AI demand
None of the demand trends we just walked through were driven by geopolitics.
They continued to build in the background.
What the US-Iran conflict has done is create a layer of macro uncertainty – pushing up energy prices, tightening financial conditions, and spurring a broad risk-off movement in stocks.
The key question is how durable this escalation proves to be.
For now, the rhetoric remains heated and negotiations have been uneven. But the underlying incentives on both sides point in a different direction.
Sustained escalation carries significant economic costs – through energy markets, trade flows, and domestic financial conditions – that neither side can absorb for long.
This does not guarantee a clean or immediate resolution. But it suggests the current geopolitical risk premium is likely to stabilize or gradually fade as those economic pressures mount.
When that happens, the market’s focus will shift back to underlying fundamentals.
In this case, those fundamentals continued to strengthen while attention was focused elsewhere.
Stocks that get hit hard by risk-off moves, in a sector with sound fundamentals, are usually the same stocks that recover fastest and furthest when the risk-off trigger fades.
What happens when the price hits the data?
Jensen Huang now forecasts at least $1 trillion in demand for AI infrastructure through 2027 — and he expects supply to fall short.
Broadcom is scaling custom silicon programs tied to multi-gigawatt deployments.
Oracle has already secured hundreds of billions of dollars in future demand.
Micron operates in one of the tightest supply environments in its history.
The data is already on the table.
The AI infrastructure build-out continues to accelerate. As more computing comes online, the companies that build on top of it – turning it into products, platforms, and recurring revenue – will begin to capture a greater share of the upside.
That layer is starting to emerge.




