What the AI demand data says that the market doesn’t


Editor’s note: “What the AI demand data says that the market doesn’t” was previously published in March 2026 under the title “$1 Trillion in Demand for AI, and the Market is Looking the Other Way.” It has since been updated to include the most relevant information available.

Something doesn’t add up.

AI shares have been volatile – influenced by macroeconomic headlines, geopolitical tensions, and shifting risk sentiment.

But underneath that, something completely different is going on.

The companies fueling the AI boom have posted some of the strongest numbers — and issued some of the boldest forward guidance — we’ve seen so far this cycle.

Historically, these types of gaps between price action and underlying fundamentals do not last long.

Because they can’t both be right.

What will remain when the smoke clears?

A group of AI names whose tailwinds are still intact, still accelerating – and that are now trading at a discount after a fear-driven correction.

AI infrastructure data tells a different story

So let’s talk about those fundamentals. I have five key transcripts from the last few weeks in front of me – Broadcom (AVGO), Marvell (MRVL), Oracle (ORCL), Micron (MU), and Nvidia (NVDA) CEO Jensen Huang’s keynote at GTC – and they all point to the same conclusion: the AI infrastructure supercycle is only intensifying.

We’ll start with the companies’ forward guidance revisions.

Expectations are moving upward quickly

Back in September 2025, Marvell told investors that fiscal 2027 revenue would be about $9.5 billion. By December, that figure had been revised upward to $10 billion. Last week, it hit $11 billion – with fiscal 2028 now targeted at $15 billion. That is a 30%-plus upward revision to forward revenue projections, all within six months. Marvell’s expected growth rate for 2027 is roughly double what it told the Street at its Investor Day in September.

A revision of this magnitude in six months would be a headline in any other environment.

Here, it’s part of a broader pattern across the stack.

Across the AI supply chain, companies are not only reporting strong demand – they are revising forecasts higher as that demand materializes faster than planned.

The scope is expanding across the AI infrastructure stack

Broadcom’s latest results reflect the same shift, but on a different scale. The company posted $8.4 billion in AI semiconductor revenue in one quarter, up 106% year-over-year, and is on track to reach $10.7 billion next quarter — which would represent roughly 140% growth.

Then CEO Hock Tan added a long-term data point that’s hard to ignore: Broadcom now has visibility into more than $100 billion in AI chip revenue by 2027. Not total revenue. Just chips.

If Broadcom highlights the scale of what is being built, Oracle offers a look at the extent to which customers are already committed.

Oracle’s remaining performance obligation (RPO) – essentially a backlog of contracted orders that still have to be delivered – now stands at $553 billion. AI infrastructure revenue grew 243% year-over-year, while MultiCloud database revenue grew 531%.

Supply constraints are already showing

In some parts of the group, demand is already facing supply constraints.

Micron announced the largest sequential revenue increase in the company’s history and forecast that next quarter’s revenue will exceed its full-year revenue for any year through fiscal 2024 — with gross margins rising from 75% to 81% in a single quarter.

These margins reflect how tight the supply is.

Step back, and all of these data points start to line up with what Nvidia sees at the system level.

At GTC in San Jose this March, Jensen Huang laid it out: a year ago, he saw high-confidence demand of $500 billion through 2026. Today, he sees at least $1 trillion through 2027. Then, in case anyone was getting comfortable, he added: “We’ll be short.”

Why is demand for AI multiplying at an exponential rate?

Individually, these numbers are impressive. Together, they describe a demand curve that is bending upward.

Jensen Huang explained what is driving this shift at GTC.

Over the last two years, demand for computing has increased by roughly 1 million times. That is the product of two separate multipliers:

  • First, the computation required per reasoning session has increased by approximately 10,000-fold as AI has evolved from simpler chatbots to reasoning models (o1, o3) and then increasingly to agent systems.
  • Second, usage itself has increased by almost 100-fold.

Multiply these drivers, and you get a million-fold increase in demand.
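For readers who want to check the arithmetic, here is a minimal back-of-the-envelope sketch in Python. The 10,000x and 100x figures are simply the ones Huang cited above; they are illustrative orders of magnitude, not precise measurements.

    # Rough arithmetic behind the "million-fold" compute-demand claim (figures as cited at GTC).
    per_session_multiplier = 10_000  # compute per reasoning session vs. early chatbots
    usage_multiplier = 100           # growth in overall usage

    total_multiplier = per_session_multiplier * usage_multiplier
    print(f"Implied increase in compute demand: ~{total_multiplier:,}x")  # ~1,000,000x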

The shift from training to inference is driving demand for AI infrastructure

AI no longer just responds. It acts. The crucial development Huang highlighted at GTC is the inference inflection. During the first two years of the generative AI era, most of the demand for computing came from training. Now, with reasoning models that think before responding — and agent systems such as Claude Code that can autonomously read files, write code, run tests, and iterate — inference has become the dominant and fastest-growing workload.

Every action requires tokens. Every token requires inference, and every inference call requires compute, memory, bandwidth, and power. The demand driver has essentially shifted from a one-time training cost to a permanent inference tax on every action the AI performs.

This is a structural change. It explains why every company in this group is not only growing, but growing faster than it was six months ago.

AI bottlenecks are changing, and so are the opportunities

When demand compounds like this, something has to give.

In AI infrastructure, that “something” shows up as bottlenecks – and they don’t stay in one place for long.

GPUs and other accelerators were the first bottleneck, and that part of the market is now in a phase of sustained hypergrowth.

From compute to interconnect

From there, the pressure moved to interconnects – the systems that link all that compute together.

Marvell’s results illustrate this shift. Its interconnect business, which was previously expected to grow in line with overall capital spending, is now growing at more than 50% – much closer to the pace of the accelerators themselves.

Now the bottleneck has moved again.

The bottleneck in AI infrastructure has shifted to memory

Memory is the current constraint, and Micron’s numbers show how tight things are.

The company is only able to meet approximately 50% to 66% of customer demand, as AI workloads and traditional server demand compete for the limited supply of DRAM and NAND memory.

This imbalance will not be resolved any time soon.

High-bandwidth memory (HBM4) is just starting to ship, the next generation (HBM4E) won’t launch until 2027, and building new manufacturing capacity takes years.

Meanwhile, pricing power is adjusting.

Micron’s gross margins jumped from 75% to 81% in one quarter – an unusually sharp move that reflects how constrained supply is relative to demand. Its chief financial officer, Mark Murphy, was clear: This is not a cycle. Memory has been recast as a defining strategic asset in the age of artificial intelligence.

Capacity is being secured early

With supply tight, customers can’t wait.

They commit early – and broadly – to secure what they will need.

We can see this shift clearly in Oracle’s numbers. The $553 billion RPO figure may be the most underappreciated number in technology right now.

Three years ago, Oracle was one of the legacy database vendors struggling to stay relevant. Today it is the infrastructure of choice for large-scale AI training and inference workloads. Nvidia confirmed this at GTC, citing Oracle as its first AI customer and naming Cohere, Core, Fireworks, and OpenAI as tenants. Oracle’s “bring your own device” model — $29 billion in new contracts since its last earnings call — allows it to grow without a corresponding drag on free cash flow.

Demand is accelerating. Bottlenecks are changing. Capacity is secured.

Now, the buildout itself is starting to change.


