
NVDA: AI Infrastructure Dominance and the Compute Bottleneck

Feb 15, 2026 | Quant Research | Scenario Analysis
Tags: NVDA, AI, Semiconductors, Options

NVIDIA's data center segment continues to benefit from insatiable AI compute demand. We model three scenarios around supply chain constraints, competitive dynamics from AMD/custom silicon, and the HBM supply bottleneck.

Scenario Analysis

Trigger: Data Center revenue growth QoQ
  • Buy (35% probability): $1,050.00 target. Blackwell demand exceeds supply through 2026; gross margin stays above 72%.
  • Hold (45% probability, current rating): $875.00 target. Steady state: strong demand, but competitive pressure begins to weigh.
  • Sell (20% probability): $680.00 target. Export ban expansion, accelerating custom silicon adoption, and HBM oversupply.

Next earnings: 2026-02-26
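The three targets and probabilities above imply a probability-weighted fair value. A minimal sketch, using only the figures stated in the scenario table:

```python
# Scenario targets and probabilities from the table above.
scenarios = {
    "bull": (1050.00, 0.35),
    "base": (875.00, 0.45),
    "bear": (680.00, 0.20),
}

# Probability-weighted expected target price.
expected_target = sum(price * prob for price, prob in scenarios.values())
print(f"Probability-weighted target: ${expected_target:,.2f}")
# 1050*0.35 + 875*0.45 + 680*0.20 = 897.25
```

Note that the weighted target ($897.25) sits slightly above the base-case $875, reflecting the asymmetry between the bull and bear tails.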

Thesis

NVIDIA maintains a dominant position in the AI training and inference accelerator market, with an estimated 80%+ share of data center GPU revenue. The company's CUDA ecosystem creates significant switching costs.

Key Drivers

  1. Blackwell ramp: B200/GB200 shipments accelerating through 2026
  2. HBM supply: SK Hynix and Samsung capacity remains the binding constraint
  3. Custom silicon risk: Google TPU, Amazon Trainium, and Microsoft Maia represent long-term competitive pressure
  4. Networking: Spectrum-X and NVLink driving incremental ASP

Valuation Framework

Using a DCF with terminal growth of 4%, WACC of 10%, and revenue CAGR scenarios of 25-40% through FY2028, we derive a fair value range of $780-$1,050.

Risk Factors

  • Export controls tightening (China revenue ~15%)
  • Customer concentration (top 4 hyperscalers = ~50% of DC revenue)
  • Gross margin pressure as custom silicon gains share