Thesis
NVIDIA maintains a dominant position in the AI training and inference accelerator market, with an estimated 80%+ share of data center GPU revenue. The company's CUDA software ecosystem creates significant switching costs for customers evaluating alternative accelerators.
Key Drivers
- Blackwell ramp: B200/GB200 shipments accelerating through 2026
- HBM supply: SK Hynix and Samsung capacity remains the binding constraint
- Custom silicon risk: Google TPU, Amazon Trainium, and Microsoft Maia represent long-term competitive pressure
- Networking: Spectrum-X and NVLink driving incremental ASP
Valuation Framework
Using a discounted cash flow (DCF) model with 4% terminal growth, a 10% WACC, and revenue CAGR scenarios of 25-40% through FY2028, we derive a fair-value range of $780-$1,050 per share.
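The DCF mechanics above can be sketched as a simple two-stage model: explicit-period cash flows grown at the scenario CAGR, plus a Gordon-growth terminal value, all discounted at the WACC. The cash-flow and share-count inputs below are hypothetical placeholders for illustration only, not the note's actual model assumptions; only the 10% WACC, 4% terminal growth, and 25-40% growth scenarios come from the framework above.

```python
def dcf_fair_value(fcf0, growth, years, wacc, terminal_growth, shares):
    """Two-stage DCF: explicit-period FCF growth, then a Gordon terminal value.

    fcf0: starting annual free cash flow; growth: explicit-period CAGR;
    wacc: discount rate; terminal_growth: perpetuity growth rate.
    Returns present value per share.
    """
    pv = 0.0
    fcf = fcf0
    for t in range(1, years + 1):
        fcf *= 1 + growth
        pv += fcf / (1 + wacc) ** t
    # Gordon growth terminal value on the final-year FCF, discounted to today
    terminal = fcf * (1 + terminal_growth) / (wacc - terminal_growth)
    pv += terminal / (1 + wacc) ** years
    return pv / shares

# Hypothetical inputs: $60B starting FCF, 2.5B diluted shares, 4 explicit years,
# bracketing the 25% and 40% growth scenarios at a 10% WACC and 4% terminal growth.
low = dcf_fair_value(60e9, 0.25, 4, 0.10, 0.04, 2.5e9)
high = dcf_fair_value(60e9, 0.40, 4, 0.10, 0.04, 2.5e9)
```

The scenario spread in growth assumptions, not the terminal value, drives most of the gap between the low and high cases, which is why the fair-value range is wide.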
Risk Factors
- Tightening U.S. export controls (China ~15% of revenue)
- Customer concentration (top 4 hyperscalers = ~50% of data center revenue)
- Gross margin pressure as custom silicon gains share