
NVDA

NVIDIA Corporation
Information Technology | Semiconductors | High Conviction

Market Cap

$4.3T


Key Conclusion

NVIDIA has evolved from a dominant AI GPU supplier into the sole vertically integrated AI infrastructure platform. GTC 2026 revealed Vera Rubin — a 7-chip system (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) in full production, shipping H2 2026. The $1T order book through 2027 (doubled from $500B at GTC 2025) validates multi-year demand. Samsung 4nm fabrication of Groq 3 LPU partially diversifies TSMC sole-source risk. The moat has shifted from 'best GPU + CUDA' to 'only company shipping a complete AI factory in a box' — a systems-level advantage AMD cannot replicate.

Investment Thesis

Core Thesis

BASE scenario

NVIDIA has evolved from a dominant AI GPU supplier into the sole vertically integrated AI infrastructure platform. GTC 2026 revealed Vera Rubin — a 7-chip system (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) in full production, shipping H2 2026. The $1T order book through 2027 (doubled from $500B at GTC 2025) validates multi-year demand. Samsung 4nm fabrication of Groq 3 LPU partially diversifies TSMC sole-source risk. The moat has shifted from 'best GPU + CUDA' to 'only company shipping a complete AI factory in a box' — a systems-level advantage AMD cannot replicate.

Moat Analysis

network-effect

strong

CUDA + NemoClaw/OpenClaw agentic AI ecosystem. 4M+ developers, 800+ AI libraries, now extended by the Nemotron Coalition (Mistral, Perplexity, Cursor, LangChain, Black Forest Labs — 'billions' committed at GTC 2026). Trajectory: expanding at the agent layer — GTC 2026 positioned NVIDIA as the agentic AI platform, not just compute. Enterprise CUDA lock-in is stable; hyperscalers are still diversifying via Triton, ROCm 7.2, and custom ASICs. AMD-OpenAI (6GW) and AMD-Meta (6GW) validate that a switching path exists at the top tier, but the NemoClaw/OpenClaw agent framework adds a new software lock-in layer above CUDA.

GTC 2026 investor-pres

switching-cost

strong

Enterprise AI deployments are built on CUDA; migration requires rewriting inference pipelines and requalifying performance. Trajectory: stable — switching costs remain massive for enterprises and tier-2 clouds, but the top 5 customers (48% of DC revenue) are actively reducing CUDA dependency via custom ASICs (Google TPU at 78% of internal AI servers) and AMD partnerships (OpenAI). The market is bifurcating: unbreakable switching costs for 80% of customers, weakening for 48% of revenue.

2026 report

intangible-assets

strong

NVLink 6 interconnect IP at 3.6 TB/s per GPU / 260 TB/s per rack — a generational leap that competitors cannot match. Vera Rubin NVL72 rack is shipping H2 2026; Feynman generation previewed with Kyber co-packaged optics interconnect. Trajectory: compounding — networking revenue hit $8.2B/quarter (+162% YoY) and NVLink 6 extends the lead further. AMD Helios racks (MI450) are 2+ generations behind on interconnect bandwidth. Additionally, Spectrum-X Photonics and Quantum-X Photonics (co-packaged optics: 3.5x lower power, 63x better signal integrity) create a silicon photonics moat for the scale-out layer.

GTC 2026 investor-pres

cost-advantage

strong

Systems-level scale economics: 7-chip platform (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) amortizes $18.5B R&D across $215.9B revenue. No competitor can match the breadth: AMD has GPU + CPU but no LPU, no DPU, no switch ASIC, no photonics. The Groq acquisition ($20B) added dedicated inference silicon that creates a new product category. $1T order book through 2027 provides unmatched volume to amortize next-gen R&D (Feynman on TSMC A16). Samsung 4nm for Groq 3 diversifies manufacturing, reducing TSMC concentration risk.

GTC 2026 investor-pres

Competitive Position

Market share

85%

TAM

$400B

SAM %

65%

Share trend

stable

Advantages

  • Only vertically integrated AI infrastructure platform (7-chip system: GPU + CPU + LPU + DPU + NIC + switches)
  • NVLink 6 at 260 TB/s rack bandwidth — 2+ generations ahead of AMD Helios
  • CUDA + NemoClaw/OpenClaw agentic AI ecosystem with Nemotron Coalition
  • Groq 3 LPU creates dedicated inference category no competitor has answered
  • Feynman roadmap on TSMC A16 (1.6nm) with co-packaged optics gives 2-generation visibility
  • $1T order book through 2027 — unprecedented demand visibility
  • Samsung foundry for Groq 3 diversifies manufacturing risk

Disadvantages

  • TSMC N3P + CoWoS-L dependency for Rubin GPU (still sole-source for primary compute die)
  • 12 HBM4 stacks per GPU = extreme packaging complexity and memory supply dependency
  • Hyperscaler custom ASIC threat remains (Google TPU, Amazon Trainium, MSFT Maia)
  • AMD 12GW committed (OpenAI 6GW + Meta 6GW) — two top-5 customers diversifying
  • Customer concentration (top 4 = ~48% of DC revenue)

GTC 2026 investor-pres

Scenario Distribution

Scenario | Probability | Target Price | Multiple — triggers and assumptions below each row

base | 50% | $250 | P/E CY2027E 25x

Triggers: FY2027E revenue $350–400B (60–85% YoY growth) — $1T order book converts at 35%+ rate | Hyperscaler AI capex sustains at $200B+ collectively through 2026

Assumptions: CY2027E non-GAAP EPS ~$10.0 — sellside most conservative; supported by Q1 FY2027 $78B guide | FY2027E revenue $350–400B (base case raised from $350–380B given $1T order backlog visibility)

bull | 35% | $336 | P/E CY2027E 28x

Triggers: FY2027E revenue exceeds $420B — $1T order book supports this if conversion rate >40% | Vera Rubin NVL72 creates step-change demand; H2 2026 ramp exceeds Blackwell curve

Assumptions: CY2027E non-GAAP EPS ~$12.0 (implied: ~$520B CY2027 revenue × 56% NM / 24B shares) | FY2027E revenue $420B+ — $1T order book through 2027 provides line-of-sight

bear | 15% | $150 | P/E CY2027E 20x

Triggers: Hyperscaler capex cuts >20% despite $1T order book (macro shock / AI ROI disillusionment) | AMD MI450 + custom ASIC combined capture >25% of AI training TAM — 12GW AMD commitment converts

Assumptions: CY2027E non-GAAP EPS ~$7.5 (implied: ~$270–300B CY2027 revenue × 56% NM / 24B shares) | FY2027E revenue $260–300B — order cancellations / deferrals from $1T book
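The scenario arithmetic above (12-month target = CY2027E non-GAAP EPS × forward P/E, then probability-weighting each scenario's return against the ~$185 spot) can be sketched as follows; the exact weighted figure is ~43.3%, which the memo rounds to ~44% from rounded scenario returns:

```python
# Scenario targets and probability-weighted expected return, per the table above.
SPOT = 185.0  # reference price used in the memo

scenarios = {
    # name: (probability, CY2027E non-GAAP EPS, forward P/E)
    "bull": (0.35, 12.0, 28),
    "base": (0.50, 10.0, 25),
    "bear": (0.15, 7.5, 20),
}

# Probabilities must sum to 1.0
assert abs(sum(p for p, _, _ in scenarios.values()) - 1.0) < 1e-9

expected_return = 0.0
for name, (prob, eps, pe) in scenarios.items():
    target = eps * pe                # 12-month price target
    upside = target / SPOT - 1.0    # return vs. spot
    expected_return += prob * upside
    print(f"{name}: target ${target:.0f}, upside {upside:+.0%}")

print(f"probability-weighted expected return: {expected_return:+.1%}")
```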


Catalysts & Risks

Catalyst Timeline

GTC 2026: Vera Rubin platform + $1T order book + Uber robotaxi

high | positive | 2026-03-17

CONFIRMED. Exceeded expectations: 7-chip platform (not just GPU), $1T order book (doubled from $500B), Groq 3 LPU on Samsung 4nm, Uber L4 partnership (28 cities, 100K vehicles by 2028), Nemotron Coalition, Feynman roadmap preview (TSMC A16). Bull trigger validated.

Status: realized

Q1 FY2027 earnings report

high | uncertain | 2026-05-28

First full quarter of Blackwell revenue. Key validation: does $78B guidance convert? Does $1T order book translate to sequential acceleration? Gross margin trajectory with Vera Rubin pre-production costs.

Status: pending

Vera Rubin NVL72 cloud availability (MSFT, AWS, GCP, OCI)

high | positive | 2026-09

H2 2026 shipping confirmed at GTC. Microsoft Azure already running first NVL72. AWS deploying 1M+ NVIDIA GPUs. Revenue conversion from $1T order book begins in earnest.

Status: pending

CoWoS-L 4x reticle capacity expansion at TSMC

high | positive | Q2 2026

Vera Rubin requires CoWoS-L 4x reticle with 12 HBM4 stacks — more complex than Blackwell packaging. TSMC targeting 130-150K wafers/month by end 2026 (up from 75K). Critical for Vera Rubin volume ramp.

Status: pending

China export control review

medium | negative | 2026-06

US Commerce Dept reviewing AI chip export rules. Could restrict H20 sales to China (~$10B annual revenue).

Status: pending

AMD MI450 initial deployment at Meta/OpenAI

high | negative | 2026-07

12GW total AMD commitment (OpenAI 6GW + Meta 6GW). H2 2026 initial shipments. If training parity demonstrated, validates CUDA switching path. Key bear trigger to monitor.

Status: pending

Risk Matrix (Severity x Probability)

Severity \ Probability | high | medium | low
high | - | concentration, competitive, competitive | technology
medium | regulatory | competitive | execution, macro
low | - | - | -

concentration (high/medium)

Top 4 hyperscalers (MSFT, AMZN, GOOG, META) represent ~48% of Data Center revenue. Any single customer capex cut has outsized impact.

Mitigant: Growing diversification: $1T order book includes sovereign AI, enterprise, GPU clouds (CoreWeave, Vultr), and automotive (Uber L4). Nemotron Coalition adds enterprise software revenue layer.

competitive (high/medium)

AMD 12GW total commitment (OpenAI 6GW + Meta 6GW, H2 2026 initial shipments) validates CUDA-to-ROCm switching path at the two most demanding AI workloads. If either moves >25% of training to MI450, CUDA switching cost is proven surmountable.

Mitigant: Vera Rubin's 7-chip platform creates a systems-level moat AMD cannot replicate (GPU only). Groq 3 LPU addresses inference specifically. $1T order book suggests customers are committing to the NVDA platform, not just GPUs. The AMD deals are partly strategic supply diversification.

technology (high/low)

Vera Rubin requires CoWoS-L 4x reticle interposer with 12 HBM4 stacks per GPU — unprecedented packaging complexity. Any yield issues at TSMC or HBM4 supply shortfall delays the ramp.

Mitigant: TSMC expanding CoWoS to 130-150K wafers/month. Samsung 4nm for Groq 3 LPU diversifies manufacturing. Blackwell continues shipping in parallel as bridge product.

competitive (high/medium)

Hyperscaler custom ASICs (Google TPU, Amazon Trainium, Microsoft Maia) could capture 20-30% of AI accelerator TAM by 2027. These customers are both NVDA's largest revenue source and its biggest competitive threat.

Mitigant: Groq 3 LPU directly addresses inference commoditization threat. NVLink 6 fabric is architecture-agnostic — ASIC clusters still need NVDA networking. Even as compute share erodes, platform revenue (networking + software + inference) is defensible.

regulatory (medium/high)

US export controls could further restrict China sales. H20 chip specifically designed for China market at risk.

Mitigant: China revenue already declining as % of total. Sovereign AI demand in Middle East/SE Asia/India partially offsets. GTC 2026 showed broadening geographic customer base.

execution (medium/low)

Groq 3 LPU integration risk: $20B acquisition (Dec 2025) with first product on Samsung 4nm (not TSMC). Novel architecture requires new software stack (NVIDIA Dynamo). If Groq 3 adoption disappoints, $20B goodwill impairment risk.

Mitigant: Groq 3 LPX rack already in five-rack Vera Rubin POD configuration. NVIDIA Dynamo software handles workload splitting (prefill → Rubin, decode → Groq). Samsung 4nm is proven process for inference chips.

competitive (medium/medium)

Inference commoditization: DeepSeek demonstrated 10x efficiency gains. If efficiency improvements compound annually and ASICs capture 30-40% of inference by CY2027, NVDA's inference GPU TAM shrinks even as total inference demand grows.

Mitigant: Groq 3 LPU is NVIDIA's direct answer — dedicated decode hardware with 150 TB/s SRAM bandwidth. Jevons paradox: efficiency gains expand total inference volume. Networking revenue is inference-workload-agnostic. NemoClaw/OpenClaw positions NVDA to monetize agentic AI at platform layer.

macro (medium/low)

AI capex cycle could slow if ROI metrics do not materialize. $1T order book could see cancellations/deferrals in a macro downturn.

Mitigant: $1T includes purchase order visibility and commitments (per Jensen Huang), though analysts note this also includes demand projections through 2027, not exclusively firm POs. Hyperscaler capex guided >$710B for 2026. Uber robotaxi and sovereign AI represent non-cyclical demand diversification.


Investment Memo

Analyst Conviction

Strong Buy

Upgraded from buy to strong-buy post-GTC 2026. At ~$185 (18.5x CY2027E), NVDA just revealed a 7-chip vertically integrated platform with $1T in committed orders through 2027 — yet trades at its 5-year P/E low. The GTC keynote validated the bull case thesis: Vera Rubin is in production (not a paper launch), $1T order book doubled from $500B, and Samsung fab diversification partially addresses the TSMC sole-source risk. Probability-weighted expected return increased to ~44%.

Data as of 2026-03-17

Variant Perception

Consensus View

Post-GTC, consensus is shifting bullish but still anchored to 'GPU company with ASIC risk.' Average PT ~$263-274 (42-48% upside). Street models FY2027E revenue $350-400B with DC growth decelerating to 55-65%. Most analysts upgraded targets (Jeff Pu $295, 38 Strong Buys) but still model NVDA as a GPU supplier, not a platform company.

Our View

GTC 2026 proved NVIDIA is no longer a GPU company — it is the only company shipping a complete AI factory. The 7-chip Vera Rubin platform (GPU + CPU + LPU + DPU + NIC + switch + networking) creates a systems-level moat: you don't buy NVDA GPUs, you buy NVDA data centers. The Groq 3 LPU addresses the inference commoditization thesis directly — NVDA now has dedicated decode hardware that competes with ASICs on their own turf. The Uber robotaxi deal (28 cities, 100K vehicles by 2028) opens a structural non-cyclical revenue stream outside of DC. The market still prices NVDA on 'GPU share × GPU ASP' when the correct framework is 'AI infrastructure platform revenue = compute + inference (Groq 3) + networking (NVLink 6) + software (NemoClaw) + automotive (DRIVE).'

Key Divergence

The $1T order book shifts the debate from 'will demand decelerate?' to 'can NVDA supply fast enough?' Supply (TSMC CoWoS-L 4x reticle, HBM4 ramp) is now the binding constraint, not demand. This is bullish asymmetry: supply constraints cap downside while demand visibility extends upside.

Business Quality

ROIC Trajectory: Expanding ↑

Capital Allocation

R&D ($18.5B, +54% YoY) funds a one-year product cadence: Blackwell → Vera Rubin → Feynman. The Groq acquisition ($20B, Dec 2025) is proving transformative — Groq 3 LPU on Samsung 4nm ships Q3 2026, adding a dedicated inference category that addresses the ASIC threat directly. This follows the Mellanox playbook: networking was a $1B acquisition that became $8.2B/quarter. FY2026 FCF of $96.6B funds both R&D and buybacks. Capital allocation is excellent.

Earnings Quality

FY2026 FCF $96.6B vs NI $120.1B — gap is Blackwell inventory ramp, not accrual issues. Excluding $4.5B H20 write-down, normalized NI ~$124.5B. Q4 FY2026: $34.9B FCF on $43.0B NI. Earnings quality is high and improving.
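As a quick arithmetic check on the figures above (a sketch using the memo's reported numbers, all in $B):

```python
# Reconciliation of the FY2026 earnings-quality figures cited above (all in $B).
ni = 120.1            # reported FY2026 net income
h20_write_down = 4.5  # one-time H20 charge added back
fcf = 96.6            # FY2026 free cash flow

normalized_ni = ni + h20_write_down  # 124.6 -> memo rounds to ~$124.5B
fcf_to_ni = fcf / ni                 # ~80%; gap attributed to Blackwell inventory ramp

print(f"normalized NI ${normalized_ni:.1f}B, FCF/NI {fcf_to_ni:.0%}")
```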

Moat Verdict

Trajectory: Compounding ↑
Durability Horizon: 5Y

Primary Threat

AMD 12GW committed (OpenAI + Meta, H2 2026) remains the primary competitive threat. If MI450 demonstrates training parity at scale, it proves CUDA switching is surmountable. Secondary: hyperscaler custom ASICs (Google TPU, Amazon Trainium) for inference. However, GTC 2026 significantly strengthened the moat: 7-chip platform is a systems-level barrier, Groq 3 directly counters inference ASICs, and NVLink 6 at 260 TB/s is 2+ generations ahead.

Valuation Setup

Current Multiple

~18.5x CY2027E non-GAAP EPS ($10.0) at ~$185; ~27x FY2027E consensus EPS; TTM GAAP P/E ~37x

Historical Context

TTM P/E ~37x at 5-year low (3-year avg 60.7x, 10-year median 52.8x). On forward CY2027E at 18.5x, NVDA is cheaper than semi-equipment companies (AMAT 20x, LRCX 22x) despite growing 65%+ YoY and having a $1T order book. The market is pricing 'peak GPU cycle' when GTC just demonstrated 'beginning of platform cycle.'

Risk / Reward

Favorable

Bull $336 = 82% upside (35% probability). Base $250 = 35% upside (50% probability). Bear $150 = 19% downside (15% probability). Probability-weighted expected return: (0.35 × 82%) + (0.5 × 35%) + (0.15 × -19%) = 28.7% + 17.5% - 2.9% = ~44%. Risk-reward improved from ~40% pre-GTC due to bull probability increase (30→35%) and bear decrease (20→15%). The $1T order book provides downside protection that did not exist before.

Thesis Invalidation & Monitoring

Bear Triggers

Re-evaluate if any of these occur

  • AMD MI450 demonstrates training parity at OpenAI/Meta at >1GW scale: OpenAI moves >25% of training to AMD by Q4 2026
  • Q1 FY2027 revenue misses $78B guidance or DC YoY growth decelerates to <40%
  • Vera Rubin NVL72 ramp delayed beyond H2 2026 due to CoWoS-L yield issues or HBM4 supply shortage
  • Two or more hyperscalers announce AI capex cuts >15% in Q2/Q3 2026 — order cancellations from $1T book
  • Normalized gross margin falls below 72% for two consecutive quarters
  • Groq 3 LPU adoption disappoints — <$1B revenue in first 4 quarters, signaling failed integration

Bull Confirmation

Add to position if these materialize

  • GTC 2026 Vera Rubin platform reveal with $1T order book ✅ CONFIRMED (March 17, 2026)
  • Q1 FY2027 revenue beats $78B by 5%+ and Q2 guided above $85B — sustains 10%+ sequential growth
  • Vera Rubin NVL72 achieves GA at 3+ cloud providers by Q3 2026 — validates H2 ramp timeline
  • Networking revenue sustains >$8B/quarter through FY2027, confirming platform trajectory
  • Uber DRIVE Hyperion L4 deployment begins on schedule in H1 2027 — validates automotive revenue stream
  • Sovereign AI + enterprise orders total >$30B committed through FY2028

Key Metrics to Watch

Track every quarter

  • DC revenue sequential growth rate (must sustain >8% QoQ for base case; <5% = deceleration signal)
  • Vera Rubin ramp timeline (MSFT/AWS/GCP availability dates vs H2 2026 commitment)
  • Networking revenue as % of DC revenue (rising = platform moat; falling = GPU commoditization)
  • Normalized gross margin trajectory (Q4 FY2026: 75.0%; must hold >74% through FY2027)
  • Groq 3 LPU revenue contribution (new metric — first reported Q1 FY2027 or later)
  • $1T order book conversion rate — track quarterly revenue vs implied backlog drawdown
  • AMD MI450 deployment scale at OpenAI/Meta (H2 2026 is the critical window)
  • Custom ASIC share of AI accelerator TAM (Goldman est. 35% by Q4 2026)
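A minimal sketch of how these quarterly thresholds could be screened mechanically; the function name and the illustrative inputs are hypothetical, not reported data:

```python
# Hypothetical quarterly screen against the watch-list thresholds above.
def check_quarter(dc_rev_qoq, gross_margin, quarterly_rev_b, backlog_b=1000.0):
    """Return threshold flags; revenue and backlog in $B."""
    return {
        "base_case_growth": dc_rev_qoq > 0.08,     # must sustain >8% QoQ
        "deceleration_signal": dc_rev_qoq < 0.05,  # <5% QoQ = warning
        "margin_ok": gross_margin > 0.74,          # hold >74% through FY2027
        # share of the $1T order book converted to revenue this quarter
        "backlog_conversion": quarterly_rev_b / backlog_b,
    }

# Illustrative quarter: 10% QoQ DC growth, 75% margin, $80B revenue
flags = check_quarter(0.10, 0.75, 80.0)
print(flags)
```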

Executive Summary

NVIDIA Corporation — Strong Buy at ~$185 | 12-Month Base Target $250 (+35%) | Post-GTC 2026 Update

BUSINESS: NVIDIA is no longer a GPU company — it is the only vertically integrated AI infrastructure platform. GTC 2026 (March 17) revealed the Vera Rubin platform: 7 chips (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6), all in full production, shipping H2 2026. FY2026 revenue was $215.9B (+65% YoY), with Data Center contributing $193.7B (90%). A $1 trillion order book through 2027 — doubled from $500B at GTC 2025 — provides unprecedented demand visibility. The Groq acquisition ($20B, Dec 2025) added dedicated inference silicon (Groq 3 LPU, Samsung 4nm), creating a new product category. The Uber robotaxi partnership (28 cities, 100K vehicles by 2028) opens non-cyclical automotive revenue. Fabless model through TSMC (GPU/CPU on N3P) and Samsung (Groq 3 on 4nm). FCF of $96.6B (44.7% margin).

MOAT: GTC 2026 shifted the moat from 'best GPU + CUDA' to 'only company shipping a complete AI factory in a box.' Four moat layers: (1) CUDA + NemoClaw/OpenClaw ecosystem — expanding; Nemotron Coalition (Mistral, Perplexity, Cursor, LangChain — billions committed) adds an agentic AI software lock-in layer above CUDA. (2) NVLink 6 interconnect at 260 TB/s rack bandwidth — compounding; 2+ generations ahead of AMD Helios, plus Spectrum-X Photonics with co-packaged optics (3.5x lower power). (3) Groq 3 LPU — new; dedicated decode hardware with 500MB on-chip SRAM and 150 TB/s bandwidth creates an inference moat that directly addresses the ASIC commoditization threat. (4) Systems integration — new; no competitor can ship GPU + CPU + LPU + DPU + NIC + switch as a unified platform. AMD has GPU + CPU only. Overall moat trajectory: compounding, now with more layers than ever.

VARIANT VIEW: The market still prices NVDA on 'GPU share × GPU ASP' when the correct framework is 'AI infrastructure platform revenue = compute + inference (Groq 3) + networking (NVLink 6) + software (NemoClaw) + automotive (DRIVE).' GTC 2026 made this framework empirically verifiable: NVDA now ships 7 different chip types across 5 rack configurations. The Groq 3 LPU specifically addresses the inference commoditization thesis — NVDA is no longer defending GPU inference against ASICs; it has its own dedicated inference ASIC. The $1T order book shifts the debate from 'will demand decelerate?' to 'can NVDA supply fast enough?' Supply (TSMC CoWoS-L 4x reticle capacity, HBM4 ramp from 3 suppliers) is now the binding constraint, not demand. This is bullish asymmetry: supply constraints cap downside while demand visibility extends upside. Samsung 4nm for Groq 3 is meaningful foundry diversification — the first time NVDA has shipped a data center chip not fabricated by TSMC.

FINANCIALS: Earnings quality is high. FY2026 FCF $96.6B vs NI $120.1B (gap is Blackwell inventory ramp). Normalized gross margins 75% in Q4 FY2026. R&D $18.5B (8.6% of revenue, improving intensity). $1T order book provides revenue visibility that no semiconductor company has ever had at this scale.

VALUATION: At ~$185, NVDA trades at ~18.5x CY2027E non-GAAP EPS ($10.0). TTM P/E ~37x at the 5-year low (3-year avg 60.7x). Cheaper than semi-equipment companies (AMAT 20x, LRCX 22x) despite 65% revenue growth and $1T order book. Post-GTC analyst upgrades: Jeff Pu $295, 38 Strong Buys, consensus PT $263-274. The market has pre-emptively de-rated for ASIC disruption that GTC 2026 directly addresses with Groq 3. PW expected return ~44%.

RISKS: Primary: AMD 12GW committed (OpenAI 6GW + Meta 6GW, H2 2026). If MI450 demonstrates training parity at either customer, CUDA switching is proven surmountable — but Vera Rubin's 7-chip platform creates a systems-level barrier above CUDA. Secondary: Vera Rubin CoWoS-L 4x reticle yield risk — unprecedented packaging complexity with 12 HBM4 stacks per GPU. Tertiary: Groq 3 integration risk ($20B goodwill). Mitigating: Samsung 4nm diversification, $1T committed orders, Uber/sovereign AI non-cyclical demand.

DECISION: Strong Buy at ~$185. Upgraded from Buy. GTC 2026 validated the bull case: platform (not GPU), $1T orders (not guidance), production (not paper launch), and foundry diversification (Samsung). PW expected return ~44% with improved asymmetry (bear probability reduced from 20% to 15% by $1T order book downside protection). Next validation: Q1 FY2027 earnings (May 28) — if $78B converts and Vera Rubin cloud availability is confirmed for H2 2026. Position sizing: full weight for base case, with clear exit discipline if AMD MI450 demonstrates training parity at >1GW scale.
Thesis Change History

Thesis Change Log

2026-03-17 | BASE -> BASE | GTC 2026 keynote (March 17, 2026) — Vera Rubin platform reveal + $1T order book

GTC 2026 post-event update. Conviction upgraded buy → strong-buy. Vera Rubin 7-chip platform in production, $1T order book (doubled), Groq 3 LPU on Samsung 4nm, Uber robotaxi L4 partnership. Bull probability increased 30→35%, bear decreased 20→15%. Price targets unchanged — GTC validates existing scenarios rather than creating new upside beyond what was modeled. TAM updated $250B→$400B (AMD estimate). Core thesis rewritten from 'dominant AI GPU' to 'only vertically integrated AI infrastructure platform.' Moat: added systems-level integration + Groq 3 inference layers.

2026-03-03 | BASE -> BASE | Data correction: FY2026 actual earnings + scenario methodology rebuild

Scenario prices completely rebuilt with correct EPS chain. Old prices ($185/$145/$90) were anchored to ~$4.5 FY2027E EPS (implied, never explicitly stored) — severely understated. New methodology: CY2027E non-GAAP EPS × forward P/E = 12-month target. Base $250 = $10 × 25x matches sellside most conservative (湖水 Feb 2026 reference). FY2026 actual revenue $215.9B (Q4 $68.1B, gross margin 75%) exceeded all three old scenario assumptions. Quarterly financials also corrected: prior file had dates shifted +1 year and was missing all 4 FY2026 quarters.

2026-02-26 | BULL -> BASE | Q4 FY2026 earnings (Feb 26, 2026) + Q1 FY2027 guidance

Moved from bull to base after Q4 FY2026 earnings. FY2026 revenue $215.9B beat strongly but Q1 FY2027 guidance of $78B (+Q1 H20 write-down impact) creates near-term uncertainty. Gross margin guidance 75% for Q1 FY2027 is solid — Blackwell no longer a margin headwind.

Monitoring Signals

Signal Intelligence

24 signals | 10 cross-company | 5 auto-populate

Leading: 12 signals

Leading | monthly | auto
Taiwan Stock Exchange monthly disclosure
TSM: Monthly consolidated revenue (NT$)

Leading | monthly | auto
TrendForce / DRAMeXchange / industry checks
SKHynix: HBM blended ASP trend

Leading | monthly
DigiTimes / TrendForce / TSMC quarterly commentary
TSM: CoWoS-S/CoWoS-L capacity utilization rate

Leading | quarterly
MSFT/GOOG/AMZN/META earnings calls
Latest: $710B+ collective CSP capex guided for 2026 (2026-02-26)
MSFT: Combined MSFT+GOOG+AMZN+META quarterly capex + forward guidance

Leading | quarterly
ASML earnings reports
ASML: Net bookings (EUV + High-NA systems)

Leading | weekly
AWS/Azure/GCP spot pricing APIs, CloudOptimizer, Vast.ai, Lambda Cloud

Leading | monthly | auto
Taiwan Ministry of Finance monthly trade statistics

Leading | monthly | auto
Korea Customs Service (first 10 business days flash estimate)
SKHynix: Korea semiconductor export value (USD)

Leading | monthly
SEMI.org monthly report

Leading | quarterly
SMCI, Quanta, Wistron/Wiwynn, Foxconn, Dell, HPE earnings/commentary
SMCI: AI server revenue + backlog commentary

Leading | weekly
Grey market, resellers, Vast.ai, CoreWeave secondary, industry contacts

Leading | monthly
BIS Federal Register notices, Commerce Dept speeches, Congressional hearings, diplomatic cables

Concurrent: 5 signals

Concurrent | monthly
Company announcements, Epoch AI compute tracker, ML papers

Concurrent | quarterly
NVIDIA earnings, Arista/Broadcom commentary
ANET: AI networking revenue and 800G/1.6T shipments

Concurrent | quarterly
VRT/GEV/ETN earnings, utility interconnection queues, CBRE data center reports
VRT: Cooling and power infrastructure orders + backlog

Concurrent | weekly | auto
DRAMeXchange / TrendForce
MU: DDR5 8Gb spot price (USD)

Concurrent | annual
Goldman Sachs / Counterpoint / SemiAnalysis
Latest: ~20% (est. 35% by Q4 2026) (2026-01-15)

Lagging: 7 signals

Lagging | quarterly
NVIDIA earnings
Latest: $35.6B Q4 FY2026 (+68% YoY); FY2026 total $193.7B (2026-02-26)

Lagging | quarterly
NVIDIA earnings
Latest: $8.2B Q3 FY2026 (+162% YoY) (2026-02-26)

Lagging | quarterly
NVIDIA earnings
Latest: 75.0% Q4 FY2026 (normalized post H20 write-down) (2026-02-26)

Lagging | quarterly
Mercury Research / SemiAnalysis
Latest: ~85% data center AI accelerator share (2025-12-01)

Lagging | quarterly
NVIDIA earnings
Latest: $18.5B FY2026 (8.6% of revenue, down from 9.2%) (2026-02-26)

Lagging | quarterly
NVIDIA earnings
Latest: $34.9B Q4 FY2026; $96.6B FY2026 full year (2026-02-26)

Lagging | quarterly
AMD earnings, hyperscaler commentary

Red Team Analysis
High Conviction | Reviewed 2026-03-17
9 sections | 2 warnings

Post-GTC 2026 thesis update is comprehensive and well-sourced. The upgrade from buy to strong-buy is justified by the $1T order book (verified by CNBC, Wells Fargo, Goldman), Vera Rubin 7-chip platform in production (NVIDIA official, Tom's Hardware, VideoCardz), and Samsung foundry diversification (Dataconomy, SamMobile, TrendForce). Scenario math passes all checks. Key qualifications: (1) '$1T order book' should distinguish committed POs from demand projections per Moor Insights caveat; (2) Groq acquisition date confirmed Dec 24, 2025 (GPT-5.4 verified); (3) switching-cost moat still rated 'strong' despite 48% revenue-weighted erosion (unchanged from prior review's challenge). Overall: endorsed with minor factual qualifications.

Core Thesis | CORROBORATES | info | high confidence

The reframing from 'dominant AI GPU supplier' to 'sole vertically integrated AI infrastructure platform' is empirically supported by GTC 2026 announcements.

GTC 2026 revealed 7 chip types (Rubin GPU, Vera CPU, NVLink 6 Switch, ConnectX-9, BlueField-4, Spectrum-6, Groq 3 LPU) across 5 rack configurations — confirmed by NVIDIA's official developer blog ('Seven Chips, Five Rack-Scale Systems, One AI Supercomputer'). No competitor ships a comparable integrated system. AMD has GPU + CPU but lacks LPU, DPU, NIC, and switch ASICs. The 'sole vertically integrated AI infrastructure platform' claim is defensible. Minor note: NVIDIA's own platform blog counts '6 new chips' (excluding Groq 3 as acquired IP), while the POD blog counts 7 total. Both are technically correct.

Evidence

web-source | NVIDIA Developer Blog: 'Vera Rubin POD: Seven Chips, Five Rack-Scale Systems, One AI Supercomputer'
web-source | NVIDIA Developer Blog: 'Inside the Vera Rubin Platform: Six New Chips, One AI Supercomputer' (counts 6 new, excluding acquired Groq 3)
Moat Analysis | FLAGS | warning | high confidence

Switching-cost moat still rated 'strong' despite prior review challenge. Revenue-weighted erosion concern remains valid — 48% of DC revenue from customers actively diversifying.

The prior red team review (2026-03-04) challenged the switching-cost 'strong' rating because 48% of DC revenue comes from customers building escape routes (Google TPU, Amazon Trainium, MSFT Maia, AMD-OpenAI, AMD-Meta). This update did not change the switching-cost moat entry — it remains exactly as before with 'strong' rating and the same text. The new moat layers (systems integration, Groq 3 inference) are legitimate additions, but they don't address the switching-cost bifurcation. Recommendation: either downgrade switching-cost to 'moderate' on a revenue-weighted basis, or add explicit language that 'strong' applies to the enterprise segment (52% of DC revenue) while hyperscaler switching costs are 'moderate.'

Evidence

web-source | AMD-Meta 6GW deal (Feb 24, 2026) + AMD-OpenAI 6GW deal (Oct 2025) = 12GW total from two top-5 NVDA customers
Scenario Analysis | CORROBORATES | info | high confidence

Scenario math passes all checks. Probability shift (bull 30→35%, bear 20→15%) is modest and well-justified by $1T order book.

Bull: $12.0 × 28x = $336.0 ✓. Base: $10.0 × 25x = $250.0 ✓. Bear: $7.5 × 20x = $150.0 ✓. Probabilities sum to 100% (0.35 + 0.50 + 0.15 = 1.00). PW return: (0.35 × 82%) + (0.50 × 35%) + (0.15 × -19%) = 28.7% + 17.5% - 2.85% = 43.35% ≈ ~44% ✓. The probability shift is conservative — $1T order book arguably justifies a larger bull probability increase. Price targets unchanged despite GTC; this is disciplined (avoids post-event euphoria bias). Base case upper bound raised from $380B to $400B, supported by order backlog visibility.

Evidence

web-source | Wells Fargo: $1T order visibility 'beats the bogey' — supports bull probability increase
Catalysts | CORROBORATES | info | high confidence

GTC catalyst correctly marked as confirmed. New catalysts (Vera Rubin cloud GA, CoWoS-L expansion, AMD MI450 deployment) are well-chosen with appropriate timelines.

The catalyst list is now comprehensive: 1 confirmed (GTC), 2 positive pending (Vera Rubin GA H2 2026, CoWoS expansion Q2 2026), 1 uncertain (Q1 FY2027 earnings May 28), 1 negative (export controls Jun 2026), 1 negative (AMD MI450 Jul 2026). The AMD MI450 inclusion as a negative catalyst addresses the prior review's criticism about omitting it. Timeline for Vera Rubin cloud GA (H2 2026) is consistent with GTC announcements (MSFT already running NVL72, AWS deploying 1M+ GPUs).

Evidence

web-source | NVIDIA Newsroom: 'Vera Rubin Opens Agentic AI Frontier' — confirms H2 2026 shipping timeline
Risks | CORROBORATES | info | high confidence

Risk coverage is thorough and improved. New Groq 3 execution risk appropriately added. Technology risk correctly updated for CoWoS-L 4x reticle complexity.

Eight risks spanning concentration, competitive (3), technology, regulatory, execution, and macro. The addition of Groq 3 execution risk ($20B goodwill impairment) demonstrates intellectual honesty — most post-GTC analyses only discuss Groq 3 positively. The CoWoS-L 4x reticle risk is technically sound: 12 HBM4 stacks per Rubin GPU is indeed unprecedented packaging complexity. The mitigants are credible (Samsung diversification, Blackwell bridge product). One gap: no explicit risk for Vera CPU competitive reception against Intel Xeon — Jensen is directly attacking Intel's $60B+ server CPU market, which could invite a competitive response.

Evidence

Web source: Tom's Hardware: Vera CPU 88-core details — positioned to compete with AMD and Intel server CPUs (source)
Memo: Conviction · FLAGS · medium confidence

Buy → strong-buy upgrade is directionally justified but warrants qualification: the $1T figure may include demand projections, not just committed POs.

The upgrade from buy to strong-buy is driven by three factors: (1) $1T order book doubling demand visibility, (2) 7-chip platform strengthening moat, (3) Samsung foundry diversification. Factors 2 and 3 are unambiguous positives. Factor 1 requires qualification: Moor Insights & Strategy CEO Patrick Moorhead noted that '$1 trillion is more of a demand projection than a firm revenue forecast.' The thesis states '$1T is committed purchase orders, not expressions of interest' (in the macro risk mitigant) — this may be stronger language than the evidence supports. Jensen said 'purchase orders' but the composition between firm contracts and projected demand is unclear. Recommendation: qualify the $1T language as 'purchase orders and demand projections' until NVIDIA provides backlog detail in quarterly filings.

Evidence

Web source: CNBC: Jensen Huang 'expects purchase orders between Blackwell and Vera Rubin to reach $1 trillion through 2027' (source)
Web source: Patrick Moorhead (Moor Insights): '$1 trillion is more of a demand projection than a firm revenue forecast' (source)
Memo: Variant Perception · CORROBORATES · high confidence

Updated variant view — 'platform revenue vs GPU share' framework — is empirically strengthened by GTC 2026 seven-chip reveal.

The pre-GTC thesis argued the market misprices NVDA by using 'GPU share × GPU ASP' as the valuation framework. GTC 2026 made this argument verifiable: NVDA now ships 7 chip types across 5 rack configurations, with revenue streams spanning compute (Rubin GPU), inference (Groq 3), networking (NVLink 6/Spectrum-6), CPU (Vera), and storage/security (BlueField-4). The variant perception is no longer just analytical — it is now an observable product reality. The key divergence on supply-as-binding-constraint ($1T demand vs CoWoS/HBM4 supply) is well-articulated and supported by TSMC capacity data.

Evidence

Web source: Goldman Sachs post-GTC: maintained Buy, noting Jensen's remarks address two core investor concerns (demand sustainability and inference transition) (source)
Memo: Moat Verdict · CORROBORATES · high confidence

Moat trajectory 'compounding' is now better supported with additional layers (systems integration, Groq 3 inference, Nemotron Coalition).

The prior review challenged 'compounding' as overstating given AMD 12GW deals. The updated thesis addresses this by adding two new moat layers: (1) systems-level integration (7-chip platform that AMD cannot replicate), (2) Groq 3 dedicated inference hardware that directly counters the ASIC threat. These additions are substantive, not cosmetic — NVDA genuinely expanded its moat surface area at GTC 2026. The 'compounding' trajectory is now more defensible because the moat is widening across more dimensions (networking + systems + inference + software) even as the compute-layer moat faces erosion. The 5-year durability horizon is reasonable given Feynman roadmap visibility (TSMC A16).

Evidence

Web source: NextPlatform: 'Nvidia Finally Admits Why It Shelled Out $20 Billion For Groq' — validates Groq 3 as strategic moat investment (source)
Memo: Valuation Setup · CORROBORATES · high confidence

Valuation math is accurate. Post-GTC analyst upgrades support the base case target of $250.

~18.5x CY2027E at ~$185 checks out ($185/$10 = 18.5x). Consensus PT $263-274 is above the thesis base case $250, meaning the thesis is actually more conservative than the street post-GTC — this is appropriate discipline. Jeff Pu at $295, Dan Ives bullish, 38 Strong Buys, Goldman maintaining Buy — all directionally support the strong-buy upgrade. PW expected return of ~44% is correctly calculated. The comparison to AMAT 20x and LRCX 22x remains valid and is even more compelling post-GTC given the platform narrative.
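The multiple math here can be sanity-checked in a short sketch; the $185 price and $10 CY2027E non-GAAP EPS are the memo's own inputs, not new data:

```python
# Forward-multiple and implied-upside check for the valuation setup.
price, eps_cy2027 = 185.0, 10.0      # stated in the memo
fwd_pe = price / eps_cy2027          # forward P/E on CY2027E non-GAAP EPS
upside = 250.0 / price - 1.0         # implied upside to the $250 base target
print(f"{fwd_pe:.1f}x, {upside:.0%}")  # → 18.5x, 35%
```

The ~35% implied upside to the base target is the same figure used as the base-scenario return in the probability-weighted calculation.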

Evidence

Web source: Dan Ives (Wedbush): NVDA 'alone at the top of the AI mountain' — confidence boost post-GTC (source)

Cross-Company References

Ticker | Field | Comparison
TSM | CoWoS-L capacity | Vera Rubin requires 4x reticle CoWoS-L — far more complex than Blackwell. TSMC expanding to 130-150K wafers/month by end 2026. NVDA secures >60% of TSMC CoWoS output.
TSM | Process node | Rubin GPU on TSMC N3P (full node shrink from Blackwell 4NP). Feynman previewed on TSMC A16 (1.6nm). NVDA is TSMC's largest AI customer.
AMD | Competitive threat | AMD has 12GW committed (OpenAI + Meta) but only ships GPU + CPU. Cannot match NVDA's 7-chip platform or NVLink 6 at 260 TB/s. Systems-level moat gap widened at GTC 2026.
Samsung | Foundry diversification | Samsung 4nm is sole fab for Groq 3 LPU — first NVIDIA DC chip not on TSMC. Also HBM4 supplier. Meaningful foundry diversification that reduces single-source risk.
MU | HBM4 supply | Rubin GPU requires 12 HBM4 stacks (288GB) per GPU. Micron confirmed as Vera Rubin HBM4 supplier alongside SK Hynix and Samsung. Massive HBM4 volume driver.
WHAT CHANGED: Q3 → Q4 FY2026
📈 Revenue +19.5% QoQ ($57.0B → $68.1B) · THESIS-LINKED

Revenue grew 19.5% QoQ to $68.1B.

Revenue grew 19.5% QoQ to $68.1B, above the 7Q average of 14.8%.

📈 Diluted EPS +35.4% QoQ ($1.30 → $1.76) · OUTLIER

Diluted EPS grew 35.4% QoQ to $1.76.

Diluted EPS rose 35.4% QoQ to $1.76.

📈 Net Margin +7.1pp (56.0% → 63.1%) · OUTLIER

Net Margin expanded 710bp QoQ to 63.1%.

Research Coverage

100%

Profile

Company description, market, key metrics

Complete

Financials

8Q / 2Y

Complete

Thesis

3 scenarios

Complete

Relationships

10 tracked links

Complete

Notes

6 entries

Complete

Dependency Map

Supplier Criticality Matrix

Use this map to identify which upstream earnings may change your view first.

sole-source
primary
secondary
commodity
Disclosed — regulatory filing · Confirmed — company-stated · Estimated — third-party sourced
Ticker | Product | Criticality | Spend
SKHynix | HBM4 memory for Vera Rubin (288GB/GPU, 22 TB/s BW, 12 stacks per GPU via CoWoS-L) | primary | $10.0B
MU | HBM4 memory for Vera Rubin (secondary supplier ramping) | secondary | $5.0B
TSM | Wafer fabrication (N3P for Rubin GPU/Vera CPU, 4NP for Blackwell) + CoWoS-L 4x reticle packaging | sole-source | $22.0B
AMKR | Advanced packaging (2.5D CoWoS alternative) | secondary | -
Samsung | Groq 3 LPU fabrication (Samsung 4nm) + HBM4 memory (tertiary supplier) | secondary | $3.0B
Sources: GTC 2026 investor-pres (TSM, SKHynix, MU, Samsung); FY2026 est. (AMKR)

Customer Concentration

Top customers by reported/estimated exposure.

FY2026 10-K

Workbench

Financial Workbench

Financial Summary

Metric | Q4 FY2026 | Q3 FY2026 | Q2 FY2026 | Q1 FY2026 | Q4 FY2025 | Q3 FY2025 | Q2 FY2025 | Q1 FY2025
Revenue Metrics
Revenue | $68,127 | $57,006 | $46,743 | $44,062 | $39,331 | $35,082 | $30,040 | $26,044
Gross Profit | $51,093 | $41,849 | $33,853 | $26,668 | $28,723 | $26,156 | $22,574 | $20,406
Gross Margin % | 75.0% | 73.4% | 72.4% | 60.5% | 73.0% | 74.6% | 75.1% | 78.4%
Profitability
Operating Income | $44,299 | $36,010 | $28,440 | $21,638 | $24,034 | $21,869 | $18,642 | $16,909
Op Margin % | 65.0% | 63.2% | 60.8% | 49.1% | 61.1% | 62.3% | 62.1% | 64.9%
Net Income | $42,960 | $31,910 | $26,422 | $18,775 | $22,091 | $19,309 | $16,599 | $14,881
Net Margin % | 63.1% | 56.0% | 56.5% | 42.6% | 56.2% | 55.0% | 55.3% | 57.1%
EPS | $1.76 | $1.30 | $1.08 | $0.76 | $0.89 | $0.78 | $0.67 | $0.60
EBITDA
Cash Flow
Operating Cash Flow | $36,190 | $23,751 | $15,365 | $27,414 | $16,628 | $17,627 | $14,488 | $15,345
Capex | $1,284 | $1,636 | $1,895 | $1,227 | $1,077 | $813 | $977 | $369
Free Cash Flow | $34,906 | $22,115 | $13,470 | $26,187 | $15,551 | $16,814 | $13,511 | $14,976
Shares (B) | 24.4 | 24.5 | 24.5 | 24.6 | 24.7 | 24.8 | 24.8 | 24.9
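As a cross-check, the QoQ growth figures cited in the "What Changed" section can be recomputed from the revenue row of this table. This is a verification sketch (revenue in $M, reordered oldest-first), not part of the source data:

```python
# Recompute the QoQ revenue growth cited in "What Changed" from the
# quarterly revenue row above (Q1 FY2025 → Q4 FY2026, in $M).
revenue = [26044, 30040, 35082, 39331, 44062, 46743, 57006, 68127]

# One growth observation per consecutive quarter pair (7 in total).
qoq = [cur / prev - 1.0 for prev, cur in zip(revenue, revenue[1:])]

latest = qoq[-1]              # Q3 FY2026 → Q4 FY2026
avg_7q = sum(qoq) / len(qoq)  # trailing 7-quarter average

print(f"{latest:.1%}")  # → 19.5%, matching the headline figure
print(f"{avg_7q:.1%}")  # → 14.8%, matching the cited 7Q average
```

Both cited figures (+19.5% QoQ, 14.8% 7Q average) reproduce exactly from the table, which is a useful consistency check between the workbench table and the change-detection annotations.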
▸ Relationship Network

Coverage Gap Notice

In edges only: AMKR, META

In relationships only: AMD, UBER

Relationship Network

Force-directed network. Edge color = relationship type, edge width = significance.

supplier · customer · competitor · partner · investor · investee
▸ Relationship Ledger

Relationship Ledger

Target | Type | Significance | Description | Source

TSM

TSMC

supplier · critical

Sole leading-edge foundry partner. Manufactures all NVIDIA GPUs on 4N/5nm (H100) and 4NP (B200) process nodes. CoWoS advanced packaging is the primary supply bottleneck.

Contracts / Investments / History

Contracts: 2

- Long-term wafer supply agreement for leading-edge nodes (4N, 4NP, N3E) (active)

- CoWoS-L packaging capacity reservation for B200/GB200 (active)

Investments: 0

History: 2

- 2024-01-01: TSMC began 4NP mass production for B200

- 2025-06-01: CoWoS capacity doubled at TSMC

FY2026 10-K

SKHynix

SKHynix

supplier · critical

Primary HBM3E memory supplier with 50%+ share of NVIDIA's HBM allocation. Critical for B200 (8x HBM3E stacks per GPU, 192GB).

Contracts / Investments / History

Contracts: 1

- HBM3E supply agreement with volume guarantees (active)

Investments: 0

History: 2

- 2024-03-01: SKHynix qualified HBM3E for H100/H200

- 2025-09-01: SKHynix began HBM4 samples for next-gen GPU

Q4 FY2025 earnings-call

MSFT

Microsoft

customer · critical

Largest single customer (~15% of revenue). Azure is the primary cloud platform for NVIDIA GPU instances. Deep partnership on AI infrastructure.

Contracts / Investments / History

Contracts: 1

- Multi-year GPU supply agreement for Azure AI infrastructure (active)

Investments: 0

History: 1

- 2023-01-01: Microsoft committed $10B+ to OpenAI, driving massive GPU demand

FY2026 est.

AMD

AMD

competitor · major

Primary GPU competitor. 12GW total committed (OpenAI 6GW + Meta 6GW, H2 2026). MI450 + Helios rack architecture. However, AMD has GPU + CPU only — cannot match NVDA's 7-chip platform (no LPU, no DPU, no switch ASIC, no photonics). NVLink 6 at 260 TB/s is 2+ generations ahead of AMD interconnect.

Contracts / Investments / History

Contracts: 0

Investments: 0

History: 3

- 2024-12-01: AMD launched MI300X, first credible AI GPU competitor

- 2025-10-06: AMD-OpenAI 6GW partnership announced (MI450, H2 2026)

- 2026-02-24: AMD-Meta 6GW partnership announced (custom MI450, Helios rack, H2 2026)

2026 report

AMZN

Amazon (AWS)

customer · critical

Second largest customer (~13% of revenue). AWS P5/P6 GPU instances powered by NVIDIA. Also developing Trainium custom ASIC as potential substitute.

Contracts / Investments / History

Contracts: 0

Investments: 0

History: 2

- 2025-01-01: AWS launched P6 instances with B200 GPUs

- 2025-09-01: Amazon Trainium3 announced

FY2026 est.

GOOG

Alphabet (GCP)

customer · major

~10% of revenue via GCP A3/A3+ GPU instances. Google also develops TPU custom ASICs, making them both customer and competitive threat.

Contracts / Investments / History

Contracts: 0

Investments: 0

History: 1

- 2025-04-01: Google announced TPUv6 with competitive training performance

FY2026 est.

MU

Micron Technology

supplier · major

Secondary HBM3E supplier qualifying for B200+. Ramping HBM3E production to diversify NVIDIA's memory supply away from SKHynix concentration.

Contracts / Investments / History

Contracts: 1

- HBM3E qualification and volume ramp agreement (active)

Investments: 0

History: 0

FY2026 est.

SMCI

Super Micro Computer

customer · major

Key GPU server integrator. Designs and assembles AI server racks using NVIDIA GPUs + NVLink. ~3% of NVIDIA revenue. Vera Rubin-based systems shipping H2 2026.

Contracts / Investments / History

Contracts: 0

Investments: 0

History: 0

FY2026 est.

UBER

Uber Technologies

customer · major

GTC 2026 partnership: NVIDIA + Uber scaling world's largest Level 4 robotaxi network. DRIVE AGX Hyperion 10 + DRIVE AV software + Alpamayo reasoning model. 28 cities, 4 continents by 2028, targeting 100K vehicles.

Contracts / Investments / History

Contracts: 1

- DRIVE Hyperion L4 autonomous driving platform deployment for Uber robotaxi fleet (active)

Investments: 0

History: 1

- 2026-03-17: NVIDIA-Uber robotaxi partnership announced at GTC 2026

GTC 2026 investor-pres

Samsung

Samsung Electronics

supplier · major

Sole foundry for Groq 3 LPU on Samsung 4nm — first NVIDIA data center chip not fabricated by TSMC. Also tertiary HBM4 supplier for Vera Rubin GPU. Meaningful foundry diversification.

Contracts / Investments / History

Contracts: 2

- Samsung 4nm wafer fabrication for Groq 3 / LP30 LPU chips (active)

- HBM4 memory supply for Vera Rubin (tertiary behind SK Hynix and Micron) (active)

Investments: 0

History: 2

- 2026-02-01: Samsung began mass production of HBM4 for Vera Rubin

- 2026-03-17: Samsung confirmed as sole fab for Groq 3 LPU at GTC 2026

GTC 2026 investor-pres
▸ Research Notes (6)

Research Notes Timeline

GTC 2026: Vera Rubin Platform — 7 Chips, $1T Order Book, Agentic AI Era

NVIDIA GTC 2026 keynote (March 17). Jensen Huang unveiled the Vera Rubin platform — the most comprehensive product launch in NVIDIA history.

**Vera Rubin Platform (7 chips, all in full production, shipping H2 2026):**
- Rubin GPU: 336B transistors (dual-die), TSMC N3P, 288GB HBM4, 22 TB/s bandwidth, 50 PFLOPS FP4
- Vera CPU: 88 custom Arm Olympus cores, 1.5 TB LPDDR5X, first CPU with FP8 — directly targeting Intel Xeon
- NVLink 6 Switch: 3.6 TB/s per GPU, 260 TB/s rack bandwidth
- ConnectX-9 SuperNIC + BlueField-4 DPU + Spectrum-6 Ethernet Switch
- Groq 3 LPU: 500MB on-chip SRAM, 150 TB/s SRAM bandwidth, Samsung 4nm — dedicated decode/inference

**Five rack configurations:** NVL72 (training), Groq 3 LPX (decode), Vera CPU Rack (RL/orchestration), BlueField-4 STX (KV-cache), Spectrum-6 SPX (networking). Full POD: 40 racks, 1,152 Rubin GPUs, ~20,000 NVIDIA dies, 60 exaflops.

**Performance claims:** 10x perf/watt vs Grace Blackwell. 35x inference throughput/MW (Vera Rubin + Groq combined). Token cost reduction up to 10x.

**$1 trillion order book through 2027** — doubled from $500B at GTC 2025. Directly dispels AI capex peak concerns.

**Cloud partners confirmed:** Microsoft Azure (first NVL72 running), AWS (1M+ NVIDIA GPUs deployed), Google Cloud, Oracle Cloud, CoreWeave, Vultr. OEMs: Dell, HPE, Lenovo, Supermicro.

**Feynman architecture preview (next-gen):** TSMC A16 (1.6nm), Rosa CPU, LP40 LPU, Kyber interconnect with co-packaged optics.

**Autonomous driving:** NVIDIA + Uber robotaxi partnership — 28 cities, 4 continents by 2028, 100K vehicles. BYD, Hyundai/Kia, Nissan, Geely, Isuzu adopting DRIVE Hyperion for Level 4.

**Agentic AI software:** NemoClaw/OpenClaw framework, Nemotron Coalition (Mistral, Perplexity, Cursor, LangChain, Black Forest Labs — "billions" committed). Six frontier model families.

**DLSS 5:** AI-powered neural rendering for gaming, Fall 2026 driver update.

**Supply chain implications:**
- TSMC N3P (full node shrink) + 4x reticle CoWoS-L with 12 HBM4 stacks = massive packaging demand
- Samsung 4nm for Groq 3 = meaningful foundry diversification away from TSMC sole-source
- HBM4 demand from SK Hynix, Samsung, Micron all confirmed
- Every Vera Rubin POD = ~20,000 dies requiring advanced packaging
- Feynman on TSMC A16 gives 2-generation roadmap visibility

**Thesis impact:** Strongly bullish. $1T order book validates bull case revenue trajectory. 7-chip systems-level platform creates a moat AMD cannot replicate (they ship GPU + CPU only). Groq 3 LPU addresses inference commoditization risk directly. Uber/automotive opens structural growth beyond data center. Samsung diversification partially mitigates TSMC sole-source risk.

Tags: GTC-2026, vera-rubin, groq-3, order-book, platform-moat, autonomous-driving, feynman · Source: GTC 2026 investor-pres

Deep Research: NVDA — AI Platform Monopoly at a 5-Year P/E Low

Completed deep research on NVIDIA Corporation.

**Business & Revenue:** Dominant AI GPU platform with 85%+ data center AI accelerator share. FY2026 revenue $215.9B (+65% YoY). Data Center $193.7B (90% of revenue, +68% YoY). Fabless model through TSMC; $96.6B FCF (44.7% margin). Gaming $16B (7%), ProViz $3.2B, Auto $2.3B.

**Financial Trajectory:** Q4 FY2026 revenue $68.1B with 75.0% GM (normalized post-Q1 H20 write-down). Q1 FY2027 guided $78B — annualized run-rate >$312B. FY2026 non-GAAP EPS ~$4.97 (implied from quarterly data). R&D $18.5B (8.6% of revenue, declining intensity). FCF $96.6B vs NI $120.1B — gap is working capital investment for Blackwell ramp, not accrual issues.

**Core Thesis:** Buy at $181. At 18.1x CY2027E non-GAAP EPS ($10), NVDA is at its 5-year P/E low (3-year avg 60.7x, 10-year median 52.8x) despite guiding Q1 at $78B with 75% GM. The market has pre-emptively de-rated for ASIC disruption and growth deceleration that have not yet materialized in quarterly results. Probability-weighted expected return ~40%.

**Moat Assessment:** Three-layered moat with different trajectories: (1) CUDA ecosystem — stable; 4M+ developers, dominant for 80% of market but hyperscalers building escape routes; (2) NVLink/NVSwitch interconnect IP — compounding; networking revenue $8.2B/quarter (+162% YoY), defines rack-scale architecture even ASIC clusters depend on; (3) Scale economics — stable; $18.5B R&D amortized across massive volume. Overall trajectory: compounding, because the networking/platform layer grows faster than compute-layer erosion. Biggest threat: AMD-OpenAI 6GW deal (H2 2026) — if OpenAI moves >25% of training to AMD, it validates CUDA switching is surmountable.

**Valuation:** Stock $181 (March 2026). 18.1x CY2027E P/E ($10 non-GAAP EPS). Cheaper than semi-equipment companies (AMAT 20x, LRCX 22x) despite 65% YoY revenue growth. TTM GAAP P/E 36.7x at 5-year low. Base target $250 (+38%). PW expected return ~40%.

**Scenarios:** Bull $336 (30%, CY2027E $12 EPS × 28x — FY2027 rev >$420B, Rubin step-change demand). Base $250 (50%, $10 × 25x — FY2027 $350-380B, hyperscaler capex sustains). Bear $150 (20%, $7.5 × 20x — capex cuts >20%, ASIC share accelerates to 20%+).

**Key Catalysts:**
- GTC 2026 (Mar 17): Blackwell Ultra/Vera Rubin reveal — potential 2x inference performance leap
- Q1 FY2027 earnings (May 28): First full Blackwell quarter, DC growth trajectory
- CoWoS expansion (Q2 2026): TSMC 2x capacity eases primary supply bottleneck
- China export control review (Jun 2026): Could restrict H20 (~$10B annual revenue)

**Key Risks:**
- AMD-OpenAI 6GW MI450 partnership (H2 2026): Validates CUDA switching path at most demanding workloads — single most important bear signal
- Hyperscaler custom ASIC adoption: Google TPU, Amazon Trainium, MSFT Maia — Goldman est. 35% inference share by Q4 2026
- Customer concentration: Top 4 hyperscalers = ~48% of DC revenue; any single capex cut has outsized impact
- CoWoS sole-source: TSMC packaging bottleneck; any disruption halts GPU production

**Supply Chain:** Central node in AI semiconductor ecosystem. Sole-source TSMC for CoWoS-L packaging (binding constraint). SKHynix (~50% share) and Micron (~20%) for HBM3E/HBM4. Samsung tertiary HBM. Downstream: MSFT (15%), AMZN (13%), GOOG (10%), META (10%) of DC revenue. NVLink fabric is architecture-agnostic — even ASIC-based clusters depend on NVDA networking.

Tags: deep-research, thesis, AI-GPU, platform-moat, valuation · Source: Q1 CY2026 report

FY2026 Annual Earnings: $215.9B Revenue, Blackwell Full Ramp

FY2026 (ended Jan 2026) total revenue $215.9B, up 65% YoY. Data Center $193.7B (+68%), Gaming $16.0B (+41%), ProViz $3.2B (+70%), Auto $2.3B (+39%). Full-year GAAP gross margin 71.1% depressed by Q1 $4.5B H20 export-control inventory write-down; normalized ~75%. Q4 FY2026 alone was $68.1B revenue with 75.0% GM, demonstrating margin normalization post write-down. Q1 FY2027 guided $78B revenue. Blackwell/GB200 fully ramping and supply-constrained through FY2027. CoWoS packaging capacity expanding 2x in CY2026 but remains the binding constraint.

Tags: earnings, FY2026, data-center, blackwell · Source: FY2026 10-K

FY2025 Annual Earnings: DC Revenue $115B, B200 Ramp Begins

FY2025 (ended Jan 2025) total revenue $130.5B, up 114% YoY. Data Center $115.2B (+142% YoY from $47.5B in FY2024). B200/GB200 first shipments in Q4 FY2025, expected to be supply-constrained through FY2026. CoWoS packaging remains the primary bottleneck. Gross margin 75.1%. R&D rose to $12.0B as CUDA and Blackwell software investment scaled.

Tags: earnings, FY2025, B200, data-center · Source: FY2025 10-K

Blackwell Architecture Deep Dive

B200 uses TSMC 4NP process with 208B transistors. Dual-die design connected via 10 TB/s NV-HBI. Each B200 requires 8x HBM3E stacks (192GB). CoWoS-L packaging from TSMC is sole-source. Inference performance 30x vs H100 for LLMs.

Tags: architecture, B200, CoWoS

Customer Concentration Risk

Top 4 hyperscalers (MSFT, AMZN, GOOG, META) represent ~48% of Data Center revenue. CoreWeave and other GPU cloud providers adding ~10%. Sovereign AI customers (Middle East, SE Asia) are a growing diversification vector but remain <5% of DC.

Tags: risk, concentration, hyperscaler

Source Mix

10-K2
unspecified2
investor-presentation1
industry-report1
▸ Data Quality

Data Quality

Freshness

Profile update: Fresh (6d)

Financial update: Fresh (26d)

Completeness

  • [ok] Profile - Company description, market, key metrics
  • [ok] Financials - 8Q / 2Y
  • [ok] Thesis - 3 scenarios
  • [ok] Relationships - 10 tracked links
  • [ok] Notes - 6 entries

Consistency Checks

  • [warn] In edges only: AMKR, META
  • [warn] In relationships only: AMD, UBER
  • [ok] Thesis has substantive content.
  • [ok] No stale pending catalysts detected.

EDGAR Validation

CLEAN (19d ago)

10 periods, 150/150 fields matched

Source: SEC EDGAR XBRL (CIK 0001045810)