GTC 2026 post-event update. Conviction upgraded buy → strong-buy. Vera Rubin 7-chip platform in production, $1T order book (doubled), Groq 3 LPU on Samsung 4nm, Uber robotaxi L4 partnership. Bull probability increased 30→35%, bear decreased 20→15%. Price targets unchanged — GTC validates existing scenarios rather than creating new upside beyond what was modeled. TAM updated $250B→$400B (AMD estimate). Core thesis rewritten from 'dominant AI GPU' to 'only vertically integrated AI infrastructure platform.' Moat: added systems-level integration + Groq 3 inference layers.
NVDA
NVIDIA Corporation
Market Cap
$4.3T
Dominant AI infrastructure platform. Designs the Vera Rubin system (7 chips: Rubin GPU + Vera CPU + NVLink 6 + ConnectX-...
Key Conclusion
NVIDIA has evolved from a dominant AI GPU supplier into the sole vertically integrated AI infrastructure platform. GTC 2026 revealed Vera Rubin — a 7-chip system (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) in full production, shipping H2 2026. The $1T order book through 2027 (doubled from $500B at GTC 2025) validates multi-year demand. Samsung 4nm fabrication of Groq 3 LPU partially diversifies TSMC sole-source risk. The moat has shifted from 'best GPU + CUDA' to 'only company shipping a complete AI factory in a box' — a systems-level advantage AMD cannot replicate.
Investment Thesis
Core Thesis
BASE scenario. NVIDIA has evolved from a dominant AI GPU supplier into the sole vertically integrated AI infrastructure platform. GTC 2026 revealed Vera Rubin — a 7-chip system (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) in full production, shipping H2 2026. The $1T order book through 2027 (doubled from $500B at GTC 2025) validates multi-year demand. Samsung 4nm fabrication of Groq 3 LPU partially diversifies TSMC sole-source risk. The moat has shifted from 'best GPU + CUDA' to 'only company shipping a complete AI factory in a box' — a systems-level advantage AMD cannot replicate.
Moat Analysis
network-effect
strong. CUDA + NemoClaw/OpenClaw agentic AI ecosystem. 4M+ developers, 800+ AI libraries, now extended by the Nemotron Coalition (Mistral, Perplexity, Cursor, LangChain, Black Forest Labs — 'billions' committed at GTC 2026). Trajectory: expanding at the agent layer — GTC 2026 positioned NVIDIA as the agentic AI platform, not just compute. Enterprise CUDA lock-in stable; hyperscalers still diversifying via Triton, ROCm 7.2, and custom ASICs. AMD-OpenAI (6GW) and AMD-Meta (6GW) validate that a switching path exists at the top tier, but the NemoClaw/OpenClaw agent framework adds a new software lock-in layer above CUDA.
switching-cost
strong. Enterprise AI deployments are built on CUDA; migration requires rewriting inference pipelines and requalifying performance. Trajectory: stable — switching costs remain massive for enterprises and tier-2 clouds, but the top 5 customers (48% of DC revenue) are actively reducing CUDA dependency via custom ASICs (Google TPU at 78% of internal AI servers) and AMD partnerships (OpenAI). The market is bifurcating: switching costs are unbreakable for roughly 80% of customers but weakening for the 48% of revenue those top customers represent.
intangible-assets
strong. NVLink 6 interconnect IP at 3.6 TB/s per GPU / 260 TB/s per rack — a generational leap that competitors cannot match. Vera Rubin NVL72 rack is shipping H2 2026; Feynman generation previewed with Kyber co-packaged optics interconnect. Trajectory: compounding — networking revenue hit $8.2B/quarter (+162% YoY) and NVLink 6 extends the lead further. AMD Helios racks (MI450) are 2+ generations behind on interconnect bandwidth. Additionally, Spectrum-X Photonics and Quantum-X Photonics (co-packaged optics: 3.5x lower power, 63x better signal integrity) create a silicon photonics moat for the scale-out layer.
cost-advantage
strong. Systems-level scale economics: the 7-chip platform (Rubin GPU + Vera CPU + NVLink 6 + Groq 3 LPU + ConnectX-9 + BlueField-4 + Spectrum-6) amortizes $18.5B R&D across $215.9B revenue. No competitor can match the breadth: AMD has GPU + CPU but no LPU, no DPU, no switch ASIC, no photonics. The Groq acquisition ($20B) added dedicated inference silicon that creates a new product category. $1T order book through 2027 provides unmatched volume to amortize next-gen R&D (Feynman on TSMC A16). Samsung 4nm for Groq 3 diversifies manufacturing, reducing TSMC concentration risk.
Competitive Position
Market share
85%
TAM
$400B
SAM %
65%
Share trend
stable
Advantages
- Only vertically integrated AI infrastructure platform (7-chip system: GPU + CPU + LPU + DPU + NIC + switches)
- NVLink 6 at 260 TB/s rack bandwidth — 2+ generations ahead of AMD Helios
- CUDA + NemoClaw/OpenClaw agentic AI ecosystem with Nemotron Coalition
- Groq 3 LPU creates dedicated inference category no competitor has answered
- Feynman roadmap on TSMC A16 (1.6nm) with co-packaged optics gives 2-generation visibility
- $1T order book through 2027 — unprecedented demand visibility
- Samsung foundry for Groq 3 diversifies manufacturing risk
Disadvantages
- TSMC N3P + CoWoS-L dependency for Rubin GPU (still sole-source for primary compute die)
- 12 HBM4 stacks per GPU = extreme packaging complexity and memory supply dependency
- Hyperscaler custom ASIC threat remains (Google TPU, Amazon Trainium, MSFT Maia)
- AMD 12GW committed (OpenAI 6GW + Meta 6GW) — two top-5 customers diversifying
- Customer concentration (top 4 = ~48% of DC revenue)
Scenario Distribution
| Scenario | Probability | Target Price | Multiple | Triggers / Assumptions |
|---|---|---|---|---|
| base | 50% | $250 | P/E CY2027E 25x | Triggers: FY2027E revenue $350–400B (60–85% YoY growth) — $1T order book converts at 35%+ rate | Hyperscaler AI capex sustains at $200B+ collectively through 2026 Assumptions: CY2027E non-GAAP EPS ~$10.0 — sellside most conservative; supported by Q1 FY2027 $78B guide | FY2027E revenue $350–400B (base case raised from $350-380B given $1T order backlog visibility) |
| bull | 35% | $336 | P/E CY2027E 28x | Triggers: FY2027E revenue exceeds $420B — $1T order book supports this if conversion rate >40% | Vera Rubin NVL72 creates step-change demand; H2 2026 ramp exceeds Blackwell curve Assumptions: CY2027E non-GAAP EPS ~$12.0 (implied: ~$520B CY2027 revenue × 56% NM / 24B shares) | FY2027E revenue $420B+ — $1T order book through 2027 provides line-of-sight |
| bear | 15% | $150 | P/E CY2027E 20x | Triggers: Hyperscaler capex cuts >20% despite $1T order book (macro shock / AI ROI disillusionment) | AMD MI450 + custom ASIC combined capture >25% of AI training TAM — 12GW AMD commitment converts Assumptions: CY2027E non-GAAP EPS ~$7.5 (implied: ~$270–300B CY2027 revenue × 56% NM / 24B shares) | FY2027E revenue $260–300B — order cancellations / deferrals from $1T book |
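The targets above all follow the same chain — CY2027E non-GAAP EPS × forward P/E = 12-month target — so the table's arithmetic can be sanity-checked in a few lines (a sketch using only the probabilities, EPS, and multiples stated in the table, with the ~$185 current price assumed elsewhere in this memo):

```python
# Sanity-check the scenario table: target = CY2027E non-GAAP EPS x forward P/E.
CURRENT_PRICE = 185.0  # ~current price per the memo

scenarios = {
    # name: (probability, CY2027E non-GAAP EPS, forward P/E)
    "bull": (0.35, 12.0, 28),
    "base": (0.50, 10.0, 25),
    "bear": (0.15, 7.5, 20),
}

for name, (prob, eps, pe) in scenarios.items():
    target = eps * pe
    upside = target / CURRENT_PRICE - 1
    print(f"{name}: target ${target:.0f}, upside {upside:+.0%} (p={prob:.0%})")

# Probabilities must cover the full distribution.
assert sum(p for p, _, _ in scenarios.values()) == 1.0
```

This reproduces the $336 / $250 / $150 targets and their quoted upsides directly from the stated inputs.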
Catalysts & Risks
Catalyst Timeline
GTC 2026: Vera Rubin platform + $1T order book + Uber robotaxi
high / positive / 2026-03-17. CONFIRMED. Exceeded expectations: 7-chip platform (not just GPU), $1T order book (doubled from $500B), Groq 3 LPU on Samsung 4nm, Uber L4 partnership (28 cities, 100K vehicles by 2028), Nemotron Coalition, Feynman roadmap preview (TSMC A16). Bull trigger validated.
Status: realized
Q1 FY2027 earnings report
high / uncertain / 2026-05-28. First full quarter of Blackwell revenue. Key validation: does $78B guidance convert? Does $1T order book translate to sequential acceleration? Gross margin trajectory with Vera Rubin pre-production costs.
Status: pending
Vera Rubin NVL72 cloud availability (MSFT, AWS, GCP, OCI)
high / positive / 2026-09. H2 2026 shipping confirmed at GTC. Microsoft Azure already running first NVL72. AWS deploying 1M+ NVIDIA GPUs. Revenue conversion from $1T order book begins in earnest.
Status: pending
CoWoS-L 4x reticle capacity expansion at TSMC
high / positive / Q2 2026. Vera Rubin requires CoWoS-L 4x reticle with 12 HBM4 stacks — more complex than Blackwell packaging. TSMC targeting 130-150K wafers/month by end 2026 (up from 75K). Critical for Vera Rubin volume ramp.
Status: pending
China export control review
medium / negative / 2026-06. US Commerce Dept reviewing AI chip export rules. Could restrict H20 sales to China (~$10B annual revenue).
Status: pending
AMD MI450 initial deployment at Meta/OpenAI
high / negative / 2026-07. 12GW total AMD commitment (OpenAI 6GW + Meta 6GW). H2 2026 initial shipments. If training parity demonstrated, validates CUDA switching path. Key bear trigger to monitor.
Status: pending
Risk Matrix (Severity x Probability)
concentration (high/medium)
Top 4 hyperscalers (MSFT, AMZN, GOOG, META) represent ~48% of Data Center revenue. Any single customer capex cut has outsized impact.
Mitigant: Growing diversification: $1T order book includes sovereign AI, enterprise, GPU clouds (CoreWeave, Vultr), and automotive (Uber L4). Nemotron Coalition adds enterprise software revenue layer.
competitive (high/medium)
AMD 12GW total commitment (OpenAI 6GW + Meta 6GW, H2 2026 initial shipments) validates CUDA-to-ROCm switching path at the two most demanding AI workloads. If either moves >25% of training to MI450, CUDA switching cost is proven surmountable.
Mitigant: Vera Rubin's 7-chip platform creates a systems-level moat AMD cannot replicate (GPU only). Groq 3 LPU addresses inference specifically. $1T order book suggests customers are committing to NVDA platform, not just GPUs. AMD deals partially strategic supply diversification.
technology (high/low)
Vera Rubin requires CoWoS-L 4x reticle interposer with 12 HBM4 stacks per GPU — unprecedented packaging complexity. Any yield issues at TSMC or HBM4 supply shortfall delays the ramp.
Mitigant: TSMC expanding CoWoS to 130-150K wafers/month. Samsung 4nm for Groq 3 LPU diversifies manufacturing. Blackwell continues shipping in parallel as bridge product.
competitive (high/medium)
Hyperscaler custom ASICs (Google TPU, Amazon Trainium, Microsoft Maia) could capture 20-30% of AI accelerator TAM by 2027. These customers are both NVDA's largest revenue source and its biggest competitive threat.
Mitigant: Groq 3 LPU directly addresses inference commoditization threat. NVLink 6 fabric is architecture-agnostic — ASIC clusters still need NVDA networking. Even as compute share erodes, platform revenue (networking + software + inference) is defensible.
regulatory (medium/high)
US export controls could further restrict China sales. H20 chip specifically designed for China market at risk.
Mitigant: China revenue already declining as % of total. Sovereign AI demand in Middle East/SE Asia/India partially offsets. GTC 2026 showed broadening geographic customer base.
execution (medium/low)
Groq 3 LPU integration risk: $20B acquisition (Dec 2025) with first product on Samsung 4nm (not TSMC). Novel architecture requires new software stack (NVIDIA Dynamo). If Groq 3 adoption disappoints, $20B goodwill impairment risk.
Mitigant: Groq 3 LPX rack already in five-rack Vera Rubin POD configuration. NVIDIA Dynamo software handles workload splitting (prefill → Rubin, decode → Groq). Samsung 4nm is proven process for inference chips.
competitive (medium/medium)
Inference commoditization: DeepSeek demonstrated 10x efficiency gains. If efficiency improvements compound annually and ASICs capture 30-40% of inference by CY2027, NVDA's inference GPU TAM shrinks even as total inference demand grows.
Mitigant: Groq 3 LPU is NVIDIA's direct answer — dedicated decode hardware with 150 TB/s SRAM bandwidth. Jevons paradox: efficiency gains expand total inference volume. Networking revenue is inference-workload-agnostic. NemoClaw/OpenClaw positions NVDA to monetize agentic AI at platform layer.
macro (medium/low)
AI capex cycle could slow if ROI metrics do not materialize. $1T order book could see cancellations/deferrals in a macro downturn.
Mitigant: $1T includes purchase order visibility and commitments (per Jensen Huang), though analysts note this also includes demand projections through 2027, not exclusively firm POs. Hyperscaler capex guided >$710B for 2026. Uber robotaxi and sovereign AI represent non-cyclical demand diversification.
Investment Memo
Analyst Conviction
Strong Buy
Upgraded from buy to strong-buy post-GTC 2026. At ~$185 (18.5x CY2027E), NVDA just revealed a 7-chip vertically integrated platform with $1T in committed orders through 2027 — yet trades at its 5-year P/E low. The GTC keynote validated the bull case thesis: Vera Rubin is in production (not a paper launch), $1T order book doubled from $500B, and Samsung fab diversification partially addresses the TSMC sole-source risk. Probability-weighted expected return increased to ~44%.
Data as of 2026-03-17
Variant Perception
Consensus View
Post-GTC, consensus is shifting bullish but still anchored to 'GPU company with ASIC risk.' Average PT ~$263-274 (42-48% upside). Street models FY2027E revenue $350-400B with DC growth decelerating to 55-65%. Most analysts upgraded targets (Jeff Pu $295, 38 Strong Buys) but still model NVDA as a GPU supplier, not a platform company.
Our View
GTC 2026 proved NVIDIA is no longer a GPU company — it is the only company shipping a complete AI factory. The 7-chip Vera Rubin platform (GPU + CPU + LPU + DPU + NIC + switch + networking) creates a systems-level moat: you don't buy NVDA GPUs, you buy NVDA data centers. The Groq 3 LPU addresses the inference commoditization thesis directly — NVDA now has dedicated decode hardware that competes with ASICs on their own turf. The Uber robotaxi deal (28 cities, 100K vehicles by 2028) opens a structural non-cyclical revenue stream outside of DC. The market still prices NVDA on 'GPU share × GPU ASP' when the correct framework is 'AI infrastructure platform revenue = compute + inference (Groq 3) + networking (NVLink 6) + software (NemoClaw) + automotive (DRIVE).'
Key Divergence
The $1T order book shifts the debate from 'will demand decelerate?' to 'can NVDA supply fast enough?' Supply (TSMC CoWoS-L 4x reticle, HBM4 ramp) is now the binding constraint, not demand. This is bullish asymmetry: supply constraints cap downside while demand visibility extends upside.
Business Quality
Capital Allocation
R&D ($18.5B, +54% YoY) funds a one-year product cadence: Blackwell → Vera Rubin → Feynman. The Groq acquisition ($20B, Dec 2025) is proving transformative — Groq 3 LPU on Samsung 4nm ships Q3 2026, adding a dedicated inference category that addresses the ASIC threat directly. This follows the Mellanox playbook: networking was a $1B acquisition that became $8.2B/quarter. FY2026 FCF of $96.6B funds both R&D and buybacks. Capital allocation is excellent.
Earnings Quality
FY2026 FCF $96.6B vs NI $120.1B — gap is Blackwell inventory ramp, not accrual issues. Excluding $4.5B H20 write-down, normalized NI ~$124.5B. Q4 FY2026: $34.9B FCF on $43.0B NI. Earnings quality is high and improving.
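The FCF-to-net-income gap quoted above can be expressed as a conversion ratio — a quick sketch using only the FY2026 and Q4 figures stated in this section (the ~80% conversion is consistent with an inventory-driven timing gap rather than accrual problems):

```python
# FCF conversion: free cash flow as a share of net income.
# A ratio below 1.0 driven by the Blackwell inventory ramp is a timing gap,
# not an earnings-quality red flag.
def fcf_conversion(fcf_b: float, net_income_b: float) -> float:
    return fcf_b / net_income_b

fy2026 = fcf_conversion(96.6, 120.1)  # FY2026 full year: FCF $96.6B vs NI $120.1B
q4 = fcf_conversion(34.9, 43.0)       # Q4 FY2026: FCF $34.9B vs NI $43.0B
print(f"FY2026 FCF conversion: {fy2026:.0%}, Q4 FY2026: {q4:.0%}")
```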
Moat Verdict
Primary Threat
AMD 12GW committed (OpenAI + Meta, H2 2026) remains the primary competitive threat. If MI450 demonstrates training parity at scale, it proves CUDA switching is surmountable. Secondary: hyperscaler custom ASICs (Google TPU, Amazon Trainium) for inference. However, GTC 2026 significantly strengthened the moat: 7-chip platform is a systems-level barrier, Groq 3 directly counters inference ASICs, and NVLink 6 at 260 TB/s is 2+ generations ahead.
Valuation Setup
Current Multiple
~18.5x CY2027E non-GAAP EPS ($10.0) at ~$185; ~27x FY2027E consensus EPS; TTM GAAP P/E ~37x
Historical Context
TTM P/E ~37x at 5-year low (3-year avg 60.7x, 10-year median 52.8x). On forward CY2027E at 18.5x, NVDA is cheaper than semi-equipment companies (AMAT 20x, LRCX 22x) despite growing 65%+ YoY and having a $1T order book. The market is pricing 'peak GPU cycle' when GTC just demonstrated 'beginning of platform cycle.'
Risk / Reward
Favorable
Bull $336 = 82% upside (35% probability). Base $250 = 35% upside (50% probability). Bear $150 = 19% downside (15% probability). Probability-weighted expected return: (0.35 × 82%) + (0.5 × 35%) + (0.15 × -19%) = 28.7% + 17.5% - 2.9% = ~44%. Risk-reward improved from ~40% pre-GTC due to bull probability increase (30→35%) and bear decrease (20→15%). The $1T order book provides downside protection that did not exist before.
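The probability-weighted return above can be reproduced directly from the scenario prices (a sketch; the ~$185 current price is the memo's own assumption):

```python
# Probability-weighted expected return from scenario targets vs. current price.
CURRENT_PRICE = 185.0
scenarios = [  # (probability, 12-month target price)
    (0.35, 336.0),  # bull
    (0.50, 250.0),  # base
    (0.15, 150.0),  # bear
]

expected_return = sum(p * (target / CURRENT_PRICE - 1) for p, target in scenarios)
# ~43.3% on unrounded inputs; the memo rounds the component returns and quotes ~44%.
print(f"Probability-weighted expected return: {expected_return:.1%}")
```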
Thesis Invalidation & Monitoring
Bear Triggers
Re-evaluate if any of these occur
- AMD MI450 demonstrates training parity at OpenAI/Meta at >1GW scale: OpenAI moves >25% of training to AMD by Q4 2026
- Q1 FY2027 revenue misses $78B guidance or DC YoY growth decelerates to <40%
- Vera Rubin NVL72 ramp delayed beyond H2 2026 due to CoWoS-L yield issues or HBM4 supply shortage
- Two or more hyperscalers announce AI capex cuts >15% in Q2/Q3 2026 — order cancellations from $1T book
- Normalized gross margin falls below 72% for two consecutive quarters
- Groq 3 LPU adoption disappoints — <$1B revenue in first 4 quarters, signaling failed integration
Bull Confirmation
Add to position if these materialize
- GTC 2026 Vera Rubin platform reveal with $1T order book ✅ CONFIRMED (March 17, 2026)
- Q1 FY2027 revenue beats $78B by 5%+ and Q2 guided above $85B — sustains 10%+ sequential growth
- Vera Rubin NVL72 achieves GA at 3+ cloud providers by Q3 2026 — validates H2 ramp timeline
- Networking revenue sustains >$8B/quarter through FY2027, confirming platform trajectory
- Uber DRIVE Hyperion L4 deployment begins on schedule in H1 2027 — validates automotive revenue stream
- Sovereign AI + enterprise orders total >$30B committed through FY2028
Key Metrics to Watch
Track every quarter
- DC revenue sequential growth rate (must sustain >8% QoQ for base case; <5% = deceleration signal)
- Vera Rubin ramp timeline (MSFT/AWS/GCP availability dates vs H2 2026 commitment)
- Networking revenue as % of DC revenue (rising = platform moat; falling = GPU commoditization)
- Normalized gross margin trajectory (Q4 FY2026: 75.0%; must hold >74% through FY2027)
- Groq 3 LPU revenue contribution (new metric — first reported Q1 FY2027 or later)
- $1T order book conversion rate — track quarterly revenue vs implied backlog drawdown
- AMD MI450 deployment scale at OpenAI/Meta (H2 2026 is the critical window)
- Custom ASIC share of AI accelerator TAM (Goldman est. 35% by Q4 2026)
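The order-book conversion metric in the list above can be tracked mechanically. A hypothetical sketch — the $1,000B book total and the simple drawdown logic are illustrative assumptions, since NVIDIA has not disclosed backlog composition:

```python
# Hypothetical backlog-conversion tracker: revenue recognized against the
# stated $1T ($1,000B) order book through 2027. The book total and the
# simple ratio are assumptions for illustration, not disclosed mechanics.
def conversion_rate(cumulative_rev_b: float, book_b: float = 1000.0) -> float:
    return cumulative_rev_b / book_b

# Base case FY2027E revenue of $350-400B implies 35-40% conversion;
# bull requires >40% ($420B+), matching the scenario triggers above.
assert conversion_rate(350) == 0.35
assert conversion_rate(420) > 0.40
```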
Executive Summary
▸ Thesis Change History
Thesis Change Log
Scenario prices completely rebuilt with correct EPS chain. Old prices ($185/$145/$90) were anchored to ~$4.5 FY2027E EPS (implied, never explicitly stored) — severely understated. New methodology: CY2027E non-GAAP EPS × forward P/E = 12-month target. Base $250 = $10 × 25x matches the most conservative sellside estimate (Feb 2026 reference). FY2026 actual revenue $215.9B (Q4 $68.1B, gross margin 75%) exceeded all three old scenario assumptions. Quarterly financials also corrected: prior file had dates shifted +1 year and was missing all 4 FY2026 quarters.
Moved from bull to base after Q4 FY2026 earnings. FY2026 revenue $215.9B beat strongly but Q1 FY2027 guidance of $78B (+Q1 H20 write-down impact) creates near-term uncertainty. Gross margin guidance 75% for Q1 FY2027 is solid — Blackwell no longer a margin headwind.
▸ Monitoring Signals
Signal Intelligence
Taiwan Stock Exchange monthly disclosure
TrendForce / DRAMeXchange / industry checks
DigiTimes / TrendForce / TSMC quarterly commentary
MSFT/GOOG/AMZN/META earnings calls
$710B+ collective CSP capex guided for 2026 (2026-02-26)
ASML earnings reports
AWS/Azure/GCP spot pricing APIs, CloudOptimizer, Vast.ai, Lambda Cloud
Taiwan Ministry of Finance monthly trade statistics
Korea Customs Service (first 10 business days flash estimate)
SEMI.org monthly report
SMCI, Quanta, Wistron/Wiwynn, Foxconn, Dell, HPE earnings/commentary
Grey market, resellers, Vast.ai, CoreWeave secondary, industry contacts
BIS Federal Register notices, Commerce Dept speeches, Congressional hearings, diplomatic cables
Company announcements, Epoch AI compute tracker, ML papers
NVIDIA earnings, Arista/Broadcom commentary
VRT/GEV/ETN earnings, utility interconnection queues, CBRE data center reports
DRAMeXchange / TrendForce
Goldman Sachs / Counterpoint / SemiAnalysis
~20% (est. 35% by Q4 2026) (2026-01-15)
NVIDIA earnings
$35.6B Q4 FY2026 (+68% YoY); FY2026 total $193.7B (2026-02-26)
NVIDIA earnings
$8.2B Q3 FY2026 (+162% YoY) (2026-02-26)
NVIDIA earnings
75.0% Q4 FY2026 (normalized post H20 write-down) (2026-02-26)
Mercury Research / SemiAnalysis
~85% data center AI accelerator share (2025-12-01)
NVIDIA earnings
$18.5B FY2026 (8.6% of revenue, down from 9.2%) (2026-02-26)
NVIDIA earnings
$34.9B Q4 FY2026; $96.6B FY2026 full year (2026-02-26)
AMD earnings, hyperscaler commentary
▸ Red Team Analysis
Post-GTC 2026 thesis update is comprehensive and well-sourced. The upgrade from buy to strong-buy is justified by the $1T order book (verified by CNBC, Wells Fargo, Goldman), Vera Rubin 7-chip platform in production (NVIDIA official, Tom's Hardware, VideoCardz), and Samsung foundry diversification (Dataconomy, SamMobile, TrendForce). Scenario math passes all checks. Key qualifications: (1) '$1T order book' should distinguish committed POs from demand projections per Moor Insights caveat; (2) Groq acquisition date confirmed Dec 24, 2025; (3) switching-cost moat still rated 'strong' despite 48% revenue-weighted erosion (unchanged from prior review's challenge). Overall: endorsed with minor factual qualifications.
The reframing from 'dominant AI GPU supplier' to 'sole vertically integrated AI infrastructure platform' is empirically supported by GTC 2026 announcements.
GTC 2026 revealed 7 chip types (Rubin GPU, Vera CPU, NVLink 6 Switch, ConnectX-9, BlueField-4, Spectrum-6, Groq 3 LPU) across 5 rack configurations — confirmed by NVIDIA's official developer blog ('Seven Chips, Five Rack-Scale Systems, One AI Supercomputer'). No competitor ships a comparable integrated system. AMD has GPU + CPU but lacks LPU, DPU, NIC, and switch ASICs. The 'sole vertically integrated AI infrastructure platform' claim is defensible. Minor note: NVIDIA's own platform blog counts '6 new chips' (excluding Groq 3 as acquired IP), while the POD blog counts 7 total. Both are technically correct.
Switching-cost moat still rated 'strong' despite prior review challenge. Revenue-weighted erosion concern remains valid — 48% of DC revenue from customers actively diversifying.
The prior red team review (2026-03-04) challenged the switching-cost 'strong' rating because 48% of DC revenue comes from customers building escape routes (Google TPU, Amazon Trainium, MSFT Maia, AMD-OpenAI, AMD-Meta). This update did not change the switching-cost moat entry — it remains exactly as before with 'strong' rating and the same text. The new moat layers (systems integration, Groq 3 inference) are legitimate additions, but they don't address the switching-cost bifurcation. Recommendation: either downgrade switching-cost to 'moderate' on a revenue-weighted basis, or add explicit language that 'strong' applies to the enterprise segment (52% of DC revenue) while hyperscaler switching costs are 'moderate.'
Scenario math passes all checks. Probability shift (bull 30→35%, bear 20→15%) is modest and well-justified by $1T order book.
Bull: $12.0 × 28x = $336.0 ✓. Base: $10.0 × 25x = $250.0 ✓. Bear: $7.5 × 20x = $150.0 ✓. Probabilities sum to 100% (0.35 + 0.50 + 0.15 = 1.00). PW return: (0.35 × 82%) + (0.50 × 35%) + (0.15 × -19%) = 28.7% + 17.5% - 2.85% = 43.35% ≈ ~44% ✓. The probability shift is conservative — $1T order book arguably justifies a larger bull probability increase. Price targets unchanged despite GTC; this is disciplined (avoids post-event euphoria bias). Base case upper bound raised from $380B to $400B, supported by order backlog visibility.
GTC catalyst correctly marked as confirmed. New catalysts (Vera Rubin cloud GA, CoWoS-L expansion, AMD MI450 deployment) are well-chosen with appropriate timelines.
The catalyst list is now comprehensive: 1 confirmed (GTC), 2 positive pending (Vera Rubin GA H2 2026, CoWoS expansion Q2 2026), 1 uncertain (Q1 FY2027 earnings May 28), 1 negative (export controls Jun 2026), 1 negative (AMD MI450 Jul 2026). The AMD MI450 inclusion as a negative catalyst addresses the prior review's criticism about omitting it. Timeline for Vera Rubin cloud GA (H2 2026) is consistent with GTC announcements (MSFT already running NVL72, AWS deploying 1M+ GPUs).
Risk coverage is thorough and improved. New Groq 3 execution risk appropriately added. Technology risk correctly updated for CoWoS-L 4x reticle complexity.
Eight risks spanning concentration, competitive (3), technology, regulatory, execution, and macro. The addition of Groq 3 execution risk ($20B goodwill impairment) demonstrates intellectual honesty — most post-GTC analyses only discuss Groq 3 positively. The CoWoS-L 4x reticle risk is technically sound: 12 HBM4 stacks per Rubin GPU is indeed unprecedented packaging complexity. The mitigants are credible (Samsung diversification, Blackwell bridge product). One gap: no explicit risk for Vera CPU competitive reception against Intel Xeon — Jensen is directly attacking Intel's $60B+ server CPU market, which could invite a competitive response.
Buy → strong-buy upgrade is directionally justified but warrants qualification: the $1T figure may include demand projections, not just committed POs.
The upgrade from buy to strong-buy is driven by three factors: (1) $1T order book doubling demand visibility, (2) 7-chip platform strengthening moat, (3) Samsung foundry diversification. Factors 2 and 3 are unambiguous positives. Factor 1 requires qualification: Moor Insights & Strategy CEO Patrick Moorhead noted that '$1 trillion is more of a demand projection than a firm revenue forecast.' The thesis states '$1T is committed purchase orders, not expressions of interest' (in the macro risk mitigant) — this may be stronger language than the evidence supports. Jensen said 'purchase orders' but the composition between firm contracts and projected demand is unclear. Recommendation: qualify the $1T language as 'purchase orders and demand projections' until NVIDIA provides backlog detail in quarterly filings.
Updated variant view — 'platform revenue vs GPU share' framework — is empirically strengthened by GTC 2026 seven-chip reveal.
The pre-GTC thesis argued the market misprices NVDA by using 'GPU share × GPU ASP' as the valuation framework. GTC 2026 made this argument verifiable: NVDA now ships 7 chip types across 5 rack configurations, with revenue streams spanning compute (Rubin GPU), inference (Groq 3), networking (NVLink 6/Spectrum-6), CPU (Vera), and storage/security (BlueField-4). The variant perception is no longer just analytical — it is now an observable product reality. The key divergence on supply-as-binding-constraint ($1T demand vs CoWoS/HBM4 supply) is well-articulated and supported by TSMC capacity data.
Moat trajectory 'compounding' is now better supported with additional layers (systems integration, Groq 3 inference, Nemotron Coalition).
The prior review challenged 'compounding' as overstating given AMD 12GW deals. The updated thesis addresses this by adding two new moat layers: (1) systems-level integration (7-chip platform that AMD cannot replicate), (2) Groq 3 dedicated inference hardware that directly counters the ASIC threat. These additions are substantive, not cosmetic — NVDA genuinely expanded its moat surface area at GTC 2026. The 'compounding' trajectory is now more defensible because the moat is widening across more dimensions (networking + systems + inference + software) even as the compute-layer moat faces erosion. The 5-year durability horizon is reasonable given Feynman roadmap visibility (TSMC A16).
Valuation math is accurate. Post-GTC analyst upgrades support the base case target of $250.
~18.5x CY2027E at ~$185 checks out ($185/$10 = 18.5x). Consensus PT $263-274 is above the thesis base case $250, meaning the thesis is actually more conservative than the street post-GTC — this is appropriate discipline. Jeff Pu at $295, Dan Ives bullish, 38 Strong Buys, Goldman maintaining Buy — all directionally support the strong-buy upgrade. PW expected return of ~44% is correctly calculated. The comparison to AMAT 20x and LRCX 22x remains valid and is even more compelling post-GTC given the platform narrative.
Cross-Company References
| Ticker | Field | Comparison |
|---|---|---|
| TSM | CoWoS-L capacity | Vera Rubin requires 4x reticle CoWoS-L — far more complex than Blackwell. TSMC expanding to 130-150K wafers/month by end 2026. NVDA secures >60% of TSMC CoWoS output. |
| TSM | Process node | Rubin GPU on TSMC N3P (full node shrink from Blackwell 4NP). Feynman previewed on TSMC A16 (1.6nm). NVDA is TSMC's largest AI customer. |
| AMD | Competitive threat | AMD has 12GW committed (OpenAI + Meta) but only ships GPU + CPU. Cannot match NVDA's 7-chip platform or NVLink 6 at 260 TB/s. Systems-level moat gap widened at GTC 2026. |
| Samsung | Foundry diversification | Samsung 4nm is sole fab for Groq 3 LPU — first NVIDIA DC chip not on TSMC. Also HBM4 supplier. Meaningful foundry diversification that reduces single-source risk. |
| MU | HBM4 supply | Rubin GPU requires 12 HBM4 stacks (288GB) per GPU. Micron confirmed as Vera Rubin HBM4 supplier alongside SK Hynix and Samsung. Massive HBM4 volume driver. |
Revenue grew 19.5% QoQ to $68.1B, above the 7Q average of 14.8%.
Diluted EPS rose 35.4% QoQ to $1.76.
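Both callouts fall out of the 8-quarter series in the Financial Workbench below; a minimal sketch of the calculation:

```python
# QoQ growth from the 8-quarter revenue series ($M), Q1 FY2025 -> Q4 FY2026,
# as reported in the Financial Workbench table.
revenue = [26044, 30040, 35082, 39331, 44062, 46743, 57006, 68127]

qoq = [b / a - 1 for a, b in zip(revenue, revenue[1:])]
latest = qoq[-1]              # Q4 FY2026 vs Q3 FY2026
avg_7q = sum(qoq) / len(qoq)  # trailing 7-quarter average

print(f"Latest QoQ: {latest:.1%}, 7Q average: {avg_7q:.1%}")  # 19.5% vs 14.8%
```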
Research Coverage
100%
Research Coverage
Profile
Company description, market, key metrics
Complete
Financials
8Q / 2Y
Complete
Thesis
3 scenarios
Complete
Relationships
10 tracked links
Complete
Notes
6 entries
Complete
Trend Dashboard
Quarterly Trends
Revenue Growth
Margin Delta (bp QoQ)
Memory Market Context
DRAM spot as HBM demand proxy; NAND wafer for AI storage infrastructure context
DRAM Spot
NAND Wafer
▸ Sparklines + Momentum Strips
Quarterly Trends
Financial Momentum (8Q)
Revenue
$68.1B
Gross Margin %
75.0%
Op Margin %
65.0%
Free Cash Flow
$34.9B
Dependency Map
Supplier Criticality Matrix
Use this map to identify which upstream earnings may change your view first.
| Ticker | Product | Criticality | Spend |
|---|---|---|---|
| SKHynix | HBM4 memory for Vera Rubin (288GB/GPU, 22 TB/s BW, 12 stacks per GPU via CoWoS-L) | primary | $10.0B |
| MU | HBM4 memory for Vera Rubin (secondary supplier ramping) | secondary | $5.0B |
| TSM | Wafer fabrication (N3P for Rubin GPU/Vera CPU, 4NP for Blackwell) + CoWoS-L 4x reticle packaging | sole-source | $22.0B |
| AMKR | Advanced packaging (2.5D CoWoS alternative) | secondary | - |
| Samsung | Groq 3 LPU fabrication (Samsung 4nm) + HBM4 memory (tertiary supplier) | secondary | $3.0B |
Customer Concentration
Top customers by reported/estimated exposure.
Workbench
Financial Workbench
Financial Summary
| Metric ($M unless noted) | Q4 FY2026 | Q3 FY2026 | Q2 FY2026 | Q1 FY2026 | Q4 FY2025 | Q3 FY2025 | Q2 FY2025 | Q1 FY2025 |
|---|---|---|---|---|---|---|---|---|
| Revenue Metrics | ||||||||
| Revenue | $68,127 | $57,006 | $46,743 | $44,062 | $39,331 | $35,082 | $30,040 | $26,044 |
| Gross Profit | $51,093 | $41,849 | $33,853 | $26,668 | $28,723 | $26,156 | $22,574 | $20,406 |
| Gross Margin % | 75.0% | 73.4% | 72.4% | 60.5% | 73.0% | 74.6% | 75.1% | 78.4% |
| Profitability | ||||||||
| Operating Income | $44,299 | $36,010 | $28,440 | $21,638 | $24,034 | $21,869 | $18,642 | $16,909 |
| Op Margin % | 65.0% | 63.2% | 60.8% | 49.1% | 61.1% | 62.3% | 62.1% | 64.9% |
| Net Income | $42,960 | $31,910 | $26,422 | $18,775 | $22,091 | $19,309 | $16,599 | $14,881 |
| Net Margin % | 63.1% | 56.0% | 56.5% | 42.6% | 56.2% | 55.0% | 55.3% | 57.1% |
| EPS | $1.76 | $1.30 | $1.08 | $0.76 | $0.89 | $0.78 | $0.67 | $0.60 |
| EBITDA | — | — | — | — | — | — | — | — |
| Cash Flow | ||||||||
| Operating Cash Flow | $36,190 | $23,751 | $15,365 | $27,414 | $16,628 | $17,627 | $14,488 | $15,345 |
| Capex | $1,284 | $1,636 | $1,895 | $1,227 | $1,077 | $813 | $977 | $369 |
| Free Cash Flow | $34,906 | $22,115 | $13,470 | $26,187 | $15,551 | $16,814 | $13,511 | $14,976 |
| Shares Outstanding | 24.4B | 24.5B | 24.5B | 24.6B | 24.7B | 24.8B | 24.8B | 24.9B |
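The derived figures in the table follow directly from the reported line items. A quick sanity check on the Q4 FY2026 column (all values in $M, taken from the table above):

```python
# Verify derived metrics in the Q4 FY2026 column (all figures in $M).
revenue, gross_profit = 68_127, 51_093
op_income, net_income = 44_299, 42_960
ocf, capex = 36_190, 1_284

gross_margin = gross_profit / revenue   # reported as 75.0%
op_margin = op_income / revenue         # reported as 65.0%
net_margin = net_income / revenue       # reported as 63.1%
fcf = ocf - capex                       # reported as $34,906M

assert round(gross_margin * 100, 1) == 75.0
assert round(op_margin * 100, 1) == 65.0
assert round(net_margin * 100, 1) == 63.1
assert fcf == 34_906
```

All four identities tie out, so the margin and FCF rows are internally consistent with the reported revenue, profit, and cash flow lines.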
▸ Relationship Network
Coverage Gap Notice
In edges only: AMKR, META
In relationships only: AMD, UBER
Relationship Network
Force-directed network. Edge color = relationship type, edge width = significance.
▸ Relationship Ledger
Relationship Ledger
| Target | Type | Significance | Description | Source |
|---|---|---|---|---|
| TSM (TSMC) | supplier | critical | Sole leading-edge foundry partner. Manufactures all NVIDIA GPUs on 4N/5nm (H100) and 4NP (B200) process nodes. CoWoS advanced packaging is the primary supply bottleneck. Contracts (active): long-term wafer supply agreement for leading-edge nodes (4N, 4NP, N3E); CoWoS-L packaging capacity reservation for B200/GB200. History: 2024-01-01 TSMC began 4NP mass production for B200; 2025-06-01 CoWoS capacity doubled at TSMC. | FY2026 10-K |
| SKHynix (SK Hynix) | supplier | critical | Primary HBM3E memory supplier with 50%+ share of NVIDIA's HBM allocation. Critical for B200 (8x HBM3E stacks per GPU, 192GB). Contracts (active): HBM3E supply agreement with volume guarantees. History: 2024-03-01 SK Hynix qualified HBM3E for H100/H200; 2025-09-01 SK Hynix began HBM4 samples for next-gen GPU. | Q4 FY2025 earnings-call |
| MSFT (Microsoft) | customer | critical | Largest single customer (~15% of revenue). Azure is the primary cloud platform for NVIDIA GPU instances. Deep partnership on AI infrastructure. Contracts (active): multi-year GPU supply agreement for Azure AI infrastructure. History: 2023-01-01 Microsoft committed $10B+ to OpenAI, driving massive GPU demand. | FY2026 est. |
| AMD | competitor | major | Primary GPU competitor. 12GW total committed (OpenAI 6GW + Meta 6GW, H2 2026). MI450 + Helios rack architecture. However, AMD has GPU + CPU only; it cannot match NVDA's 7-chip platform (no LPU, no DPU, no switch ASIC, no photonics). NVLink 6 at 260 TB/s is 2+ generations ahead of AMD interconnect. History: 2024-12-01 AMD launched MI300X, first credible AI GPU competitor; 2025-10-06 AMD-OpenAI 6GW partnership announced (MI450, H2 2026); 2026-02-24 AMD-Meta 6GW partnership announced (custom MI450, Helios rack, H2 2026). | 2026 report |
| AMZN (Amazon/AWS) | customer | critical | Second largest customer (~13% of revenue). AWS P5/P6 GPU instances powered by NVIDIA. Also developing Trainium custom ASIC as a potential substitute. History: 2025-01-01 AWS launched P6 instances with B200 GPUs; 2025-09-01 Amazon Trainium3 announced. | FY2026 est. |
| GOOG (Alphabet/GCP) | customer | major | ~10% of revenue via GCP A3/A3+ GPU instances. Google also develops TPU custom ASICs, making it both customer and competitive threat. History: 2025-04-01 Google announced TPUv6 with competitive training performance. | FY2026 est. |
| MU (Micron Technology) | supplier | major | Secondary HBM3E supplier qualifying for B200+. Ramping HBM3E production to diversify NVIDIA's memory supply away from SK Hynix concentration. Contracts (active): HBM3E qualification and volume ramp agreement. | FY2026 est. |
| SMCI (Super Micro Computer) | customer | major | Key GPU server integrator. Designs and assembles AI server racks using NVIDIA GPUs + NVLink. ~3% of NVIDIA revenue. Vera Rubin-based systems shipping H2 2026. | FY2026 est. |
| UBER (Uber Technologies) | customer | major | GTC 2026 partnership: NVIDIA + Uber scaling the world's largest Level 4 robotaxi network. DRIVE AGX Hyperion 10 + DRIVE AV software + Alpamayo reasoning model. 28 cities, 4 continents by 2028, targeting 100K vehicles. Contracts (active): DRIVE Hyperion L4 autonomous driving platform deployment for the Uber robotaxi fleet. History: 2026-03-17 NVIDIA-Uber robotaxi partnership announced at GTC 2026. | GTC 2026 investor-pres |
| Samsung (Samsung Electronics) | supplier | major | Sole foundry for Groq 3 LPU on Samsung 4nm, the first NVIDIA data center chip not fabricated by TSMC. Also tertiary HBM4 supplier for the Vera Rubin GPU. Meaningful foundry diversification. Contracts (active): Samsung 4nm wafer fabrication for Groq 3 / LP30 LPU chips; HBM4 memory supply for Vera Rubin (tertiary behind SK Hynix and Micron). History: 2026-02-01 Samsung began mass production of HBM4 for Vera Rubin; 2026-03-17 Samsung confirmed as sole fab for Groq 3 LPU at GTC 2026. | GTC 2026 investor-pres |
▸ Research Notes (6)
Research Notes Timeline
GTC 2026: Vera Rubin Platform — 7 Chips, $1T Order Book, Agentic AI Era
NVIDIA GTC 2026 keynote (March 17). Jensen Huang unveiled the Vera Rubin platform, the most comprehensive product launch in NVIDIA history.

**Vera Rubin Platform (7 chips, all in full production, shipping H2 2026):**
- Rubin GPU: 336B transistors (dual-die), TSMC N3P, 288GB HBM4, 22 TB/s bandwidth, 50 PFLOPS FP4
- Vera CPU: 88 custom Arm Olympus cores, 1.5 TB LPDDR5X, first CPU with FP8, directly targeting Intel Xeon
- NVLink 6 Switch: 3.6 TB/s per GPU, 260 TB/s rack bandwidth
- ConnectX-9 SuperNIC + BlueField-4 DPU + Spectrum-6 Ethernet Switch
- Groq 3 LPU: 500MB on-chip SRAM, 150 TB/s SRAM bandwidth, Samsung 4nm, dedicated to decode/inference

**Five rack configurations:** NVL72 (training), Groq 3 LPX (decode), Vera CPU Rack (RL/orchestration), BlueField-4 STX (KV-cache), Spectrum-6 SPX (networking). Full POD: 40 racks, 1,152 Rubin GPUs, ~20,000 NVIDIA dies, 60 exaflops.

**Performance claims:** 10x perf/watt vs Grace Blackwell. 35x inference throughput/MW (Vera Rubin + Groq combined). Token cost reduction up to 10x.

**$1 trillion order book through 2027**, doubled from $500B at GTC 2025. Directly dispels AI capex peak concerns.

**Cloud partners confirmed:** Microsoft Azure (first NVL72 running), AWS (1M+ NVIDIA GPUs deployed), Google Cloud, Oracle Cloud, CoreWeave, Vultr. OEMs: Dell, HPE, Lenovo, Supermicro.

**Feynman architecture preview (next-gen):** TSMC A16 (1.6nm), Rosa CPU, LP40 LPU, Kyber interconnect with co-packaged optics.

**Autonomous driving:** NVIDIA + Uber robotaxi partnership: 28 cities, 4 continents by 2028, 100K vehicles. BYD, Hyundai/Kia, Nissan, Geely, Isuzu adopting DRIVE Hyperion for Level 4.

**Agentic AI software:** NemoClaw/OpenClaw framework, Nemotron Coalition (Mistral, Perplexity, Cursor, LangChain, Black Forest Labs; "billions" committed). Six frontier model families.

**DLSS 5:** AI-powered neural rendering for gaming, Fall 2026 driver update.
**Supply chain implications:**
- TSMC N3P (full node shrink) + 4x reticle CoWoS-L with 12 HBM4 stacks = massive packaging demand
- Samsung 4nm for Groq 3 = meaningful foundry diversification away from TSMC sole-source
- HBM4 demand from SK Hynix, Samsung, Micron all confirmed
- Every Vera Rubin POD = ~20,000 dies requiring advanced packaging
- Feynman on TSMC A16 gives 2-generation roadmap visibility

**Thesis impact:** Strongly bullish. The $1T order book validates the bull-case revenue trajectory. The 7-chip systems-level platform creates a moat AMD cannot replicate (AMD fields GPU + CPU only, with no LPU, DPU, switch ASIC, or photonics). Groq 3 LPU addresses inference commoditization risk directly. Uber/automotive opens structural growth beyond the data center. Samsung diversification partially mitigates TSMC sole-source risk.
Deep Research: NVDA — AI Platform Monopoly at a 5-Year P/E Low
Completed deep research on NVIDIA Corporation.

**Business & Revenue:** Dominant AI GPU platform with 85%+ data center AI accelerator share. FY2026 revenue $215.9B (+65% YoY). Data Center $193.7B (90% of revenue, +68% YoY). Fabless model through TSMC; $96.6B FCF (44.7% margin). Gaming $16B (7%), ProViz $3.2B, Auto $2.3B.

**Financial Trajectory:** Q4 FY2026 revenue $68.1B with 75.0% GM (normalized post-Q1 H20 write-down). Q1 FY2027 guided at $78B, an annualized run-rate above $312B. FY2026 non-GAAP EPS ~$4.97 (implied from quarterly data). R&D $18.5B (8.6% of revenue, declining intensity). FCF $96.6B vs net income $120.1B; the gap is working-capital investment for the Blackwell ramp, not accrual issues.

**Core Thesis:** Buy at $181. At 18.1x CY2027E non-GAAP EPS ($10), NVDA sits at its 5-year P/E low (3-year avg 60.7x, 10-year median 52.8x) despite guiding Q1 at $78B with 75% GM. The market has pre-emptively de-rated for ASIC disruption and growth deceleration that have not yet materialized in quarterly results. Probability-weighted expected return ~40%.

**Moat Assessment:** Three-layered moat with different trajectories: (1) CUDA ecosystem: stable; 4M+ developers, dominant for 80% of the market, but hyperscalers are building escape routes. (2) NVLink/NVSwitch interconnect IP: compounding; networking revenue of $8.2B/quarter (+162% YoY) defines rack-scale architecture that even ASIC clusters depend on. (3) Scale economics: stable; $18.5B of R&D amortized across massive volume. Overall trajectory: compounding, because the networking/platform layer grows faster than compute erodes. Biggest threat: the AMD-OpenAI 6GW deal (H2 2026); if OpenAI moves >25% of training to AMD, it validates that CUDA switching is surmountable.

**Valuation:** Stock $181 (March 2026). 18.1x CY2027E P/E ($10 non-GAAP EPS). Cheaper than semi-equipment companies (AMAT 20x, LRCX 22x) despite 65% YoY revenue growth. TTM GAAP P/E of 36.7x is at a 5-year low. Base target $250 (+38%). PW expected return ~40%.
**Scenarios:** Bull $336 (30%; CY2027E $12 EPS × 28x; FY2027 revenue >$420B, Rubin step-change demand). Base $250 (50%; $10 × 25x; FY2027 $350-380B, hyperscaler capex sustains). Bear $150 (20%; $7.5 × 20x; capex cuts >20%, ASIC share accelerates to 20%+).

**Key Catalysts:**
- GTC 2026 (Mar 17): Blackwell Ultra/Vera Rubin reveal; potential 2x inference performance leap
- Q1 FY2027 earnings (May 28): first full Blackwell quarter, DC growth trajectory
- CoWoS expansion (Q2 2026): TSMC 2x capacity eases the primary supply bottleneck
- China export control review (Jun 2026): could restrict H20 (~$10B annual revenue)

**Key Risks:**
- AMD-OpenAI 6GW MI450 partnership (H2 2026): validates a CUDA switching path at the most demanding workloads; the single most important bear signal
- Hyperscaler custom ASIC adoption: Google TPU, Amazon Trainium, MSFT Maia; Goldman est. 35% inference share by Q4 2026
- Customer concentration: top 4 hyperscalers = ~48% of DC revenue; any single capex cut has outsized impact
- CoWoS sole-source: TSMC packaging bottleneck; any disruption halts GPU production

**Supply Chain:** Central node in the AI semiconductor ecosystem. Sole-source TSMC for CoWoS-L packaging (the binding constraint). SK Hynix (~50% share) and Micron (~20%) for HBM3E/HBM4. Samsung tertiary HBM. Downstream: MSFT (15%), AMZN (13%), GOOG (10%), META (10%) of DC revenue. NVLink fabric is architecture-agnostic; even ASIC-based clusters depend on NVDA networking.
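The ~40% probability-weighted return is a straightforward expected value over the three scenario targets against the $181 reference price (note these are this note's pre-GTC probabilities, 30/50/20, not the post-GTC update in the header):

```python
# Probability-weighted expected return from the scenario table above.
price = 181
scenarios = {"bull": (336, 0.30), "base": (250, 0.50), "bear": (150, 0.20)}

expected_price = sum(target * prob for target, prob in scenarios.values())
expected_return = expected_price / price - 1

assert abs(expected_price - 255.8) < 1e-6   # $255.8 PW target
assert 0.40 < expected_return < 0.42        # ~41%, consistent with "~40%"
```

The probabilities sum to 1.0, and the implied PW target of ~$256 against $181 gives roughly +41%, matching the stated figure.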
FY2026 Annual Earnings: $215.9B Revenue, Blackwell Full Ramp
FY2026 (ended Jan 2026) total revenue $215.9B, up 65% YoY. Data Center $193.7B (+68%), Gaming $16.0B (+41%), ProViz $3.2B (+70%), Auto $2.3B (+39%). Full-year GAAP gross margin 71.1% depressed by Q1 $4.5B H20 export-control inventory write-down; normalized ~75%. Q4 FY2026 alone was $68.1B revenue with 75.0% GM, demonstrating margin normalization post write-down. Q1 FY2027 guided $78B revenue. Blackwell/GB200 fully ramping and supply-constrained through FY2027. CoWoS packaging capacity expanding 2x in CY2026 but remains the binding constraint.
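The annual totals tie out against the quarterly Financial Summary table in the Workbench above; a quick aggregation check (figures in $M, Q1 through Q4 of each fiscal year):

```python
# Aggregate quarterly revenue from the Financial Summary table ($M).
fy2026 = [44_062, 46_743, 57_006, 68_127]   # Q1-Q4 FY2026
fy2025 = [26_044, 30_040, 35_082, 39_331]   # Q1-Q4 FY2025

rev_2026, rev_2025 = sum(fy2026), sum(fy2025)
growth = rev_2026 / rev_2025 - 1

assert rev_2026 == 215_938          # ~$215.9B, as reported
assert rev_2025 == 130_497          # ~$130.5B, as reported
assert round(growth * 100) == 65    # +65% YoY
```

The quarterly data reproduces both annual revenue figures and the stated 65% growth rate exactly.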
FY2025 Annual Earnings: DC Revenue $115B, B200 Ramp Begins
FY2025 (ended Jan 2025) total revenue $130.5B, up 114% YoY. Data Center $115.2B (+142% YoY from $47.5B in FY2024). B200/GB200 first shipments in Q4 FY2025, expected to be supply-constrained through FY2026. CoWoS packaging remains the primary bottleneck. Gross margin 75.1%. R&D rose to $12.0B as CUDA and Blackwell software investment scaled.
Blackwell Architecture Deep Dive
B200 uses TSMC 4NP process with 208B transistors. Dual-die design connected via 10 TB/s NV-HBI. Each B200 requires 8x HBM3E stacks (192GB). CoWoS-L packaging from TSMC is sole-source. Inference performance 30x vs H100 for LLMs.
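The per-stack capacity implied by these configurations is constant across generations; a small sketch (the Rubin figures are taken from the Dependency Map above, not from this note):

```python
# Per-stack HBM capacity implied by the stated GPU memory configurations.
b200_gb, b200_stacks = 192, 8      # B200: 192GB over 8x HBM3E stacks
rubin_gb, rubin_stacks = 288, 12   # Rubin: 288GB over 12 stacks (per Dependency Map)

assert b200_gb / b200_stacks == 24.0    # 24GB per HBM3E stack
assert rubin_gb / rubin_stacks == 24.0  # 24GB per HBM4 stack
# So the B200 -> Rubin capacity gain (+50%) comes from stack count (8 -> 12),
# which is why CoWoS-L packaging area, not memory density, is the constraint.
```

Under these figures, capacity per stack is unchanged at 24GB, consistent with the note's emphasis on CoWoS-L packaging as the sole-source bottleneck.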
Customer Concentration Risk
Top 4 hyperscalers (MSFT, AMZN, GOOG, META) represent ~48% of Data Center revenue. CoreWeave and other GPU cloud providers adding ~10%. Sovereign AI customers (Middle East, SE Asia) are a growing diversification vector but remain <5% of DC.
Source Mix
▸ Data Quality
Data Quality
Freshness
Profile update: Fresh (6d)
Financial update: Fresh (26d)
Completeness
- [ok] Profile - Company description, market, key metrics
- [ok] Financials - 8Q / 2Y
- [ok] Thesis - 3 scenarios
- [ok] Relationships - 10 tracked links
- [ok] Notes - 6 entries
Consistency Checks
- [warn] In edges only: AMKR, META
- [warn] In relationships only: AMD, UBER
- [ok] Thesis has substantive content.
- [ok] No stale pending catalysts detected.
EDGAR Validation
CLEAN (19d ago)
10 periods, 150/150 fields matched
Source: SEC EDGAR XBRL (CIK 0001045810)