MatX Raises $500M for LLM-Optimized Chips: The Race to Dethrone NVIDIA Heats Up
MatX just raised $500M in Series B funding to build chips specifically designed for large language models. Led by Jane Street and Situational Awareness LP, this is the largest AI chip funding round of 2026—and a direct challenge to NVIDIA's dominance in AI infrastructure.

MatX just closed a $500 million Series B funding round led by Jane Street and Situational Awareness LP. The company is building the MatX One, an LLM-optimized accelerator chip designed specifically for pre-training, reinforcement learning, and inference. This is the largest AI chip funding announcement of 2026, and it signals a major shift in how the industry thinks about AI compute.
NVIDIA has owned the AI hardware market for years—H100s and H200s power virtually every major LLM training run. But NVIDIA's chips were originally designed for graphics and gaming, then adapted for AI. MatX is taking a different approach: building chips from the ground up specifically for the unique computational patterns of transformer models.
What Makes LLM-Optimized Chips Different?
Training and running large language models involves massive matrix multiplications, attention mechanisms, and memory bandwidth bottlenecks. NVIDIA's GPUs are incredibly powerful, but they're general-purpose. MatX claims the MatX One is purpose-built for these specific workloads.
The company hasn't disclosed detailed technical specs yet, but the pitch is clear: better performance per watt, lower latency for inference, and optimized memory hierarchies for the specific patterns in LLM workloads. If they deliver, it could mean significantly lower costs for companies running AI at scale.
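The memory-bandwidth point is worth making concrete. A rough roofline-style check shows why single-stream LLM inference tends to be bound by memory traffic rather than raw compute, and therefore why a chip's memory hierarchy matters so much. The sketch below is back-of-envelope only; the accelerator numbers in it are illustrative placeholders, not MatX or NVIDIA specs.

```python
# Back-of-envelope roofline check: is batch-1 LLM decode compute-bound
# or memory-bandwidth-bound? All hardware numbers are illustrative.

def arithmetic_intensity_gemv(rows: int, cols: int, bytes_per_weight: int = 2) -> float:
    """FLOPs per byte for a batch-1 matrix-vector product (one decode step).

    Each weight is read once and used in one multiply-add (2 FLOPs),
    so intensity is roughly 2 / bytes_per_weight, independent of shape.
    """
    flops = 2 * rows * cols                       # one multiply-add per weight
    bytes_moved = rows * cols * bytes_per_weight  # weight traffic dominates
    return flops / bytes_moved

# Hypothetical accelerator: 1000 TFLOP/s fp16, 3.5 TB/s memory bandwidth.
peak_flops = 1000e12
peak_bw = 3.5e12
ridge_point = peak_flops / peak_bw  # FLOPs/byte needed to saturate compute

ai = arithmetic_intensity_gemv(8192, 8192)
print(f"decode intensity: {ai:.1f} FLOPs/byte, ridge point: {ridge_point:.0f}")
```

At roughly 1 FLOP per byte, decode sits two orders of magnitude below the ridge point of a chip like the hypothetical one above, so the memory system, not the math units, sets the latency floor. That is the gap a purpose-built memory hierarchy aims to close.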
The Competitive Landscape Is Getting Crowded Fast
MatX isn't alone in challenging NVIDIA. In just the last few weeks:
- SambaNova raised $350M for dataflow accelerators (read our coverage)
- Axelera secured $250M for edge AI chips (see our analysis)
- Taalas announced $169M for model-specific chips it claims outperform H200s at a fraction of the power
This funding wave isn't coincidental. AI infrastructure is now a $100B+ market, and the economics matter. If a startup can deliver 2x better performance per dollar—or even just match NVIDIA at 30% lower cost—they can capture meaningful market share.
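To see why those margins matter, it helps to run the arithmetic. The toy model below converts hardware rental cost and throughput into cost per million tokens; all prices and throughput figures are made-up placeholders, not vendor quotes.

```python
# Toy economics behind "2x performance per dollar".
# All rates and throughputs below are placeholders, not real vendor figures.

def cost_per_million_tokens(hourly_rate_usd: float, tokens_per_second: float) -> float:
    """USD cost to generate one million tokens at a given rental rate."""
    tokens_per_hour = tokens_per_second * 3600
    return hourly_rate_usd / tokens_per_hour * 1e6

# Same hourly price, but the challenger chip pushes twice the tokens.
incumbent = cost_per_million_tokens(hourly_rate_usd=4.0, tokens_per_second=1000)
challenger = cost_per_million_tokens(hourly_rate_usd=4.0, tokens_per_second=2000)
print(f"incumbent: ${incumbent:.2f}/M tokens, challenger: ${challenger:.2f}/M tokens")
```

Doubling throughput at the same price halves the per-token bill, and at the scale of a large inference fleet that difference compounds into the kind of savings that moves buyers off an incumbent.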
Why Jane Street and Situational Awareness LP Are Betting Big
Jane Street is a quantitative trading firm known for deep technical bets. Situational Awareness LP is a newer fund focused specifically on AI infrastructure and safety. Both have direct exposure to AI compute costs—they're not just investing in AI; they're using it at scale.
That insider perspective matters. When your portfolio companies or trading systems are spending millions on compute, you understand the pain points viscerally. MatX's $500M round isn't just capital—it's validation from sophisticated users who know exactly what they need.
What This Means For Your Business
If you're running AI workloads in production, the chip competition is good news:
- Lower inference costs: As alternatives emerge, pricing pressure increases. Even if you stick with NVIDIA, you'll likely pay less.
- Specialized performance: Purpose-built chips could unlock new use cases that were previously too expensive to run at scale.
- Vendor optionality: Relying on a single hardware vendor is risky. More competition means more negotiating leverage.
The caveat: MatX is still pre-production. The chip won't ship until late 2026 at the earliest. Early adopters will get access first, likely through cloud providers or direct partnerships.
The Bigger Picture: AI Infrastructure is Maturing
Five years ago, AI hardware meant NVIDIA or nothing. Today, we're seeing:
- Specialized chips for inference vs training
- Model-specific optimizations (e.g., chips tuned for specific transformer architectures)
- Regional plays (Europe's Axelera, Asia's various efforts)
- Vertical integration (cloud providers building their own silicon)
MatX's $500M raise is part of that maturation. The AI software layer is moving fast, but the hardware layer is finally catching up. For businesses, that means more options, better economics, and—eventually—more accessible AI at scale.
Looking Ahead
Watch for MatX's technical disclosures over the next few months. If they can demonstrate real performance advantages in benchmarks that matter (not just synthetic tests), this could reshape AI infrastructure decisions for major labs and enterprises.
The NVIDIA moat is real, but it's no longer impenetrable. And for the rest of us building and deploying AI systems, that's unambiguously good news.
Build AI That Works For Your Business
At AI Agents Plus, we help companies move from AI experiments to production systems that deliver real ROI. Whether you need:
- Custom AI Agents — Autonomous systems that handle complex workflows, from customer service to operations
- Rapid AI Prototyping — Go from idea to working demo in days using vibe coding and modern AI frameworks
- Voice AI Solutions — Natural conversational interfaces for your products and services
We've built AI systems for startups and enterprises across Africa and beyond.
Ready to explore what AI can do for your business? Let's talk →
About AI Agents Plus Editorial
The AI Agents Plus editorial team covers AI automation and business transformation through artificial intelligence.



