Semiconductor startup Positron has raised $230 million in Series B funding, TechCrunch has exclusively learned, as global investors intensify efforts to build alternatives to Nvidia’s dominant AI hardware.
The Reno-based company plans to use the fresh capital to accelerate deployment of its high-speed memory chips, a key component for processors used in artificial intelligence workloads, according to sources familiar with the deal.
Investors in the round include Qatar Investment Authority (QIA), the country’s sovereign wealth fund, which has been increasingly focused on expanding AI infrastructure capabilities, the sources said.
Positron’s Series B comes at a time when hyperscalers and AI companies are actively seeking to reduce reliance on Nvidia, the long-time leader in AI computing hardware. Among them is OpenAI, one of Nvidia’s largest customers, which has reportedly expressed dissatisfaction with aspects of Nvidia’s latest AI chips and has been exploring alternatives since last year.
Qatar’s participation through QIA aligns with a broader national push into so-called sovereign AI infrastructure, a strategy repeatedly highlighted this week at Web Summit Qatar in Doha. Several sources told TechCrunch the country views compute capacity as essential to maintaining global economic competitiveness and aims to position itself as a leading AI services hub in the Middle East.
That strategy is already materializing through large-scale commitments, including a $20 billion AI infrastructure joint venture with Brookfield Asset Management announced in September.
With the latest round, Positron has now raised just over $300 million since its founding three years ago. The startup secured $75 million last year from investors including Valor Equity Partners, Atreides Management, DFJ Growth, Flume Ventures, and Resilience Reserve.
Positron says its first-generation chip, Atlas, manufactured in Arizona, can deliver performance comparable to Nvidia’s H100 GPUs while consuming less than one-third of the power. Unlike Nvidia, the company focuses on inference, the computing required to run AI models in real-world applications, rather than on training large language models.
That positioning could prove advantageous as enterprises increasingly shift from building large AI models to deploying them at scale, driving rising demand for efficient inference hardware.