NVIDIA deepens CoreWeave partnership with $2B equity investment
NVIDIA has expanded its partnership with AI cloud provider CoreWeave, committing $2 billion through the purchase of CoreWeave Class A common stock at $87.20 per share. The move comes as demand for AI computing infrastructure accelerates globally and as cloud and enterprise customers seek reliable access to large-scale GPU capacity.
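The companies did not disclose the number of shares involved, but the two figures they did give imply a rough stake size. A minimal back-of-envelope sketch, using only the disclosed $2 billion commitment and $87.20-per-share price (the actual share count and resulting ownership percentage are assumptions left undisclosed):

```python
# Back-of-envelope estimate of the implied share count.
# Both inputs are the figures disclosed in the announcement; the exact
# number of shares was not stated, so this is only an approximation.
investment_usd = 2_000_000_000   # NVIDIA's committed investment
price_per_share = 87.20          # disclosed Class A purchase price

implied_shares = investment_usd / price_per_share
print(f"Implied shares: ~{implied_shares / 1e6:.1f} million")  # ~22.9 million
```

At that price, the commitment works out to roughly 23 million Class A shares, though the final figure depends on deal terms not included in the announcement.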
The companies said the investment is designed to speed the buildout of what they describe as “AI factories”—large, purpose-built data centers optimized for training and running advanced AI models. CoreWeave is targeting more than 5 gigawatts of AI capacity by 2030, deploying multiple generations of NVIDIA accelerated computing platforms.
Building AI factories at industrial scale
The expanded collaboration formalizes a deeper alignment between the chipmaker and the cloud operator at a time when the industry is shifting from experimentation to production-grade AI systems. In a statement, Michael Intrator, co-founder, chairman and CEO of CoreWeave, said the partnership has been anchored in joint design across the software and infrastructure stack.
“From the very beginning, our collaboration has been guided by a simple conviction: AI succeeds when software, infrastructure and operations are designed together,” Intrator said. He added that NVIDIA remains “the leading and most requested computing platform” across the AI lifecycle and pointed to the company’s latest architectures as key to lowering inference costs.
For NVIDIA, the deal reinforces its strategy of ensuring its GPUs, networking and software remain central to the next phase of AI deployment, particularly as more workloads move from training to inference and as enterprises seek predictable performance and cost.
What the agreement includes
Multi-generation hardware roadmap
Under the agreement, CoreWeave will deploy multiple generations of NVIDIA hardware across its cloud platform. The companies also referenced future systems, including the Rubin platform, Vera CPUs, and BlueField storage technologies, signaling that the relationship extends beyond today’s GPU clusters into next-generation architectures and data-center components.
Support for land, power and physical infrastructure
In addition to supplying computing platforms, NVIDIA will support CoreWeave in securing critical inputs for rapid data-center construction, including land, power and physical infrastructure. Access to energy and suitable sites has become one of the main bottlenecks for new AI data-center capacity, as hyperscale-style builds compete with broader grid constraints.
Software validation and reference architectures
The collaboration also extends to software. CoreWeave said its AI-native tools—such as its Mission Control platform and internal software stack—will be tested and validated alongside NVIDIA reference architectures. The stated goal is to make these tools available to cloud service providers and enterprise customers globally, broadening adoption of standardized deployment patterns for AI infrastructure.
Executive views: demand signals and an “AI industrial revolution”
Jensen Huang, founder and CEO of NVIDIA, framed the partnership as part of a broader buildout cycle. “AI is entering its next frontier and driving the largest infrastructure buildout in human history,” Huang said, describing CoreWeave as having “deep AI factory expertise” and “unmatched execution velocity.” He added that the companies are “racing to meet extraordinary demand” for NVIDIA AI factories.
The comments reflect a widening consensus that infrastructure—not algorithms alone—will determine near-term AI competitiveness. As model sizes grow and inference use cases proliferate, the market has increasingly rewarded providers able to deliver dependable capacity, efficient networking and fast deployment timelines.
Why this matters for the AI cloud ecosystem
CoreWeave has built a reputation as a purpose-built cloud provider for AI workloads, catering to AI labs, startups and large enterprises running compute-intensive models. Since going public in 2025, the company has positioned itself as a specialized infrastructure partner focused on GPU availability, performance tuning and operational tooling tailored to AI.
For NVIDIA, an equity investment adds another lever of influence in the AI cloud ecosystem. Beyond selling chips, the company has expanded into full-stack offerings—spanning networking, software frameworks and reference designs—aimed at standardizing how AI data centers are built and operated. A tighter partnership with a fast-scaling AI cloud operator can help reinforce that stack as customers decide where to place long-term workloads.
The agreement also highlights the strategic importance of execution speed. With multi-year lead times for power procurement and facility development, the ability to secure sites and bring capacity online quickly has become a competitive differentiator, particularly for customers facing model deployment deadlines and rapidly rising inference demand.
Outlook
While the companies did not disclose a detailed construction schedule or specific locations tied to the 5-gigawatt goal, the expanded partnership suggests both sides expect sustained demand for AI compute well into the next decade. If CoreWeave achieves its stated capacity target by 2030, it would represent one of the larger dedicated AI infrastructure buildouts in the market, with NVIDIA platforms at its core.
As AI shifts from development to production, industry observers will watch whether partnerships like this can ease supply constraints, stabilize pricing, and accelerate the rollout of standardized “AI factory” designs across regions and customer segments.