Vertiv Launches Converged Infrastructure Platform for NVIDIA AI Factory Deployments
Vertiv has introduced simulation-ready power and cooling infrastructure models built around NVIDIA's Vera Rubin DSX AI factory reference design. The Vertiv OneCore Rubin DSX platform features standardized 12.5MW building blocks aimed at accelerating AI data center deployments and reducing integration risks.
Key Points
- Vertiv OneCore Rubin DSX offers simulation-ready infrastructure models aligned with NVIDIA's Vera Rubin DSX AI factory reference design
- Platform features standardized 12.5MW infrastructure building blocks that can scale from small AI clusters to gigawatt-scale facilities
- Solution integrates power, cooling, controls, and services into interdependent designs optimized across the full power and thermal chain
- Approach aims to compress deployment timelines, improve infrastructure utilization, and reduce field integration risks
- Digital validation capabilities enable real-time simulation and system-level modeling before physical deployment
Critical infrastructure provider Vertiv Holdings has unveiled a new converged physical infrastructure platform specifically designed for AI data centers, partnering with NVIDIA to deliver simulation-ready power and cooling systems. The announcement comes as AI factories face mounting pressure to scale rapidly while managing increasing power densities and operational complexity.
Standardized Building Block Architecture
The Vertiv OneCore Rubin DSX platform centers on standardized 12.5MW infrastructure blocks that can be combined and configured to support deployments ranging from smaller AI clusters to gigawatt-scale AI factories. This modular approach is designed to simplify scaling while maintaining deployment consistency and operational performance across different facility sizes. The building blocks integrate power distribution, cooling systems, and control mechanisms into validated, repeatable units.
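As a back-of-envelope illustration of how a fixed-size building block spans that range (this is not a Vertiv tool; the only figure taken from the announcement is the 12.5MW block capacity, and the target capacities are arbitrary examples):

```python
import math

BLOCK_MW = 12.5  # standardized building-block capacity per the announcement

def blocks_needed(target_mw: float) -> int:
    """Minimum number of 12.5MW blocks required to cover a target capacity."""
    return math.ceil(target_mw / BLOCK_MW)

# Illustrative targets, from a small AI cluster up to gigawatt scale
for target in (25, 100, 500, 1000):
    n = blocks_needed(target)
    print(f"{target:>5} MW target -> {n} blocks ({n * BLOCK_MW:.1f} MW installed)")
```

The arithmetic makes the scaling argument concrete: a gigawatt-scale facility is simply 80 repetitions of the same validated 12.5MW unit, which is what lets one design be reused across facility sizes.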
Digital Simulation and Validation
A key differentiator of the platform is its integration with NVIDIA's Omniverse DSX Blueprint, enabling customers to validate AI factory infrastructure through real-time simulation and system-level modeling before beginning physical deployment. This digital-first approach allows operators to identify potential issues, optimize configurations, and improve operational confidence during the design phase, potentially reducing costly field modifications and deployment delays.
Converged Infrastructure Methodology
Vertiv's approach is built on five foundational elements: repeatable building blocks, defined interfaces, system orchestration, digital continuity, and lifecycle support. This methodology aims to address the growing complexity of AI infrastructure by treating power, cooling, and controls as interdependent systems rather than separate components. The company says this integration can improve coordination across infrastructure domains and optimize performance from grid connection through end-use computing.
Industry Impact
This partnership represents a significant shift toward standardized, simulation-validated infrastructure for AI data centers. As AI workloads drive unprecedented power densities—often exceeding 100kW per rack compared to traditional data center densities of 5-10kW—the industry is moving away from custom, project-by-project infrastructure design toward more standardized, validated approaches. Vertiv's 12.5MW building block standard could become an important industry benchmark, particularly as hyperscale operators seek to reduce deployment timelines from traditional 18-24 month cycles.
The emphasis on digital validation through NVIDIA's simulation platform addresses a critical pain point in AI infrastructure deployment. Field integration issues and performance optimization challenges have historically added months to project timelines and millions in cost overruns. By enabling comprehensive testing in virtual environments, this approach could significantly reduce execution risk and accelerate the deployment of AI capacity at a time when demand far exceeds available infrastructure.
Market Outlook
The success of this standardized approach could influence broader industry adoption of converged, simulation-validated infrastructure designs. As AI workloads continue to drive unprecedented infrastructure demands, with some estimates projecting AI data center power consumption to reach 85-134 TWh annually by 2030, the pressure for faster, more reliable deployment methodologies will intensify. The integration of digital twins and real-time simulation capabilities may become standard practice across the data center industry, extending beyond AI facilities to traditional enterprise and cloud infrastructure deployments.