Vertiv Design Principles

Eliminate stranded power.
Power into a data center is segmented into capacity blocks, commonly 1-3 MW, determined by industry-standard sizing of breakers or generators. AI is deployed in clusters, soon commonly at 100+ kW per rack and climbing from there. Aligning AI clusters to data center capacity blocks ensures that every available kW can be utilized and eliminates stranded power.
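
As a rough illustration of the alignment arithmetic, the Python sketch below compares how much of an assumed 2 MW capacity block is left stranded by two assumed rack densities; the block size, rack powers, and the stranded_power_kw helper are illustrative examples, not Vertiv sizing guidance.

# Illustrative sketch: stranded power when an AI cluster does not divide
# evenly into a data center capacity block. All figures are assumptions
# chosen for the example, not Vertiv specifications.

def stranded_power_kw(block_kw: float, rack_kw: float) -> tuple[int, float]:
    """Return (whole racks that fit, power left stranded) for one block."""
    racks = int(block_kw // rack_kw)
    return racks, block_kw - racks * rack_kw

# Assumed 2 MW capacity block, compared at two assumed rack densities.
block_kw = 2_000
for rack_kw in (132, 125):  # 125 kW/rack divides the block evenly
    racks, stranded = stranded_power_kw(block_kw, rack_kw)
    print(f"{rack_kw} kW/rack: {racks} racks, {stranded:.0f} kW stranded "
          f"({stranded / block_kw:.1%} of the block)")

In this example the 125 kW cluster fills the block exactly, while the 132 kW cluster leaves 20 kW unusable in every block it occupies.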

Design power and cooling together.
Power, cooling, and AI hardware compete for limited space and energy. A holistic power and cooling design approach is required to maximize the share of space and energy dedicated to AI processing. Optimize AI infrastructure by ensuring the power and cooling technology is built and deployed to work together.

Manage AI workload surges.
Plan for the variance AI workloads can require with system-level controls, including power and cooling buffers. AI training tends to drive large numbers of processors to act in unison, creating massive power consumption surges that can repeat and degrade both the performance and lifespan of power and cooling infrastructure. Mitigation designs include system-level controls with rapid response, plus immediately accessible buffers in power and cooling capacity.
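
To make the buffer idea concrete, here is a back-of-the-envelope Python sketch that sizes the energy a power buffer would need to absorb one repeating training surge and checks whether it can recharge before the next one; every figure (provisioned power, surge height and duration, idle draw, recharge window) is an assumption chosen for illustration.

# Illustrative sketch: sizing an energy buffer to absorb a repeating
# training surge so the upstream capacity block sees a flatter load.
# All numbers are assumptions for the example, not measured workloads.

provisioned_kw = 1_000      # assumed steady draw the block is sized for
surge_peak_kw = 1_300       # assumed synchronized-training peak
surge_seconds = 20          # assumed duration of each surge
idle_kw = 800               # assumed draw between surges

# Energy the buffer must supply during one surge (kW above provisioned * time).
buffer_kwh = (surge_peak_kw - provisioned_kw) * surge_seconds / 3600
# Energy available to recharge between surges, assuming a 60-second gap.
recharge_kwh = (provisioned_kw - idle_kw) * 60 / 3600

print(f"Buffer energy per surge: {buffer_kwh:.2f} kWh")
print(f"Recharge headroom between surges: {recharge_kwh:.2f} kWh")
print("Buffer is sustainable" if recharge_kwh >= buffer_kwh else "Buffer will deplete")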

Balance cost, redundancy, and risk.
The value of AI hardware, at $1-4M+ per rack, and of the processing it supports is driving increased consideration of redundancy in power and cooling designs, especially for inference applications. Designs that limit blast radius, the impact from the loss of a single capacity segment (server, rack, or row), tend to use higher counts of smaller components, potentially at the expense of total cost of ownership. Designs that favor total cost of ownership tend to use lower counts of larger components, often with redundancy to reduce the possibility of losing a capacity segment.
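
The trade-off can be sketched with simple arithmetic. The illustrative Python below compares the share of load a single component failure takes offline for an assumed design built from many 250 kW units versus one built from fewer 1 MW units with N+1 redundancy; the unit sizes, counts, and redundancy scheme are assumptions, not recommendations.

# Illustrative sketch: blast radius vs. component count. For the same total
# capacity, more, smaller components lose a smaller share per failure, while
# fewer, larger components lean on redundancy (e.g., N+1) to cover a loss.
# Unit counts and sizes are assumptions for the example.

total_kw = 2_000

designs = {
    "many small units (no redundancy)": {"unit_kw": 250, "redundant_units": 0},
    "few large units, N+1":             {"unit_kw": 1_000, "redundant_units": 1},
}

for name, d in designs.items():
    needed = -(-total_kw // d["unit_kw"])          # units required for the load
    installed = needed + d["redundant_units"]      # including redundant spares
    # Load that cannot be served if one unit fails (spare capacity absorbs what it can).
    lost_kw = max(0, total_kw - (installed - 1) * d["unit_kw"])
    print(f"{name}: {installed} x {d['unit_kw']} kW, "
          f"single failure removes {lost_kw} kW ({lost_kw / total_kw:.0%} of load)")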

Include liquid and air cooling.
Power into the data center equals the heat rejected. Air and liquid cooling temperatures and flows must stay within the operating envelope of both the AI servers and the data center heat rejection equipment.
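
That energy balance can be written as Q = m_dot x cp x delta_T. The Python sketch below applies it to an assumed 120 kW rack with an assumed 80/20 liquid-to-air split and a 10 °C coolant temperature rise to estimate the required liquid flow; all of those figures are assumptions for illustration.

# Illustrative heat-balance sketch: the electrical power drawn by a rack
# must be rejected as heat, and the coolant flow must keep the temperature
# rise within the server and heat-rejection operating envelopes.
# Figures below are assumptions for the example.

rack_kw = 120                # assumed rack power = heat to reject (kW)
liquid_fraction = 0.8        # assumed share captured by direct liquid cooling
delta_t_c = 10.0             # assumed allowed coolant temperature rise (deg C)
cp_water = 4.186             # specific heat of water, kJ/(kg*degC)
rho_water = 0.997            # density of water, kg/L (near room temperature)

liquid_kw = rack_kw * liquid_fraction
air_kw = rack_kw - liquid_kw

# Q [kW = kJ/s] = m_dot [kg/s] * cp [kJ/(kg*degC)] * delta_T [degC]
m_dot_kg_s = liquid_kw / (cp_water * delta_t_c)
flow_lpm = m_dot_kg_s / rho_water * 60

print(f"Liquid load: {liquid_kw:.0f} kW -> ~{flow_lpm:.0f} L/min at dT={delta_t_c} C")
print(f"Remaining air load: {air_kw:.0f} kW")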

Design for the future.
Plan today to accommodate future growth and high-density demand. The typical data center life span is almost two decades; the AI chip design cycle is less than two years.

Explore how implementing these principles can optimize the strategic deployment of AI workloads.