December 2025 Update: The liquid cooling market reaches $5.5B in 2025 and is projected to hit $15.8B by 2030 (23% CAGR). H200 GPUs at 700W TDP require liquid cooling at scale. NVIDIA's Kyber racks (2027) will demand 600kW-1MW each. Supermicro has released 250kW CDUs, doubling previous capacity. Modern CDUs achieve 99.999% availability with triple-redundant architecture and 100ms failover.
2025 marks the year liquid cooling tipped from bleeding-edge to baseline. No longer limited to experimental deployments, liquid cooling now operates as a critical enabler for AI infrastructure.¹ Data center operators deploying NVIDIA H200 GPUs face 700W thermal loads per device that air cooling cannot cost-effectively remove at scale.² The trajectory intensifies as NVIDIA's Kyber rack, due in 2027, will require 600kW initially and scale to 1MW per rack, demanding liquid cooling infrastructure capable of removing unprecedented heat loads.³
The data center liquid cooling market reached $5.52 billion in 2025, with projections to $15.75 billion by 2030 at a 23.31% CAGR.⁴ Alternative analyses project growth from $2.84 billion in 2025 to $21.15 billion by 2032 at a 33.2% CAGR.⁵ Coolant Distribution Units (CDUs) form the central infrastructure enabling this transition, managing coolant circulation between facility water systems and IT equipment while maintaining the precise temperatures AI hardware demands.
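As a quick arithmetic check on the headline projection, compounding the 2025 base at the stated CAGR reproduces the 2030 figure. The sketch below uses only the figures already cited; it illustrates the compound-growth formula and adds no new market data.

```python
# Compound annual growth: future = base * (1 + CAGR) ** years
base_2025_usd_b = 5.52      # market size in 2025, $B (as cited)
cagr = 0.2331               # 23.31% CAGR (as cited)
years = 5                   # 2025 -> 2030

projected_2030 = base_2025_usd_b * (1 + cagr) ** years
print(f"Projected 2030 market: ${projected_2030:.2f}B")  # ~$15.74B, matching the cited $15.75B
```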
How CDUs enable liquid cooling at scale
Coolant Distribution Units serve as the interface between facility cooling infrastructure and rack-level liquid cooling systems. CDUs manage heat transfer from the secondary loop serving IT equipment to the primary loop connected to facility chillers or cooling towers.
Heat exchange architecture employs 316 stainless steel plate heat exchangers compatible with various coolant fluids.⁶ The heat exchanger isolates IT cooling loops from facility water, preventing contamination while enabling efficient thermal transfer. Maximum flow rates reach 3,600 liters per minute for rapid heat absorption and transfer.⁷
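The quoted flow rate can be related to removable heat with the sensible-heat relation Q = ṁ·cp·ΔT. The sketch below assumes a water-like coolant and an illustrative 10°C supply-to-return temperature rise; neither figure comes from the cited spec.

```python
# Sensible heat removal: Q = m_dot * cp * delta_T
# Assumptions (illustrative, not from the cited CDU spec):
#   - water-like coolant: density ~1.0 kg/L, cp ~4186 J/(kg*K)
#   - 10 C rise between CDU supply and return

flow_lpm = 3600                      # max flow rate, liters per minute (as cited)
density_kg_per_l = 1.0               # assumed water-based coolant
cp_j_per_kg_k = 4186                 # specific heat of water
delta_t_k = 10.0                     # assumed supply/return temperature difference

m_dot_kg_s = flow_lpm * density_kg_per_l / 60    # mass flow, kg/s
q_watts = m_dot_kg_s * cp_j_per_kg_k * delta_t_k
print(f"Heat removal at full flow: {q_watts / 1e6:.2f} MW")  # ~2.51 MW, consistent with multi-MW CDU classes
```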
Temperature control maintains precise conditions. CDUs achieve temperature ranges from -20°C up to +70°C depending on application requirements.⁸ Tight temperature control prevents thermal throttling in GPUs and maintains consistent compute performance.
Pressure management enables flexibility in installation. Greater than 50 psi head pressure allows longer runs between CDU and server racks.⁹ Dual variable speed drive (VSD) pumps provide dynamic response to cooling demand while enhancing energy efficiency.¹⁰
Reliability features ensure availability. Modern CDUs utilize triple-redundant architecture with 1:1 hot standby for critical components.¹¹ During main module failures, backup systems switch seamlessly within 100 milliseconds, achieving 99.999% system availability.¹²
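For context, five nines of availability allows only minutes of downtime per year, and parallel redundancy is the standard way to reach it. The sketch below applies the usual 1 − (1 − A)ⁿ relation for independent modules; the per-module availability is an assumed figure for illustration, not a vendor rating.

```python
# Downtime budget for a given availability level
def annual_downtime_minutes(availability: float) -> float:
    return (1 - availability) * 365 * 24 * 60

print(f"99.999% -> {annual_downtime_minutes(0.99999):.2f} min/year")  # ~5.26 minutes

# Parallel redundancy: the system fails only if every module fails (independence assumed).
# The 99.9% per-module availability below is illustrative only.
module_availability = 0.999
for n in (1, 2, 3):
    system = 1 - (1 - module_availability) ** n
    print(f"{n} module(s): system availability {system:.9f}")
```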
Energy efficiency delivers operational savings. Compared with air-cooling units, CDUs consume 20-30% less electricity for equivalent cooling capacity.¹³ The efficiency gains compound across large AI deployments where cooling represents a significant portion of total power consumption.
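A rough savings estimate follows directly from the cited 20-30% range; the cooling power draw and electricity price below are illustrative assumptions, not sourced figures.

```python
# Rough annual savings from a 20-30% reduction in cooling electricity.
# All inputs are illustrative assumptions, not sourced figures.
cooling_power_kw = 200        # assumed average electrical draw of air-based cooling for a pod
savings_fraction = 0.25       # midpoint of the cited 20-30% range
price_per_kwh = 0.10          # assumed electricity price, USD

hours_per_year = 8760
kwh_saved = cooling_power_kw * savings_fraction * hours_per_year
print(f"~{kwh_saved:,.0f} kWh/year saved (~${kwh_saved * price_per_kwh:,.0f}/year)")
```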
CDU capacity for AI workloads
AI server power density drives CDU sizing requirements. A single NVIDIA Blackwell-generation GPU carries a TDP of roughly 1.2kW, and a GB200-based server with 8 GPUs and 2 CPUs reaches about 10kW total TDP.¹⁴ CDU capacity must scale to match entire rack populations of these systems.
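A first-pass rack sizing simply multiplies per-server TDP by servers per rack and adds headroom. In the sketch below, the rack population and headroom factor are assumptions for illustration; only the 10kW server figure comes from the text above.

```python
import math

# Rack heat load and CDU sizing sketch (rack population and headroom are assumed).
server_tdp_kw = 10.0       # 8-GPU + 2-CPU server, per the cited figure
servers_per_rack = 8       # assumed rack population (illustrative)
headroom = 1.2             # assumed 20% margin for peaks, pump heat, and growth

rack_load_kw = server_tdp_kw * servers_per_rack
required_cdu_kw = rack_load_kw * headroom
print(f"Rack IT load: {rack_load_kw:.0f} kW, CDU capacity target: {required_cdu_kw:.0f} kW")

# How many such racks a 2.3 MW CDU platform could serve at this density:
print(f"Racks per 2.3 MW CDU: {math.floor(2300 / required_cdu_kw)}")
```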
Entry-level configurations address moderate densities. Boyd's 10U Liquid-to-Air CDU provides up to 15kW capacity depending on heat load and approach temperature requirements.¹⁵ Such units suit edge deployments or lower-density colocation environments.
Mid-range systems support high-density racks. Chilldyne's CF-CDU300 cools up to 300kW of servers.¹⁶ In-rack units that fit within a standard 42U rack while cooling a 50kW server cluster enable substantial AI workload consolidation.¹⁷
High-capacity platforms serve hyperscale deployments. Motivair's CDUs offer six standard models and custom OEM configurations scaling to 2.3MW of IT load.¹⁸ Supermicro released NVIDIA Blackwell rack-scale solutions in June 2025 with 250kW CDUs doubling previous capacity.¹⁹
Enterprise-scale systems address data center-wide requirements. Trane's next-generation CDU delivers up to 10MW cooling capacity for direct-to-chip liquid cooling in hyperscale and colocation environments.²⁰
Installation planning requires attention to physical constraints. Ideal distance between CDU and racks should not exceed 20 meters.²¹ Floor load capacity must reach 800kg/m² since flooded CDUs can weigh up to 3 tons.²² Maintenance space requirements include 1.2 meters at front and rear plus 0.6 meters at top for piping connections.²³
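A pre-installation check can encode those constraints directly. The sketch below assumes a hypothetical CDU footprint and siting distance; only the 20-meter guideline and 800kg/m² floor rating come from the figures above.

```python
# Pre-installation checks against the siting constraints cited above.
# The unit mass, footprint, and distance in the example are illustrative assumptions.

def check_cdu_siting(distance_m: float, unit_mass_kg: float, footprint_m2: float,
                     floor_rating_kg_m2: float = 800, max_distance_m: float = 20) -> list[str]:
    issues = []
    if distance_m > max_distance_m:
        issues.append(f"CDU-to-rack distance {distance_m} m exceeds {max_distance_m} m guideline")
    load = unit_mass_kg / footprint_m2
    if load > floor_rating_kg_m2:
        issues.append(f"Floor loading {load:.0f} kg/m2 exceeds {floor_rating_kg_m2} kg/m2 rating")
    return issues

# Example: a 3-tonne flooded CDU over an assumed 4 m2 footprint, sited 15 m from the rack row.
print(check_cdu_siting(distance_m=15, unit_mass_kg=3000, footprint_m2=4.0) or "No siting issues found")
```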
Rear door heat exchangers for brownfield upgrades
Rear Door Heat Exchangers (RDHx) mount on the rear of server racks and remove heat from exhaust air before it enters the data center environment.²⁴ The technology delivers liquid cooling benefits without replacing existing air-cooled servers.
Cooling efficiency substantially exceeds air-only approaches. Traditional air cooling operates 30-60% less efficiently than RDHx configurations.²⁵ The improvement compounds in high-density environments where air cooling struggles to maintain temperatures.
Capacity evolution addresses increasing rack densities. Motivair's ChilledDoor cools up to 72kW per rack.²⁶ OptiCool Technologies launched the industry's highest-capacity 120kW RDHx in September 2025, purpose-built for next-generation AI and HPC workloads.²⁷
Proprietary cooling approaches push performance boundaries. OptiCool's two-phase refrigerant design uses phase change thermodynamics, removing heat from racks and returning air at room-neutral ambient temperature.²⁸ The approach achieves higher thermal transfer efficiency than single-phase liquid systems.
Active versus passive designs offer different tradeoffs. Passive RDHx relies solely on server fan airflow, offering energy efficiency and simplicity.²⁹ Active RDHx incorporates built-in fans for higher thermal densities, consuming more power but providing greater flexibility for high-performance computing environments.³⁰
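Whether the door is active or passive, its capacity is bounded by the air-side relation Q = ṁ·cp·ΔT across the coil. The sketch below works backward from the 72kW ChilledDoor rating using assumed air properties and an assumed 20°C air temperature drop; the airflow result is illustrative, not a vendor figure.

```python
# Air-side heat capture across a rear door coil: Q = m_dot * cp_air * delta_T
# Assumptions (illustrative): air density 1.2 kg/m3, cp 1005 J/(kg*K), 20 C drop across the coil.

air_density = 1.2          # kg/m3
cp_air = 1005              # J/(kg*K)
delta_t = 20.0             # assumed air temperature drop across the coil, K

def door_capacity_kw(airflow_m3_s: float) -> float:
    """Heat captured by the coil for a given volumetric airflow."""
    return airflow_m3_s * air_density * cp_air * delta_t / 1000

# Airflow needed to capture 72 kW (the ChilledDoor rating cited above):
target_kw = 72
airflow = target_kw * 1000 / (air_density * cp_air * delta_t)
print(f"~{airflow:.1f} m3/s (~{airflow * 2118.88:.0f} CFM) to capture {target_kw} kW at a 20 C drop")
```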
Legacy infrastructure compatibility makes RDHx attractive for brownfield deployments. Retrofitting existing air-cooled server racks costs less and causes less disruption than transitioning to liquid-cooled servers.³¹ AI inference workloads running on air-cooled hardware benefit from RDHx without facility-wide infrastructure overhaul.³²
Industry standardization accelerates through the Open Compute Project. The Door Heat Exchanger Sub-Project focuses on RDHx development, integration, and standardization within the ORV3 (Open Rack Version 3) framework.³³ Schneider Electric acquired a controlling interest in Motivair in February 2025 to enhance liquid cooling market position.³⁴
Immersion cooling for maximum density
Immersion cooling submerges servers in thermally conductive dielectric liquid within sealed tanks.³⁵ The approach enables highest-density deployments while dramatically reducing cooling energy consumption.
Single-phase immersion keeps the coolant in its liquid state throughout the cycle. Circulating the coolant through heat exchangers removes the absorbed heat.³⁶ The approach reduces electricity demand by nearly half compared to traditional air cooling, cuts CO2 emissions by up to 30%, and uses 99% less water.³⁷
Two-phase immersion boils the fluid to vapor at heat sources; condenser coils return the vapor to liquid.³⁸ Two-phase systems draw off large heat loads more efficiently, making them better suited to HPC and AI infrastructure.³⁹
Density improvements transform data center economics. Immersion cooling enables operators to pack 10-15x more compute into the same footprint, directly translating to faster time-to-revenue for AI services.⁴⁰ The consolidation reduces real estate requirements while increasing per-square-foot capacity.
Energy efficiency reaches dramatic levels. According to Submer, immersion cooling reduces cooling system energy consumption by up to 95%.⁴¹ The savings offset higher capital costs over deployment lifetime.
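One way to see what a 95% cut in cooling energy means is through a simplified PUE-style ratio. The air-cooled cooling overhead and non-cooling overhead below are assumed baselines for illustration, not sourced figures.

```python
# Simplified effect of cutting cooling energy by 95% on a PUE-style ratio.
# Assumptions (illustrative): 1 MW IT load; air-cooled cooling draws 40% of IT power;
# other overheads (power distribution, lighting) add a fixed 8%.

it_load_mw = 1.0
air_cooling_fraction = 0.40      # assumed baseline cooling overhead
other_overhead_fraction = 0.08   # assumed non-cooling overhead
immersion_cooling_fraction = air_cooling_fraction * (1 - 0.95)  # 95% reduction, per the cited claim

pue_air = 1 + air_cooling_fraction + other_overhead_fraction
pue_immersion = 1 + immersion_cooling_fraction + other_overhead_fraction
print(f"PUE-style ratio: air {pue_air:.2f} -> immersion {pue_immersion:.2f}")
print(f"Cooling electricity: {air_cooling_fraction * it_load_mw * 1000:.0f} kW -> "
      f"{immersion_cooling_fraction * it_load_mw * 1000:.0f} kW")
```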
Industry validation builds confidence. Intel and Shell validated a full immersion solution with hardware from Supermicro and Submer, establishing "Intel Data Center Certified for Immersion Cooling" as an industry standard for cooling efficiency.⁴² Submer introduced autonomous robots for immersion tank maintenance, simplifying server handling.⁴³
Cost considerations require careful analysis. Comprehensive immersion deployments require specialized tanks, load-bearing supports, leakage detection systems, and coolant handling equipment pushing per-rack installation costs past $50,000, roughly triple equivalent air systems.⁴⁴ Retrofitting live sites compounds complexity as floor plenums, cable trays, and power paths require rerouting while maintaining uptime.⁴⁵
Technology maturity continues advancing. Immersion remains relatively immature with minimal historical data on long-term performance and reliability.⁴⁶ However, accelerating adoption by hyperscalers and AI infrastructure providers builds operational experience rapidly.
The liquid cooling technology stack
Different cooling technologies address different deployment scenarios. The optimal approach depends on heat density, existing infrastructure, and operational requirements.
Cold plate cooling (direct-to-chip or D2C) represents the fastest-growing segment.⁴⁷ Cold plates attach directly to heat-producing components, circulating liquid to remove thermal load. The approach integrates with existing rack infrastructure more easily than immersion alternatives.
Hybrid architectures combine multiple approaches. CDUs serve cold plate systems for highest-heat components while RDHx handles remaining thermal load from air-cooled components. The combination maximizes cooling efficiency without requiring full infrastructure replacement.
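In a hybrid design the cold plate loop captures most of the heat and the rear door absorbs whatever still escapes to air. The capture fraction and rack load below are assumed planning figures, not measured values.

```python
# Hybrid cooling split: cold plates capture most of the heat, the RDHx handles the rest.
# The 75% cold plate capture fraction and 120 kW rack load are assumed planning figures.

rack_load_kw = 120.0             # assumed total rack heat load
cold_plate_capture = 0.75        # assumed fraction removed by direct-to-chip cold plates

cdu_load_kw = rack_load_kw * cold_plate_capture
rdhx_load_kw = rack_load_kw - cdu_load_kw
print(f"CDU/cold plate loop: {cdu_load_kw:.0f} kW, rear door: {rdhx_load_kw:.0f} kW")
# 30 kW of residual air-side heat sits comfortably within the 72-120 kW door capacities above.
```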
OCP compliance ensures interoperability. Nidec developed a Project Deschutes CDU prototype compliant with the Google-contributed Open Compute Project specification and exhibited it at SC25.⁴⁸ Standardized interfaces enable component interoperability across vendors.
Rack density evolution continues driving requirements. According to Omdia, racks below 10kW comprised 47% of installed capacity in 2024, dropping to 38% by 2025.⁴⁹ Meanwhile, 10-20kW racks rose from 27% to 30%, and 20-30kW racks climbed from 24% to 28%.⁵⁰ The density shift accelerates liquid cooling adoption.
Major CDU vendors and recent developments
The competitive landscape spans established thermal management companies and new entrants targeting AI infrastructure.
Vertiv provides comprehensive CDU solutions with educational resources explaining liquid cooling fundamentals. The company's AI Hub initiative positions CDU technology as central to next-generation infrastructure.⁵¹
Schneider Electric strengthened its liquid cooling position through the February 2025 Motivair acquisition.⁵² The combined portfolio addresses RDHx, CDU, and integrated liquid cooling solutions.
Supermicro released NVIDIA Blackwell rack-scale solutions with 250kW CDUs in June 2025.⁵³ The systems demonstrate integrated compute and cooling design for maximum-density deployments.
Trane offers enterprise-scale CDUs reaching 10MW capacity for hyperscale environments.⁵⁴ The company emphasizes energy efficiency and integration with facility-level thermal infrastructure.
Motivair developed the ChilledDoor RDHx reaching 72kW per rack alongside CDU platforms scaling to 2.3MW.⁵⁵ The Schneider acquisition positions the technology for expanded global deployment.
Submer specializes in immersion cooling with innovation including autonomous maintenance robots.⁵⁶ The Intel partnership validates the technology for enterprise deployment.
Deployment considerations
Organizations planning liquid cooling infrastructure should evaluate several factors:
Cooling load projection must account for AI workload growth. Current server heat loads may double or triple as GPU TDPs increase. Size CDU capacity for the expected end-state density rather than today's requirements.
Facility integration requires mechanical engineering coordination. CDUs connect to facility chilled water systems and require appropriate supply temperature, flow rate, and return conditions. Validate facility capacity before committing to liquid cooling density targets.
Redundancy architecture ensures availability. Plan for N+1 or 2N CDU redundancy matching overall facility tier requirements. CDU failure should not cause thermal-related server downtime.
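A simple planning pass combines the growth and redundancy guidance above: size the CDU count against projected rather than current load, then add the redundant unit. Every input in the sketch below is an illustrative assumption.

```python
import math

# N+1 CDU count against projected load (all inputs are illustrative assumptions).
current_load_kw = 1200           # assumed current liquid-cooled IT load
growth_factor = 2.5              # assumed growth as GPU TDPs and rack counts rise
cdu_capacity_kw = 1000           # assumed capacity of the selected CDU model

projected_load_kw = current_load_kw * growth_factor
n = math.ceil(projected_load_kw / cdu_capacity_kw)   # units needed to carry the projected load
print(f"Projected load: {projected_load_kw:.0f} kW -> {n} CDUs + 1 redundant = {n + 1} total (N+1)")
```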
Coolant selection affects performance and maintenance. Different dielectric fluids offer different thermal properties, material compatibility, and environmental considerations. Evaluate total lifecycle cost including coolant replenishment.
Hybrid transition planning enables phased deployment. Start with RDHx for brownfield upgrades, add cold plate systems for new AI deployments, and plan for immersion capability as densities continue increasing.
Introl's global field teams deploy cooling infrastructure for AI installations across 257 locations, from initial liquid cooling pilots to 100,000-GPU facilities. CDU selection and integration directly impact thermal performance and operational efficiency.
The thermal imperative
AI workloads generate heat at rates that fundamentally exceed traditional data center cooling approaches. CDUs, RDHx systems, and immersion tanks form the technology stack enabling continued GPU density increases.
The market trajectory points clearly toward liquid cooling as the baseline for AI infrastructure. Organizations delaying liquid cooling investments risk stranded air-cooled facilities unable to support next-generation GPU deployments. Early planning for CDU capacity, facility integration, and operational procedures positions organizations to deploy AI infrastructure at the densities economics demand.
Cooling distribution infrastructure represents a strategic investment enabling compute density that determines AI project timelines and costs. The CDU selected today determines whether facilities can accommodate the GPU deployments planned for tomorrow.
Key takeaways
For facility engineers:
- Data center liquid cooling market: $5.52B (2025) to $15.75B (2030) at 23.31% CAGR; alternative projections reach $21.15B by 2032
- CDU capacity ranges: entry-level 15kW (Boyd 10U), mid-range 300kW (Chilldyne), high-capacity 2.3MW (Motivair), enterprise 10MW (Trane)
- CDUs consume 20-30% less electricity than air cooling for equivalent capacity; triple-redundant architecture achieves 99.999% availability

For data center planners:
- GB200-class GPU TDP: 1.2kW; 8-GPU + 2-CPU GB200 server: ~10kW; NVIDIA Kyber rack (2027): 600kW-1MW
- Installation constraints: CDU-to-rack distance ≤20m; floor capacity 800kg/m² for flooded CDUs up to 3 tons; maintenance clearance 1.2m front/rear plus 0.6m at top
- Racks <10kW dropped from 47% (2024) to 38% (2025); 20-30kW racks rose from 24% to 28%; the density shift accelerates liquid cooling adoption

For brownfield upgrades:
- RDHx capacity evolution: Motivair ChilledDoor 72kW; OptiCool 120kW (September 2025), the industry's highest
- Traditional air cooling operates 30-60% less efficiently than RDHx configurations
- Retrofitting costs less and causes less disruption than full liquid-cooled server transitions; AI inference workloads benefit without a facility overhaul

For immersion cooling:
- Immersion enables 10-15x compute density in the same footprint and reduces cooling energy by up to 95% (Submer)
- Single-phase cuts electricity roughly 50% versus air; two-phase achieves higher efficiency for HPC/AI workloads
- Per-rack installation costs exceed $50,000 (roughly 3x air systems); limited historical data on long-term performance

For vendor selection:
- Schneider Electric acquired Motivair (February 2025) to strengthen its liquid cooling market position
- Supermicro released NVIDIA Blackwell solutions with 250kW CDUs (June 2025)
- Open Compute Project standardization: Door Heat Exchanger Sub-Project, Nidec Project Deschutes CDU prototype
References
1. Data Center Frontier, "Liquid Cooling Comes to a Boil: Tracking Data Center Investment, Innovation, and Infrastructure at the 2025 Midpoint," 2025.
2. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
3. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
4. MarketsandMarkets, "Data Center Liquid Cooling Market," 2025.
5. Fortune Business Insights, "Data Center Cooling Market Size, Share | Forecast Report [2032]," 2025.
6. Corestar Tech, "What is Coolant Distribution Units (CDU) for Data Center Cooling," 2025.
7. Corestar Tech, "What is CDU," 2025.
8. Boyd Corporation, "Coolant Distribution Unit (CDU)," product page, 2025.
9. Trane Commercial, "Coolant Distribution Unit (CDU)," product page, 2025.
10. Trane Commercial, "CDU," product page, 2025.
11. Corestar Tech, "What is CDU," 2025.
12. Corestar Tech, "What is CDU," 2025.
13. Nidec Corporation, "Coolant Distribution Units (CDU)," 2025.
14. JetCool, "The Ultimate Guide to Coolant Distribution Units (CDUs)," 2025.
15. Boyd Corporation, "CDU," product page, 2025.
16. Chilldyne, "FAQ Guide to Data Center Liquid Cooling," 2025.
17. Corestar Tech, "What is CDU," 2025.
18. Motivair, "Coolant Distribution Unit | CDU," product page, 2025.
19. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
20. Trane Commercial, "CDU," product page, 2025.
21. Corestar Tech, "What is CDU," 2025.
22. Corestar Tech, "What is CDU," 2025.
23. Corestar Tech, "What is CDU," 2025.
24. Supermicro, "What Are Rear Door Heat Exchangers (RDHx)?" 2025.
25. Schneider Electric, "Upgrade legacy data centers for AI workloads with RDHx liquid cooling," November 11, 2025.
26. Data Center Dynamics, "Your golden ticket to liquid cooling – enter through the rear door," 2025.
27. Business Wire, "OptiCool Launches Industry's Highest-Capacity 120kW Rear Door Heat Exchanger for AI and HPC Workloads," September 8, 2025.
28. OptiCool Technologies, "OptiCool Launches 120kW RDHx," September 2025.
29. Supermicro, "What Are RDHx?" 2025.
30. Supermicro, "What Are RDHx?" 2025.
31. Schneider Electric, "Upgrade legacy data centers for AI workloads with RDHx," November 2025.
32. Schneider Electric, "RDHx liquid cooling," November 2025.
33. Open Compute Project, "Door Heat Exchanger," project page, 2025.
34. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
35. Vertiv, "Immersion cooling systems: Advantages and deployment strategies for AI and HPC data centers," 2025.
36. eabel, "Immersion Cooling for Data Centers: A Comprehensive Guide," November 2025.
37. NorthC Data Centers, "Immersion cooling and liquid cooling: the future of AI data centers," 2025.
38. eabel, "Immersion Cooling for Data Centers," November 2025.
39. Futuriom, "The Datacenter Liquid Cooling Market Heats Up," November 2025.
40. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
41. IEEE Spectrum, "Data Center Liquid Cooling: The AI Heat Solution," 2025.
42. Intel Newsroom, "Intel and Shell Advance Immersion Cooling in Xeon-Based Data Centers," 2025.
43. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
44. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
45. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
46. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
47. Futuriom, "The Datacenter Liquid Cooling Market Heats Up," November 2025.
48. Nidec, "Nidec Develops Liquid-Cooling CDU Prototype Based on Google OCP Specification," 2025.
49. Schneider Electric, "Upgrade legacy data centers," November 2025.
50. Schneider Electric, "Upgrade legacy data centers," November 2025.
51. Vertiv, "Understanding Coolant Distribution Units (CDUs) for Liquid Cooling," 2025.
52. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
53. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.
54. Trane Commercial, "CDU," product page, 2025.
55. Motivair, "CDU," product page, 2025.
56. Data Center Frontier, "Liquid Cooling Comes to a Boil," 2025.