Two-Phase D2C Cooling in Data Center Thermal Management: Cost Analysis
Dec 22, 2025
IDTechEx estimates that around US$55,000 is needed for cooling components per GB200 server rack
As AI workloads continue to scale, driven by GPU-dense platforms such as NVIDIA's GB300, GB200, and H200 servers, thermal management has become one of the most critical and costly elements of data center infrastructure. Rack power densities for high-end AI applications now routinely exceed 100 kW, far beyond the practical limits of traditional air cooling. This shift is accelerating adoption of direct-to-chip (D2C) liquid cooling, particularly two-phase solutions, which are increasingly viewed as essential rather than optional.
The cost analysis of server cooling components highlights an important reality: while cold plates often receive the most attention, they represent only a fraction of the total thermal system cost. The true economic picture emerges when server-level cooling is aggregated to the rack and facility level.
Server-Level Costs: Why Cold Plates Are Only the Beginning
At the server level, IDTechEx estimates place the average selling price (ASP) of cold plates at roughly US$300-500 per unit, depending on whether the plate is used for CPUs or GPUs. GPU cold plates sit at the upper end of this range due to higher heat flux and more demanding thermal design requirements. Importantly, only copper is used in GPU cold plates. Aluminum, while cheaper and lighter, poses an unacceptable risk of galvanic corrosion when combined with other metals in liquid cooling loops, particularly over long operational lifetimes.
For a representative 8-GPU AI server, the cold plate content value reaches approximately US$2,300, reflecting not just the plates themselves but the full direct-to-chip assembly. Normalized per chip, D2C liquid cooling systems typically fall in the US$200-400 range: CPU cold plates trend toward the lower end, while GPU plates push toward the upper bound.
Crucially, this cost includes far more than machined copper blocks. Hoses, fittings, fluid distribution manifolds inside the server, and especially quick disconnects (QDs) all contribute significantly. QDs are a major cost driver due to their precision engineering, leak-proof requirements, and increasing diameter as flow rates rise. While a single cold plate may appear relatively inexpensive, the number and size of QDs required for GPU-dense servers can rapidly inflate system cost.
Fans, by contrast, are almost negligible from a cost perspective at around US$60 per server, and increasingly play only a secondary role, handling residual air cooling for components that are not liquid cooled.
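To show how these line items aggregate at the server level, here is a minimal Python sketch. The ~US$2,300 D2C assembly total and ~US$60 fan figure come from the analysis above; the internal split across cold plates, QDs, and hoses is a hypothetical breakdown for illustration only, since the analysis reports just the total.

```python
# Server-level thermal content for a representative 8-GPU AI server.
# The ~US$2,300 D2C total and US$60 fan cost are from the analysis above;
# the component split is an assumed breakdown for illustration only.

GPU_COUNT = 8

d2c_assembly = {
    "gpu_cold_plates": GPU_COUNT * 200,   # assumed ~US$200/plate share
    "quick_disconnects": 400,             # QDs: a major cost driver
    "hoses_fittings_manifolds": 300,      # in-server fluid distribution
}
fans = 60  # residual air cooling for non-liquid-cooled components

d2c_total = sum(d2c_assembly.values())      # ~US$2,300
server_thermal_content = d2c_total + fans   # ~US$2,360
per_chip = d2c_total / GPU_COUNT            # within the US$200-400 range

print(f"D2C assembly:           ~US${d2c_total:,}")
print(f"Server thermal content: ~US${server_thermal_content:,}")
print(f"Per-chip D2C cost:      ~US${per_chip:,.0f}")
```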
From Server to Rack: Cost Multiplication Effects
When server-level thermal content is scaled to the rack level, the economics become much clearer. With roughly eight AI servers per rack, each carrying about US$2,360 of thermal content (the ~US$2,300 D2C assembly plus ~US$60 of fans), server thermal content alone reaches approximately US$18,880 per rack. But this is only the starting point.
Rack-level infrastructure adds substantial incremental cost. A coolant distribution unit (CDU) installed in-rack typically carries an ASP in the US$15,000-30,000 range, with IDTechEx estimating a representative content value of US$23,000. In addition, manifolds, which manage fluid delivery and return across servers, add another US$10,000-20,000, or around US$15,000 per rack.
Taken together, total rack-level thermal content approaches US$56,880, underscoring how cooling infrastructure has evolved into a capital-intensive subsystem on par with the compute hardware itself. This shift has major implications for data center design, vendor selection, and long-term total cost of ownership (TCO).
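The rack-level arithmetic can be made explicit with a short sketch. All figures are the representative values cited above; the only assumption is that per-server content combines the ~US$2,300 D2C assembly with ~US$60 of fans.

```python
# Rack-level thermal content, aggregating the server-level figures.
# Assumes 8 AI servers per rack, per the analysis above.

SERVERS_PER_RACK = 8
SERVER_THERMAL_CONTENT = 2_300 + 60   # US$: D2C assembly plus fans

server_content = SERVERS_PER_RACK * SERVER_THERMAL_CONTENT   # US$18,880
in_rack_cdu = 23_000   # representative value within US$15,000-30,000
manifolds = 15_000     # representative value within US$10,000-20,000

rack_total = server_content + in_rack_cdu + manifolds        # US$56,880
print(f"Server thermal content: US${server_content:,}")
print(f"Rack-level total:       US${rack_total:,}")
```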

Cost analysis of cooling components. Source: IDTechEx, Thermal Management for Data Centers 2026-2036
Two-Phase D2C Cooling: Heat Flow Beyond the Rack
The second chart illustrates the heat dissipation path for two-phase D2C cooling, highlighting how thermal management extends well beyond the server chassis. Heat generated by AI accelerators is absorbed directly at the chip, where refrigerant undergoes a phase change, efficiently removing large amounts of thermal energy.

An overview of the heat dissipation path for two-phase D2C cooling. Source: IDTechEx, Thermal Management for Data Centers 2026-2036
From there, several architectural pathways exist:
- Rack CDUs (refrigerant-to-liquid) transfer heat into facility water loops, feeding into chillers and ultimately outdoor heat rejection units such as cooling towers or dry coolers.
- Row-level CDUs aggregate multiple racks, improving efficiency and reducing duplication of equipment.
- In hybrid configurations, a small fraction (typically <20%) of residual heat may still be handled by air cooling through CRAH or CRAC units, though this is increasingly minimized in next-generation AI data halls; the sketch after this list illustrates the split.
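To make the hybrid split concrete, the sketch below applies a simple heat balance: whatever the two-phase D2C loop does not capture falls to the air side. The 100 kW rack load and 85% liquid-capture fraction are illustrative assumptions, not IDTechEx figures.

```python
# Simplified heat balance for a hybrid two-phase D2C rack.
# rack_load_kw and liquid_fraction below are illustrative assumptions.

def heat_split(rack_load_kw: float, liquid_fraction: float) -> tuple[float, float]:
    """Return (kW rejected via the D2C refrigerant loop,
    kW left to CRAH/CRAC air handling)."""
    liquid_kw = rack_load_kw * liquid_fraction
    return liquid_kw, rack_load_kw - liquid_kw

liquid_kw, air_kw = heat_split(rack_load_kw=100.0, liquid_fraction=0.85)
print(f"D2C loop: {liquid_kw:.0f} kW -> CDU -> facility water -> heat rejection")
print(f"Air path: {air_kw:.0f} kW -> CRAH/CRAC (typically <20% of total)")
```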
Two-phase refrigerant-to-liquid approaches are generally more thermally efficient than refrigerant-to-air solutions, though they require more sophisticated plumbing and controls. The chart makes clear that cooling is no longer confined to the "IT side" of the data center; it is deeply intertwined with building-level and outdoor infrastructure.
Looking Ahead: Strategic Insights from IDTechEx
The trends illustrated here are explored in depth in IDTechEx's report "Thermal Management for Data Centers 2026-2036." The report provides a detailed forecast of cooling technologies, cost structures, and adoption pathways across hyperscale, colocation, and enterprise data centers. It also examines material choices, supply chain dynamics, and the competitive landscape for CDUs, cold plates, and two-phase cooling solutions.
As AI infrastructure continues its rapid expansion, understanding the true cost and architecture of thermal management will be critical. Cooling is no longer a background utility; it is a defining factor in data center performance, economics, and sustainability over the next decade.
For more information on this report, including downloadable sample pages, please visit www.IDTechEx.com/TMDC, or for the full portfolio of research available from IDTechEx, see www.IDTechEx.com/Research/Thermal.