Hyperscale data centers and high-performance workloads are pushing computing ever further into uncharted thermal territory. Data centers have long been the backbone of the internet, providing the infrastructure for transfers of information, currency, and even jokes.
However, today’s high-performance computing tasks require ever more power to run efficiently. From training large artificial intelligence (AI) models to running genome mapping and financial simulations, the modern data center needs a lot of juice…and in turn, it generates significant amounts of heat. Traditional air-based cooling, which was adequate for older standard workloads, is reaching its limits as rack power densities climb and temperatures rise with them.
Therefore, operators are turning to advanced cooling strategies like liquid cooling that can adapt on the fly to changing conditions. At the same time, complementary approaches like heat capture, which converts waste heat into a reusable resource, are emerging as powerful supplements.
When Traditional Cooling Systems Aren’t Enough
Hyperscale and AI data centers are packed to the gills with power-hungry components. The GPUs and CPUs used for AI training, weather forecasting, and other HPC tasks can draw immense amounts of power. Those requirements are multiplied by the number of processors in a single server, and then by the number of servers in a rack. The resulting heat must be removed to keep hardware within safe operating limits.
Unfortunately, this issue will only become more complicated as demands grow: industry projections suggest that average power density per rack will continue to climb from today’s roughly 20 kW toward as much as 600 kW.
Historically, data centers have relied on traditional air cooling methods, which are now struggling to meet the increasing energy requirements. Currently, cooling alone accounts for 30-40% of a data center’s total electricity usage. And as cooling systems ramp up to meet the need, they pose a second problem: churning through more electricity to keep equipment at safe operating temperatures. This cuts into efficiency gains elsewhere, puts more pressure on the power grid to which a data center is connected, negatively impacts Power Usage Effectiveness (PUE), and can drive up electricity bills for others in the region.
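To make the PUE metric concrete, it is simply total facility power divided by the power that reaches IT equipment; everything else (cooling, power conversion, lighting) pushes it above the ideal of 1.0. A minimal sketch with hypothetical numbers, assuming cooling consumes about 35% of the facility total:

```python
# PUE = total facility power / IT equipment power (lower is better; 1.0 is ideal).
# All figures below are hypothetical, for illustration only.
total_power_mw = 50.0                      # total facility draw
cooling_power_mw = 0.35 * total_power_mw   # ~35% of total spent on cooling
it_power_mw = 30.0                         # power delivered to servers

pue = total_power_mw / it_power_mw
print(f"Cooling load: {cooling_power_mw:.1f} MW")
print(f"PUE = {pue:.2f}")  # -> PUE = 1.67
```

Cutting the cooling share (for example, with more efficient liquid cooling) shrinks the numerator and drives PUE toward 1.0.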
Cooling Strategies for AI & Hyperscale Workloads
As traditional air-cooling methods struggle to keep up with these ever-increasing demands, more data center operators are turning to liquid cooling systems as a potential solution. Liquids such as water have roughly four times the specific heat capacity of air, making them far more effective at carrying heat. Furthermore, compared to air cooling, delivering coolant directly to servers can reduce cooling-related energy consumption by up to 30%.
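The "four times" figure can be checked directly from standard room-temperature specific heat values (approximate textbook numbers, not from any particular coolant spec):

```python
# Approximate specific heat capacities at room conditions, J/(kg*K).
c_p_water = 4186.0  # liquid water
c_p_air = 1005.0    # dry air

ratio = c_p_water / c_p_air
print(f"Per kilogram, water carries ~{ratio:.1f}x the heat of air")  # ~4.2x
```

Note this compares equal masses; because water is also far denser than air, the advantage per unit volume of coolant moved is much larger still.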
Direct-to-chip cooling is one of the most popular liquid cooling methods. This closed-loop system circulates a dielectric (electrically non-conductive) fluid through cold plates mounted on heat-generating components such as CPUs and GPUs. The cold plates transfer heat to the liquid, which then passes through a heat exchanger to shed that heat before returning to the loop.
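The heat such a loop can carry away follows the basic relation Q = ṁ · c_p · ΔT: mass flow rate times specific heat times the coolant's temperature rise across the cold plate. A rough sketch with hypothetical single-plate numbers (water-like coolant assumed; real direct-to-chip fluids and flow rates vary):

```python
# Heat removed by a coolant loop: Q = m_dot * c_p * delta_T.
# All values below are illustrative assumptions, not vendor specs.
flow_lpm = 2.0      # coolant flow through one cold plate, liters/minute (assumed)
density = 1000.0    # kg/m^3, water-like coolant (assumed)
c_p = 4186.0        # J/(kg*K), approximate specific heat of water
delta_t = 10.0      # K, coolant temperature rise across the plate (assumed)

m_dot = flow_lpm / 1000.0 / 60.0 * density  # convert L/min -> kg/s
q_watts = m_dot * c_p * delta_t
print(f"Heat removed: {q_watts:.0f} W")  # ~1395 W per plate at these values
```

Even this modest flow handles over a kilowatt per plate, which is why direct-to-chip scales to processors that air simply cannot keep within limits.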
Here at Bloom, we’re focused on helping data centers implement liquid cooling systems more effectively and efficiently. We use heat capture-enabled Energy Servers, whose waste-heat exhaust can be used in combination with an absorption chiller to generate chilled liquid. This method helps reduce reliance on conventional air conditioning systems and electric chillers, thereby improving PUE and lowering cooling costs.
It’s important to note that while there are several other emerging technologies that may prove useful for hyperscale cooling, such as immersion cooling, they may not be widely used in the next two years.
New Power Needs for New Cooling Methods
Liquid cooling has attracted a lot of attention for its efficiency, but that doesn’t mean overall energy consumption will decrease. Far from it: AI and high-performance computing workloads will continue to grow. Newer, more powerful processors will produce more heat than their older counterparts, even if liquid cooling can remove that heat effectively. That means data center power demand will increase.
How are data center operators meant to counter that? It can’t be through continued reliance on the already overburdened power grid.
Reliable and sustainable on-site energy may be the answer. Solid oxide fuel cells, like those that power Bloom’s Energy Servers, are one such solution. They produce power without combustion, allowing hyperscale data centers to generate the electricity they need without driving up electricity prices for others on the grid. In addition, Bloom’s servers can utilize waste heat in absorption chilling applications, as previously discussed, helping data centers further improve efficiency.
By generating energy directly at the data center, operators gain a reliable and flexible electricity source that can support massive computing loads and the cooling they require. On-site power from Bloom Energy can reduce dependence on strained power grids while integrating waste-heat recovery, creating a sustainable foundation for the future of data center operations.
To learn more about Bloom’s onsite power generation, click here.