High-Performance Computing Data Centers

High-Performance Computing (HPC) centers have high server utilization rates and high power requirements but typically low availability requirements. Once mainly used by universities and research centers, HPC facilities are increasingly operated by private businesses.

Barriers

Legacy designs can be highly customized and sometimes limited in retrofit options. Racks have high computing densities and very high server utilization, generating intense heat and large cooling loads. Water use can be quite high. Integration with mixed-use structures (offices, labs) can mean shared power and cooling infrastructure and an inability to run at warmer ambient temperatures or with warmer cooling water.
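To put these densities in perspective, the short sketch below converts a hypothetical per-rack IT load into the cooling capacity needed to remove it, since essentially all power drawn by the IT equipment is rejected as heat. The rack count and kW-per-rack figures are illustrative assumptions, not values from any facility discussed here.

    # Rough cooling-load estimate for a row of high-density HPC racks.
    # All values are hypothetical; essentially all IT power becomes heat.

    KW_PER_TON = 3.517  # 1 ton of refrigeration removes about 3.517 kW of heat

    def cooling_tons(it_load_kw: float) -> float:
        """Refrigeration tons required to remove it_load_kw of heat."""
        return it_load_kw / KW_PER_TON

    if __name__ == "__main__":
        racks = 20           # hypothetical row of HPC racks
        kw_per_rack = 40.0   # hypothetical high-density rack load
        total_kw = racks * kw_per_rack
        print(f"{total_kw:.0f} kW of IT load needs about {cooling_tons(total_kw):.0f} tons of cooling")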

Solutions

  • Owners/operators can move to newer, heat-tolerant IT components.
  • Waste heat can be diverted to other facilities and process uses.
  • Liquid cooling is an option; approaches range from replacing Computer Room Air Handler (CRAH) units with passive rear-door heat exchangers supplied with warm water to cooling directly at the chip.
  • Uninterruptible Power Supplies (UPSs) are less common, but energy savings still may be found in transformers, power distribution units, and reductions in voltage conversions (see the sketch after this list).
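As a rough illustration of the last point, the sketch below multiplies assumed efficiencies for each stage of the power chain to estimate how much incoming power actually reaches the IT load; every stage that can be removed or improved shrinks the conversion losses. The stage list and efficiency values are hypothetical, not measurements from any facility described on this page.

    # Illustrative power-chain loss estimate (hypothetical stage efficiencies).

    def delivered_fraction(efficiencies):
        """Fraction of input power that reaches the IT load after all
        conversion stages; the remainder is lost as heat."""
        fraction = 1.0
        for eff in efficiencies:
            fraction *= eff
        return fraction

    if __name__ == "__main__":
        # Hypothetical stages: utility transformer, UPS, PDU transformer, server power supply
        chain = [0.99, 0.94, 0.98, 0.92]
        frac = delivered_fraction(chain)
        print(f"Power delivered to IT load: {frac:.1%}")    # about 83.9%
        print(f"Lost in conversions:        {1 - frac:.1%}")  # about 16.1%
        # Eliminating one stage (e.g., running without a UPS) raises the delivered fraction:
        print(f"Without the UPS stage:      {delivered_fraction([0.99, 0.98, 0.92]):.1%}")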

Partner Examples and Additional Resources

Lawrence Berkeley National Laboratory: Retro-commissioning at the National Energy Research Scientific Computing Center
The National Energy Research Scientific Computing (NERSC) Center at Lawrence Berkeley National Laboratory (LBNL) outgrew its former data center space and underwent a retro-commissioning process to meet the data center's growing needs. As an HPC facility, NERSC faces unique energy management challenges due to its cooling system design and DOE’s requirements for high availability and high utilization.

LBNL’s Operational Data Analytics for Data Center Energy Management
To achieve and maintain lasting operational efficiency at NERSC, LBNL developed a sophisticated data analytics system – Operations Monitoring and Notification Infrastructure (OMNI) – to provide key operational insights about the data center's performance and energy efficiency.

National Renewable Energy Laboratory: Data Center Optimization at NREL Research Support Facility
Learn how, after a decade of operation, NREL continues to identify opportunities to improve the Research Support Facility data center's efficiency. As a result of these improvements, the data center has achieved an 84% reduction in PUE.
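For context on the metric, Power Usage Effectiveness (PUE) is the ratio of total facility energy to the energy delivered to the IT equipment, so a value of 1.0 would mean every kilowatt-hour goes to computing. The minimal sketch below computes PUE from two metered readings; the readings are hypothetical and are not NREL's actual figures.

    # Minimal PUE calculation from two metered energy readings (hypothetical values).

    def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
        """PUE = total facility energy / IT equipment energy (1.0 is the floor)."""
        return total_facility_kwh / it_equipment_kwh

    if __name__ == "__main__":
        total_kwh = 1_150_000   # hypothetical annual facility energy
        it_kwh = 1_000_000      # hypothetical annual IT equipment energy
        print(f"PUE = {pue(total_kwh, it_kwh):.2f}")   # 1.15

The difference between the two readings is the cooling and power-distribution overhead that efficiency projects like this one aim to shrink.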

Oak Ridge National Laboratory: Oak Ridge Leadership Computing Facility
The Oak Ridge Leadership Computing Facility is home to Summit, a high-performance computing machine. Moving away from traditional air cooling, Summit features an innovative warm-water cooling system, reducing the use of chilled water and inefficient chillers.

Taming The Energy Hog: What Every Organization Should Know To Address Data Center Energy Use
Watch this webinar to learn how Better Buildings Challenge and Accelerator partners have formed partnerships within their organizations and implemented measures to dramatically reduce data center energy usage.
 

