Liquid Cooling Success at Lawrence Berkeley National Lab’s 50B-1275 Data Center


Building 50B, Room 1275 is Lawrence Berkeley National Laboratory’s (LBNL) enterprise data center, used for in-house needs and scientific computing. The 5,600-square-foot data center was rebuilt in the mid-1990s and has gone through a series of transitions and upgrades. During the 2010s, email and other business applications were transitioned to the cloud and other spaces. At this time, 1275 began hosting high-performance computing (HPC) clusters owned by various scientific groups at LBNL. As the data center’s HPC hosting increased, so did the facility’s overall power usage. To address this growing energy usage, LBNL worked to identify opportunities to meet cooling needs as efficiently as possible. Efficient cooling systems help extend the lifespan of equipment and reduce the likelihood of malfunctions that could take the data center offline or impact the data being processed and stored.



The data center’s original cooling scheme used Computer-Room Air Conditioners (CRACs). In this raised-floor data center, the CRACs would discharge their cold air into the underfloor plenum and reject their heat to water cooled by the building’s cooling tower. Even though the CRACs had ample cooling capacity, poor air management across the data center meant cooling was not being delivered effectively.

Over time, improved air management helped ensure that cooling needs and capacity were in balance. In 2018, a cooling master plan was developed to guide the data center through increasing demand within the power and cooling constraints of the existing infrastructure. The plan includes steps that have already been taken at the data center as well as future considerations.


Completed elements of the plan include:

  • Moving UPS batteries and spinning disks, which have relatively tight temperature requirements, to another space.
  • Operating the data center as an ASHRAE A2 (air-cooled, level 2) area. This temperature range enables compressor-free cooling year-round in the Berkeley climate. It also requires that IT equipment purchased for use in the space is specified for A2 allowable conditions.
  • Using active rear doors to achieve the recommended IT inlet temperature. Active doors have larger heat exchangers than passive doors to get a closer approach temperature (the difference between the entering cooling water temperature and the exiting air temperature), and a set of fans to provide the necessary airflow through the exchangers.
  • Using the larger heat exchangers in the active rear doors to realize higher temperature rise between the entering and exiting cooling water. This enables more cooling for the same water flow.
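The water-side benefit in the last bullet follows from a simple energy balance: heat removed is proportional to both water flow and the water temperature rise. The sketch below illustrates this with a hypothetical rear-door calculation; the flow rates and temperature rises are illustrative numbers, not measurements from the 1275 data center.

```python
# Sensible heat removed by cooling water: Q = m_dot * c_p * delta_T.
# At the same flow rate, a higher water temperature rise across the
# rear door means proportionally more heat removed.
CP_WATER = 4.186   # kJ/(kg*K), specific heat of water
RHO_WATER = 1.0    # kg/L, density of water (approximate)

def door_cooling_kw(flow_lps: float, delta_t_k: float) -> float:
    """Heat removed (kW) for a given water flow (L/s) and water temperature rise (K)."""
    return flow_lps * RHO_WATER * CP_WATER * delta_t_k

# Same 1 L/s of cooling water: doubling the temperature rise
# doubles the cooling delivered.
print(door_cooling_kw(1.0, 5.0))    # ~20.9 kW
print(door_cooling_kw(1.0, 10.0))   # ~41.9 kW
```

This is why the larger heat exchangers matter: they allow a higher temperature rise (and a closer approach temperature) without increasing pump flow.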

As the above measures have been phased in, six of the seven CRACs have been removed, freeing up space in the computer room. The removals have also freed up cooling capacity in the closed-loop water system (since much less fan and compressor heat is added to the water) and power capacity (since the rear doors use far less power than the fans and compressors in the CRAC units did).

Future Considerations:

  • As density climbs further, utilizing direct-to-chip water cooling for more effective and efficient heat removal. This cooling system is now available directly from server manufacturers.
  • Additional efficiency and operational improvements include:
    • Adding more variable-frequency drives (VFDs) to the tower water and closed-loop cooling water pumps,
    • Adding a second closed-loop cooling water-to-tower water heat exchanger to allow maintenance without shutdown, and
    • Adding a water filtration system to meet rear-door manufacturer specifications.


Through these upgrades, the 1275 data center has greater system reliability and resilience, making it less likely to be taken offline due to cooling inadequacies. With a more efficient cooling system, the data center is now able to allocate additional power towards its IT loads, enabling more computing to be performed to support the needs of LBNL’s researchers.  


Annual Energy Use

Baseline (2014)
0.43 PUE-1
Expected (2020)
0.33 PUE-1

Energy Savings:

23% Reduction in PUE-1
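A quick check of the arithmetic behind the sidebar figures: PUE − 1 is the infrastructure overhead (cooling, power distribution, etc.) expressed as a fraction of IT energy, so the relative savings follow directly from the two values reported above.

```python
# PUE - 1 figures from this case study.
baseline = 0.43   # PUE - 1, baseline (2014)
expected = 0.33   # PUE - 1, expected (2020)

# Relative reduction in infrastructure overhead energy.
reduction = (baseline - expected) / baseline
print(f"Reduction in PUE - 1: {reduction:.0%}")  # → 23%
```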

Sector Type

Data Center


Location

Berkeley, California

Project Size

5,600 Square Feet

Exterior view of Building 50, where LBNL's enterprise data center is housed