Retro-commissioning at LBNL's National Energy Research Scientific Computing Center


The National Energy Research Scientific Computing (NERSC) Center is housed in the 140,000 gross-square-foot, LEED Gold Shyh Wang Hall at Lawrence Berkeley National Laboratory (LBNL). NERSC is the primary scientific computing facility for DOE’s Office of Science, with over 7,000 scientists performing a wide variety of research projects. As a high-performance computing (HPC) facility, NERSC faces unique energy management challenges due to its cooling system design and DOE’s requirements for high availability and high utilization.

Before moving to Shyh Wang Hall in 2015, NERSC was located in Oakland, California. Over time, NERSC outgrew the space, power, and cooling capacity of the Oakland facility. LBNL decided to design a new energy-efficient facility on the Lab’s main campus to meet NERSC’s growing needs.



When designing the new facility, engineers leveraged the temperate Berkeley, California climate to meet the data center's cooling needs. This allowed NERSC to eliminate compressor-based cooling, the most common cooling option for HPC data centers. Instead, the facility is cooled by outdoor air and by cool water generated from the facility's cooling tower plant; the cooled water is also used for direct cooling inside the high-performance computers.

While this cooling approach delivers significant cost savings, it makes indoor humidity control challenging. For this reason, the NERSC energy team leveraged its operational data analytics system – Operations Monitoring and Notification Infrastructure (OMNI) – to diagnose and solve indoor environmental control issues. Through OMNI, the facilities management team can access highly detailed data, allowing for more informed and timely decision making. This information helped the facilities team understand how the cooling system design behaved under certain atmospheric conditions that had been overlooked, and to adjust cooling control sequences accordingly.



To ensure optimal operational conditions, the facility team implemented an ongoing commissioning process for cooling system troubleshooting. In 2016, a commissioning consulting firm's assessment identified a potential 2,800 MWh of annual energy savings, worth about $175,000 per year, and nearly 1 million gallons of annual water savings. So far, five of the 10 recommended measures have been implemented, achieving 1,800 MWh of non-IT energy savings per year. The table below shows the associated energy, water, cost, and estimated Power Usage Effectiveness (PUE) reduction for the implemented measures.
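As a back-of-the-envelope check, the stated figures imply a blended electricity rate, which can then be used to estimate the value of the measures implemented so far. This is a sketch based only on the numbers above; the $/kWh rate is inferred, not stated in the source.

```python
# Figures from the 2016 assessment
annual_savings_mwh = 2800      # identified annual energy savings (MWh)
annual_savings_usd = 175_000   # stated annual cost savings ($)

# Implied blended electricity rate ($/kWh)
implied_rate = annual_savings_usd / (annual_savings_mwh * 1000)
print(f"Implied rate: ${implied_rate:.4f}/kWh")  # → Implied rate: $0.0625/kWh

# Rough value of the five implemented measures (1,800 MWh/yr)
implemented_mwh = 1800
estimated_value = implemented_mwh * 1000 * implied_rate
print(f"Estimated value: ${estimated_value:,.0f}/yr")  # → Estimated value: $112,500/yr
```

The estimate is in the same range as the reported cost savings; the difference reflects rounding in the source figures and measure-specific rates.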



In addition to saving energy and money, several of the new projects made controls more reliable and operations smoother. In the long term, this will reduce the staff time spent reacting to system issues, freeing staff to focus on other priority tasks. The robust OMNI data system has enabled NERSC to use visualizations of historical IT and energy performance metrics to identify further opportunities. The matrixed team collaboration model has also helped the team act on data-driven insights, such as working with manufacturers to plan for the next-generation HPC system.


Annual Energy Use

Baseline (2014)
1.34 PUE
Actual (2019)
1.08 PUE

Energy Savings

PUE-1 Reduction: 77%
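The "PUE-1" metric is the overhead portion of PUE (everything above the ideal value of 1.0). A minimal sketch of the arithmetic behind the reduction figure, using the baseline and actual PUE values above:

```python
# PUE overhead ("PUE - 1") reduction from the reported values
pue_baseline = 1.34  # 2014 baseline
pue_actual = 1.08    # 2019 actual

overhead_reduction = ((pue_baseline - 1) - (pue_actual - 1)) / (pue_baseline - 1)
# Evaluates to ~0.765; the reported 77% reflects rounding in the
# underlying (unrounded) PUE measurements.
print(f"PUE-1 reduction: {overhead_reduction:.0%}")
```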

Sector Type

Data Centers


Location

Berkeley, California

Project Size

20,000 Square Feet (HPC data center floor)

Financial Overview

$104,400 Cost Savings