
Summary

Direct Server Cabinet Cooling

Data Center HVAC System Design: Servers Contained Within a Cabinet vs. Servers Uncontained Within a Room

A special cabinet that encases servers so only the servers are cooled, not the entire server room.

Synopsis:

Convection cooling with air is currently the predominant method of heat removal in most data centers. Air handlers force large volumes of cooled air under a raised floor (the deeper the floor, the lower the friction resistance) and up through perforated tiles in front of (or under) computer racks. Fans within the server racks or “blade cages” distribute the cool air across the electronics that radiate heat, perhaps with the help of heat sinks or heat pipes.

In-rack cooling utilizes a dedicated water-cooled fan-coil that is integral with the server rack. The fan is located at the bottom of the rack so the cool air blows up through the server and out the top. Depending on the product, chilled water or refrigerant is used as the cooling medium. Though there are exceptions, the majority of products do not bring liquid into the actual server rack. The air conditioner, with water connections, is housed in an adjacent, but separate enclosure. The equipment at the rack level is still air cooled. This system easily accommodates racks drawing 4-7 kW.

In-rack cooling is a very precise and efficient means of cooling servers in server rooms, providing cooling directly where it is needed without moving a large volume of air, thus saving fan energy. In some products, instead of constant speed fans, a system of sensors monitors temperature and ramps fan speed and water flow up or down accordingly. The idea is to reduce operating costs while improving effectiveness. Products have been available since 2007 and the installed base ranges from small computer rooms, to enterprise data centers, to high density wiring closets.

Energy Savings: 15%
Energy Savings Rating: Extensive Assessment

Level 1 - Concept not validated: Claims of energy savings may not be credible due to lack of documentation or validation by unbiased experts.
Level 2 - Concept validated: An unbiased expert has validated efficiency concepts through technical review and calculations based on engineering principles.
Level 3 - Limited assessment: An unbiased expert has measured technology characteristics and factors of energy use through one or more tests in typical applications with a clear baseline.
Level 4 - Extensive assessment: Additional testing in relevant applications and environments has increased knowledge of performance across a broad range of products, applications, and system conditions.
Level 5 - Comprehensive analysis: Results of lab and field tests have been used to develop methods for reliable prediction of performance across the range of intended applications.
Level 6 - Approved measure: Protocols for technology application are established and approved.

Simple Payback, New Construction (years): 2.7
Simple Payback, Retrofit (years): 9.1

Simple Payback is one tool used to estimate the cost-effectiveness of a proposed investment, such as the investment in an energy efficient technology. Simple payback indicates how many years it will take for the initial investment to "pay itself back." The basic formula for calculating a simple payback is:

Simple Payback = Incremental First Cost / Annual Savings

The Incremental First Cost is determined by subtracting the Baseline First Cost from the Measure First Cost.

For New Construction, the Baseline First Cost is the cost to purchase the standard-practice technology. The Measure First Cost is the cost of the alternative, more energy-efficient technology. Installation costs are not included, as it is assumed that installation costs are approximately the same for the Baseline and the Emerging Technology.

For Retrofit scenarios, the Baseline First Cost is $0, since the baseline scenario is to leave the existing equipment in place. The Emerging Technology First Cost is the Measure First Cost plus Installation Cost (the cost of the replacement technology, plus the labor cost to install it). Retrofit scenarios generally have a higher First Cost and longer Simple Paybacks than New Construction scenarios.

Simple Paybacks are called "simple" because they do not include details such as the time value of money or inflation, and often do not include operations and maintenance (O&M) costs or end-of-life disposal costs. However, they can still provide a powerful tool for a quick assessment of a proposed measure. These paybacks are rough estimates based upon best available data, and should be treated with caution. For major financial decisions, it is suggested that a full Lifecycle Cost Analysis be performed which includes the unique details of your situation.

The energy savings estimates are based upon an electric rate of $0.09/kWh, and are calculated by comparing the range of estimated energy savings to the baseline energy use. For most technologies, this results in "Typical," "Fast," and "Slow" payback estimates, corresponding to the "Typical," "High," and "Low" estimates of energy savings, respectively.
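
As a rough illustration only, the Simple Payback formula above can be written as a short Python function; the variable names are ours and the numbers in the example are hypothetical:

    def simple_payback(measure_first_cost, baseline_first_cost,
                       installation_cost, annual_savings):
        """Simple Payback (years) = Incremental First Cost / Annual Savings.

        New construction: pass the baseline technology's first cost and an
        installation cost of zero (installation is assumed comparable).
        Retrofit: pass a baseline first cost of zero and include the labor
        cost to install the replacement technology.
        """
        incremental_first_cost = (measure_first_cost + installation_cost
                                  - baseline_first_cost)
        return incremental_first_cost / annual_savings

    # Hypothetical example: a $1,000 measure vs. a $700 baseline technology,
    # with no installation cost and $110 of electricity saved per year.
    print(simple_payback(1000.0, 700.0, 0.0, 110.0))   # about 2.7 years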

TAG Technical Score:  2.62

Status:

Details

Direct Server Cabinet Cooling

Data Center HVAC System Design: Servers Contained Within a Cabinet vs. Servers Uncontained Within a Room

A special cabinet that encases servers so only the servers are cooled, not the entire server room.
Item ID: 68
Sector: Commercial
Energy System: HVAC--Other HVAC Systems
Technical Advisory Group: 2009 HVAC TAG (#2)
Technical Advisory Group: 2013 Information Technology TAG (#8)
Average TAG Rating: 3.15 out of 5
TAG Ranking Date: 10/25/2013
TAG Rating Commentary: See the Rank & Scores section below.

Baseline Example:

Baseline Description: Chilled air supplied underfloor through perforated floor panels, ceiling return
Baseline Energy Use: 810 kWh per year per square foot

Comments:

Convection cooling with air is currently the predominant method of heat removal in most data centers. Air handlers force large volumes of cooled air under a raised floor (the deeper the floor, the lower the friction resistance) and up through perforated tiles in front of (or under) computer racks. Fans within the server racks or “blade cages” distribute the cool air across the electronics that radiate heat, perhaps with the help of heat sinks or heat pipes.  The warmed air rises to the ceiling where it is returned to the computer room air handlers to be recooled.

The baseline and energy savings are based on the energy use of a "typical" data center, as defined as standard by the E3T IT TAG team. The energy use of a full data center is 1,500 kWh/sf/yr. The baseline for this technology is the HVAC portion of that, which is 54%, or 810 kWh/sf/yr. (WSU EEP, 2013)

Manufacturer's Energy Savings Claims:

"Typical" Savings: 30%
Savings Range: From 10% to 80%

Comments:

Savings are stated three ways: fan energy savings, chiller energy savings, or both. Manufacturers' estimates of savings may represent one or both of these primary energy users in data center temperature management. For example, "When combined with Motivair Free Cooling chiller annual energy savings up to 93% are possible versus traditional CRAC systems." That 93% of HVAC energy includes a water-side economizer. Information from 42U estimates savings of 20% to 50%, and as high as 80%, again including an economizer in the maximum energy savings potential. For the purpose of representing comparative savings, only the fan and chiller are used, resulting in a manufacturer's purported savings of 30% (42U, 2013).

Best Estimate of Energy Savings:

"Typical" Savings: 15%
Low and High Energy Savings: 10% to 50%
Energy Savings Reliability: 4 - Extensive Assessment

Comments:

Cooling equipment is typically controlled to maintain room temperature using averaging wall thermostats. The cold air is typically supplied under the floor, and operators must place perforated floor panels to direct the cold air where it is needed most. This loose control of temperature results in overcooling some racks, since the cold air is not directed only to the hot spots. In other words, for a given cfm, a lightly loaded server produces a colder discharge temperature (T2) than a heavily loaded one, so the lightly loaded server is cooled more than necessary. If the goal is only to keep each server below 80°F, then supplying just enough cold air to maintain that discharge air temperature will save energy. From studies such as Coles (2010), we find that in-rack cooling provides about 15% energy savings.

The savings depend on many factors. For example, if a data center has hot spots, the central system will overcool the non-hot spots to meet the load of the hot area. Larger data centers will benefit more from this product than smaller data centers. Another factor that will impact savings is the diligence of the data center operators in balancing airflow based on the needs of each rack. Other factors that impact savings include server loading, fan and chiller equipment efficiency, and ambient temperatures.
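
To make the overcooling effect described above concrete, here is a minimal sketch using the standard sensible-heat relation for air (Q ≈ 1.08 × cfm × ΔT, in Btu/hr at standard conditions); the supply temperature, airflow, and rack loads are hypothetical values chosen only for illustration:

    # Sensible heat pickup of standard air: Q [Btu/hr] ~= 1.08 * cfm * dT [deg F]
    BTU_PER_HR_PER_KW = 3412

    def discharge_temp(server_load_kw, supply_temp_f, airflow_cfm):
        """Leaving-air temperature (T2) for a rack at a fixed airflow."""
        delta_t = (server_load_kw * BTU_PER_HR_PER_KW) / (1.08 * airflow_cfm)
        return supply_temp_f + delta_t

    # The same 1,000 cfm delivered to a heavily and a lightly loaded rack:
    print(discharge_temp(6.0, 65.0, 1000.0))   # ~84 F: the hot rack sets the airflow requirement
    print(discharge_temp(2.0, 65.0, 1000.0))   # ~71 F: the lightly loaded rack is overcooled

In-rack cooling avoids this by modulating fan speed and water flow to each rack's actual load rather than sizing the airflow for the worst case.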

Energy Use of Emerging Technology:
688.5 kWh per square foot per year

Energy Use of an Emerging Technology is based upon the following algorithm.

Baseline Energy Use - (Baseline Energy Use * Best Estimate of Energy Savings), using either the typical savings or the high end of the savings range.
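
Applied to this record's typical-savings case, the calculation is, as a minimal sketch:

    baseline_energy_use = 810   # kWh per square foot per year (HVAC portion)
    typical_savings = 0.15      # best estimate of typical energy savings
    et_energy_use = baseline_energy_use - (baseline_energy_use * typical_savings)
    print(et_energy_use)        # 688.5 kWh per square foot per year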

Technical Potential:
Units: square foot
Potential number of units replaced by this technology: 4,362,704
Comments:

We have not been able to find accurate data for the square footage of data centers in the Northwest. The best, most up-to-date estimate of space in the US we could find is from DataCenterDynamics (DCD, 2014, p. 4). According to this report, the total "white space" in the US is 109,067,617 sf. To convert to the Northwest, we use a standard of 4% of national data, based on relative population. In this case, the Northwest probably has more than its share of data centers, so a higher number could be justified. However, we are not likely to be serving the mega data centers over 100,000 sf. As an initial approximation, we will use 4%, which gives a total floor space of non-mega data centers in the Northwest of 4,362,704 sf.
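
The floor-area estimate follows directly from the national figure and the assumed 4% regional share; a minimal sketch:

    us_white_space_sf = 109_067_617   # total US data center "white space" (DCD, 2014)
    nw_share = 0.04                   # Northwest assumed at 4% of the national total
    nw_data_center_sf = int(us_white_space_sf * nw_share)
    print(nw_data_center_sf)          # 4362704 sf, as reported above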

Regional Technical Potential:
0.53 TWh per year
61 aMW

Regional Technical Potential of an Emerging Technology is calculated as follows:

Baseline Energy Use * Estimate of Energy Savings (either Typical savings OR the high range of savings) * Technical Potential (potential number of units replaced by the Emerging Technology)
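
Using the baseline energy use, typical savings, and technical potential from this record, the regional figures above can be reproduced as follows (an aMW, or average megawatt, is annual energy spread over all 8,760 hours of the year); a minimal sketch:

    baseline_energy_use = 810             # kWh per square foot per year
    typical_savings = 0.15                # best estimate of typical savings
    technical_potential_sf = 4_362_704    # Northwest data center floor area, estimated above

    annual_savings_kwh = baseline_energy_use * typical_savings * technical_potential_sf
    print(annual_savings_kwh / 1e9)           # ~0.53 TWh per year
    print(annual_savings_kwh / 8760 / 1e3)    # ~60.5, reported above as 61 aMW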

First Cost:

Installed first cost per: square foot
Emerging Technology Unit Cost (Equipment Only): $100.00
Emerging Technology Installation Cost (Labor, Disposal, Etc.): $0.00
Baseline Technology Unit Cost (Equipment Only): $70.00

Comments:

The baseline is servers with no enclosures (cases) around them, estimated at $70/sf per RS Means. The cabinets are estimated to add another $30 per sq ft, covering the cabinets, controls, additional piping, etc. A raised floor may still be preferred so that a leak would not occur over the servers. Installation can occur in stages.

Cost Effectiveness:

Simple payback, new construction (years): 2.7

Simple payback, retrofit (years): 9.1


Cost Effectiveness is calculated using baseline energy use, best estimate of typical energy savings, and first cost. It does not account for factors such as impacts on O&M costs (which could be significant if product life is greatly extended) or savings of non-electric fuels such as natural gas. Actual overall cost effectiveness could be significantly different based on these other factors.
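
The two payback figures above can be reproduced from the first-cost and energy-savings values elsewhere in this record; a minimal sketch (O&M costs are not included, consistent with the note above):

    ELECTRIC_RATE = 0.09        # $/kWh, the rate assumed in this database
    baseline_energy_use = 810   # kWh per square foot per year (HVAC portion)
    typical_savings = 0.15      # best estimate of typical savings
    et_unit_cost = 100.0        # $/sf, emerging technology equipment only
    baseline_unit_cost = 70.0   # $/sf, baseline equipment only
    install_cost = 0.0          # $/sf, installation cost

    annual_dollar_savings = baseline_energy_use * typical_savings * ELECTRIC_RATE  # ~$10.94/sf/yr

    print((et_unit_cost - baseline_unit_cost) / annual_dollar_savings)   # ~2.7 years, new construction
    print((et_unit_cost + install_cost) / annual_dollar_savings)         # ~9.1 years, retrofit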

Detailed Description:

This ET is a special cabinet that encases servers so that only the servers are cooled, not the entire server room. Each server rack receives only as much cooling as needed to maintain its maximum allowable temperature. A few years ago, it was not uncommon to find that IT managers of private server rooms were averse to trying something new and were willing to accept the inefficiencies of traditional methods in favor of reliability and familiarity. But with growing recognition and support of this technology, along with documented energy savings and reliability, the trend is slowly changing.

As server racks get denser and cooling demands rise, this ET makes a good alternative to adding more computer room air conditioning (CRAC) and chiller capacity.

Product Information:
  • 42U, In Row Cooling
  • Infinti, Direct Cabinet Cooling Solutions
  • Data Center Resources, DirectCool Cabinet System

Standard Practice:

Standard practice is to supply cool air under the floor and direct the cool air up through perforated floor panels that may be strategically located at the hot spots.  As the cool air picks up heat, it rises to the ceiling and is returned to the air handler to be recooled.

Development Status:

Available from more than one manufacturer, this concept has been on the market for about a decade. The product is conducive to retrofit in stages, partitioning the space as the whole room gets upgraded.

Non-Energy Benefits:

If the IT manager is comfortable with routing chilled-water piping over the servers, the cost of a raised floor can be saved. Also, since the airflow is directed through the cabinets, the above-ceiling space for return-air ductwork might be a little smaller. We estimate the savings at about 15-20%; since equipment is usually sized with this safety factor, engineers may be comfortable downsizing the primary equipment, saving on the first cost of the chilled-water system.

End User Drawbacks:

This product allows for tight control of temperatures.  Therefore, there are many sensors that need to be monitored and calibrated regularly.

Operations and Maintenance Costs:

Baseline Cost: $0.00 per square foot per year
Emerging Technology Cost: $2.00 per square foot per year

Comments:

These cabinets come with top, side, door, or slide-out fan trays that need to be maintained. All of the primary cooling equipment is the same for this ET and the Baseline. To keep the controls calibrated, estimate about one week of labor, twice a year, for a typical 5,000 sf data center. This work is not expected to create jobs; rather, it will be absorbed into the daily routine of existing personnel. Sensors, actuators, and server fans will occasionally need replacement, which existing operators can easily handle. Determining a failed or failing component would be part of the training needed for the on-site operators.

Effective Life:

Anticipated Lifespan of Emerging Technology: 20 years

Comments:

With care, the cabinets and associated controls can last for decades.

Competing Technologies:

Computer room air conditioners (CRACs), such as those from Liebert Corporation and Data Aire Inc. These units are spaced around the room to supply cold air under the floor, using perforated floor panels to direct the cold air at the hot spots.

References and Citations:

EERE, 06/03/2010. Data Center Rack Cooling with Rear-door Heat Exchanger. Energy Efficiency & Renewable Energy.

E Source, 05/01/2012. Managing Energy Costs in Data Centers. E Source.

Magnus Herrlin, 01/30/2006. Rack Cooling Effectiveness in Data Centers and Telecom Central Offices: The Rack Cooling Index (RCI). American Society of Heating, Refrigerating and Air-Conditioning Engineers.

David Moss, 08/09/2010. A Comparison of Room-, Row-, and Rack-Based Data Center Cooling Products. Dell Data Center Infrastructure.

Henry Coles, 10/26/2010. Demonstration of Alternative Cooling for Rack-Mounted Computer Equipment. Lawrence Berkeley National Laboratory.

EERE, 03/27/2012. Improving Data Center Efficiency with Rack or Row Cooling Devices. Energy Efficiency & Renewable Energy.

PNNL, 07/01/2011. What is the Energy Smart Data Center Project Researching? US DOE.

Sullivan, 02/04/2010. Energy Star for Data Centers. Energy Star.

WSU EEP, 12/06/2013. Standard Energy Usage Numbers for E3TNW. Washington State University Energy Program.

DCD, 01/22/2014. Global Data Center Space 2013. DatacenterDynamics.

Rank & Scores

Direct Server Cabinet Cooling

2013 Information Technology TAG (#8)


Technical Advisory Group: 2013 Information Technology TAG (#8)
TAG Ranking: 12 out of 57
Average TAG Rating: 3.15 out of 5
TAG Ranking Date: 10/25/2013
TAG Rating Commentary:

  1. Very expensive and typically uses MORE ENERGY (if done in addition to air cooling rather than in place of it - which is how it is typically used). This treats the symptom (a hot spot) of poor data center planning. Hot spots occur when a rack's power density exceeds the data center's cooling capacity (watts/sq ft). This can be avoided by using less dense rack loading. ONLY needed if space-constrained, which is rare.
  2. Most likely to be used in a new build-out. I would estimate that this technology has limited applicability in the retrofit market because of the reluctance to disrupt an operational data center.
  3. Practical; issues with water in a data center (end-user concerns); costly to implement.
  4. Lots of resistance to this in the industry. Also, not clear on economics. If the numbers can be made to work, it has great potential.
  5. Most direct server cabinet cooling systems are not ETs, but some are. We provide incentives for this in new construction projects but not retrofit projects.

For ET #158, same technology:

  1. Practical; issues with water in a data center (end-user concerns); costly to implement.
  2. Sounds like a good idea if institutional barriers and high expense can be overcome.
  3. This is an ET.
  4. Very difficult to implement in smaller DCs.
  5. These savings are always shown compared to simple CRAC units - but I don't think this is much better than 100% OA cooling in the Pacific NW with really good airflow management.


Technical Score Details

TAG Technical Score: 2.6 out of 5

How significant and reliable are the energy savings?
Energy Savings Score: 3.0
Comments:

  • Often overstated, but nevertheless reduced fan energy use is quantifiable and real.

How great are the non-energy advantages for adopting this technology?
Non-Energy Benefits Score: 2.6
Comments:

  • Greater redundancy would be provided by reliance on more diverse HVAC system components.
  • This type of equipment is in fact sold primarily as a means of solving a localized cooling problem, usually brought about because the IT manager has loaded a cabinet beyond the design, or out of range with the balance of the center. That said, I have never seen these used in small server rooms or even localized data centers - these get installed in enterprise data centers as a stopgap cooling solution.
  • Could free up space and capacity for expansion, and prevent hot spots.

How ready are product and provider to scale up for widespread use in the Pacific Northwest?
Technology Readiness Score: 3.1
Comments:

  • Technology is readily available for fan-coil units at each rack. Liquid cooling technologies direct to the server are more specialized and not ready for widespread adoption. Energy codes requiring air-side economizer use discourage or eliminate adoption of this technology.
  • Plenty o'vendors!

How easy is it to change to the proposed technology?
Ease of Adoption Score: 2.1
Comments:

  • Requires plumbing of chilled water lines... this is not an easy fix.
  • Commercial cabinet-level solutions are costly.
  • Could be very time-consuming, consolidating high-density racks and piping chilled water to cabinets.

Considering all costs and all benefits, how good a purchase is this technology for the owner?
Value Score: 2.3
Comments:

  • No cost data available to compare a direct-to-rack approach with more traditional cooling system approaches.
  • Based on the benefits of solving a localized cooling issue.



Completed:
12/4/2013 3:56:13 PM
Last Edited:
12/4/2013 3:56:13 PM

2009 HVAC TAG (#2)


Technical Advisory Group: 2009 HVAC TAG (#2)
TAG Ranking:
Average TAG Rating:
TAG Ranking Date:
TAG Rating Commentary:
