
Server Density Fine Print...

March 17, 2016

The role of IT is to maximize business capacity, and refreshing your servers is an opportunity to increase capacity, reduce hardware, and, most importantly, lower costs for the business. It is reasonable to assume that packing more physical servers into less rack space helps you accomplish these goals.

Here is the problem: increasing server density by repackaging servers into smaller physical chassis does not mean you get more servers into fewer racks. Even with moderately dense blade and rack solutions today, you run out of available power, and more critically available cooling, long before you run out of physical space. You might say, "we are building a new data center that is sure to offer higher density," or "we are moving to a new co-location facility and this won't be an issue." However, it takes expensive and complex technology to achieve higher average rack cooling density on the facility side, and critical trade-offs on the server side: this is the fine print. Unfortunately, purchasing decisions are being made on the number of servers vendors can stuff into a rack unit without truly reading that fine print.
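To make that concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it is an illustrative assumption rather than vendor or facility data: a 42U rack, 1U servers drawing roughly 350 W each under load, and the ~8 kW per-rack budget discussed below.

```python
# Back-of-the-envelope sketch: is a rack limited by space or by power/cooling?
# All figures are illustrative assumptions, not vendor or facility data.

RACK_UNITS = 42            # assumed standard rack height
SERVER_RU = 1              # assumed 1U server form factor
SERVER_WATTS = 350         # assumed average draw per server under load
RACK_BUDGET_WATTS = 8000   # assumed facility power/cooling budget per rack

space_limit = RACK_UNITS // SERVER_RU            # servers that physically fit
power_limit = RACK_BUDGET_WATTS // SERVER_WATTS  # servers the rack can power and cool

print(f"Space allows {space_limit} servers; the power budget allows {power_limit}.")
# -> Space allows 42 servers; the power budget allows 22.
# Shrinking the chassis raises the space limit, but the power limit stays put.
```

Under these assumed numbers, the rack hits its power and cooling ceiling at roughly half its physical capacity, which is exactly why a smaller chassis alone does not buy you more servers per rack.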

Let's look at the real power and cooling limitations of modern data centers: not aging facilities that are long in the tooth, but facilities being built today to last another 10-15 years. What are the power and cooling design guidelines? To discuss these factors, we invited Vali Sorell, the Chief Critical Facilities HVAC Engineer at Syska Hennessy, one of the largest data center design firms in the world, to provide input:

Typical power per rack in enterprise-type data centers today rarely exceeds 8 kW. There are always some high power racks or cabinets in the typical data center, but overall averages are usually considerably LOWER than the IT load planners had anticipated. The fact that there are pockets of higher density should not affect the way in which racks and equipment are purchased. In those isolated cases, cooling provisions can be made to account for it. This brings up a few points that need to be considered when planning a data center:

With the exception of high performance computing applications, specifically those in which cabling distances affect the speed to a solution, there is no need to densify. Densifying when it is not actually called for creates situations in which high power servers are all grouped into a small number of slots. Without proper air flow management, providing a cooling solution for that layout can be complicated, and it is often overlooked. Data center owners often spend too much time "minding the gap" between adjacent cabinets and miss the fact that blatant gaps inside cabinets are just as harmful to effective operation. Even if blanking panels are used to close those gaps, the high density servers and their higher power fans still promote some degree of internal recirculation and bypass. As a result, the use of higher density servers can result in lost efficiency, poor internal air flow, and higher temperatures entering the IT equipment.

Densifying is not cost effective. Delivering cooling and power to a cabinet populated to 20 kW costs more than twice as much as doing the same for twice as many cabinets populated to 10 kW. The typical argument FOR densification is that it uses less floor area. That approach misses the bigger picture: the back-of-house spaces, which deliver cooling and power to the cabinets, are not affected by densification. The only determinant of back-of-house floor area is the TOTAL power delivered to the data hall. For high density installations, the back-of-house to data hall floor area ratio can be upwards of 5 to 1; for lower densities it is closer to 2 or 3 to 1 (a rough footprint sketch follows these points). Adding more complexity to the high density solution, the overhead or underfloor spaces required for delivering cooling air increase quickly as the loads per rack increase.

Additionally, with higher density, more modes of failure exist, and when a failure occurs, the time available to prevent a shutdown of the facility is significantly reduced. The net result: an increase in density leads to a decrease in reliability.
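To illustrate the floor-area point above, here is a rough sketch. All of the specific numbers are assumptions chosen only to mirror the ratios Vali describes: a 1 MW data hall, a back-of-house plant area driven by total load (and therefore identical in both scenarios), and a nominal footprint per rack position.

```python
# Rough illustration (all numbers assumed): why densifying saves less floor
# space than expected. Back-of-house plant area tracks TOTAL load, not rack count.

TOTAL_LOAD_KW = 1000       # assumed 1 MW data hall
BACK_OF_HOUSE_M2 = 600     # assumed plant area; set by total load, same in both cases
AREA_PER_RACK_M2 = 2.5     # assumed rack footprint including aisle share

for kw_per_rack in (10, 20):
    racks = TOTAL_LOAD_KW / kw_per_rack
    data_hall_m2 = racks * AREA_PER_RACK_M2
    ratio = BACK_OF_HOUSE_M2 / data_hall_m2
    total_m2 = BACK_OF_HOUSE_M2 + data_hall_m2
    print(f"{kw_per_rack} kW/rack: {racks:.0f} racks, data hall {data_hall_m2:.0f} m2, "
          f"back-of-house:data hall = {ratio:.1f}:1, total footprint {total_m2:.0f} m2")
# -> 10 kW/rack: 100 racks, 250 m2 data hall, 2.4:1 ratio, 850 m2 total
# -> 20 kW/rack:  50 racks, 125 m2 data hall, 4.8:1 ratio, 725 m2 total
```

Under these assumed numbers, doubling rack density shrinks the total footprint only modestly, while, per the point above, roughly doubling the cost of delivering power and cooling to each cabinet.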

The bottom line is that new data centers rarely average over 8 kW per rack, and increasing average density above 12 kW requires expensive supplemental cooling technology that adds complexity and affects overall data center reliability. Consider that a typical blade enclosure consumes, on average, between 270 W and 470 W per rack unit (RU) depending on workload. This means a fully populated 42U rack today could consume close to 20 kW of power and cooling capacity (42 RU x 470 W ≈ 19.7 kW)! Vendors are creating servers today that put several nodes into a 2U package, but they aren't all sharing the Fine Print that affects customer decisions. It would sound something like this:

Density isn't the key to refreshing your data center servers; efficiency is the key. Cisco UCS brings the most power-efficient platform to your data center by unifying the fabric and unifying management to maximize each and every rack. No other platform provides the same rack efficiency, in both power and operation. Let's look at your racks and the business efficiency you will gain with Cisco UCS!

Thanks to our Cisco Power & Cooling experts, Roy Zeighami & Jeffrey Metcalf

For More Information:

  • www.cisco.com/go/ucs

  • www.syska.com


Tags: Data Center, UCS, power and cooling, hyperscale, rack density, high density computing
