Data centre cabling and cooling

Siemon Australia
Sunday, 30 March, 2014


Data centres struggle with airflow obstructions inside cabinets, under the floor and overhead, caused by outdated cabling and high-density deployments. Add to this the problem of not being able to clean properly under the floor and you have a big, dusty, nasty, underperforming mess. This article looks at the cooling and cabling problems from the ground up.

Data centres that have not practised cable abatement or removal of old cabling face serious challenges. Old cables not only cause performance problems but they can also wreak havoc on cooling.

Under a raised floor, cabling should be run in the hot aisles so it doesn’t obstruct airflow to the perforated tiles. The cabling basket trays can act as baffles and help channel the cold air into the cold aisles. Pathways and spaces should be properly sized to accommodate growth, so that they do not become congested. Change management is also important - abandoned cables should be removed if they are no longer needed for current or future applications.
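As a rough illustration of pathway sizing, the sketch below estimates how many copper cables fit in a basket tray at a given fill allowance. The tray dimensions, cable diameter and 50% fill figure are assumed planning values for this example, not figures from this article or any particular standard.

```python
import math

def tray_capacity(tray_width_mm, tray_depth_mm, cable_od_mm, fill_ratio=0.5):
    """Approximate cable count for a basket tray at a given fill ratio."""
    tray_area = tray_width_mm * tray_depth_mm          # usable cross-section
    cable_area = math.pi * (cable_od_mm / 2) ** 2      # per-cable cross-section
    return int(tray_area * fill_ratio // cable_area)

# Example: 300 mm x 100 mm basket tray, ~7.5 mm OD category 6A cable,
# with half the tray kept free for growth (all assumed planning figures).
print(tray_capacity(300, 100, 7.5))   # about 339 cables
```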

Attention must be paid to the direction of pathways and other obstructions under the floor. Cables should not run in front of and perpendicular to the air handler discharge, where they block the airflow. If CRAC/CRAH units are on the perimeter of the room, the highest-density cabinets should be placed towards the centre of the room.

Data centres are evolving in a rather cyclical manner. When data centres (the original computer rooms) were first built, computing services were provided via a mainframe (virtualised) environment. End users’ dumb terminals were connected point-to-point with coax or via bus cabling using twinax. Enter the PC and Intel-based server platforms, and new connections were needed. We have gone through several generations of cabling choices: coax (thicknet, thinnet) and categories 3, 4, 5, 5e and 6. Now, the recommended 10 gigabit-capable copper choices for a data centre are category 6A, 7 and 7A channels, with OM3-grade fibre for multimode electronics and singlemode fibre for longer-range electronics.

In some data centres, samples of each of these systems can still be found under the raised floor or in overhead pathways, many of them originally run point-to-point. Today, however, the ‘from’ point and ‘to’ point are a mystery, making cable abatement (removal of abandoned cable) problematic at best. This is probably the number one cause of cable spaghetti. Compounding the problem is a lack of naming conventions. Even if the cables were labelled at both ends, the labelling may no longer make sense. For instance, a cable may be labelled “Unix Row, Cabinet 1”; years later, the Unix row may have been replaced and new personnel may not know where it was.

This is why it is important to follow structured cabling standards. These sites can be remediated by running new trunk assemblies or installing cables of a new colour during an upgrade, making it easier to identify what can be removed once the equipment is up and running on the upgraded system.
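As a simple sketch of how that labelling discipline pays off, the fragment below records each channel with explicit from/to endpoints and the cable colour used at installation; filtering on those fields is enough to shortlist abatement candidates once an upgrade is in service. The record format, identifiers and colours here are hypothetical.

```python
# Hypothetical channel inventory: explicit from/to endpoints plus the cable
# colour used at installation make abatement decisions straightforward.
channels = [
    {"id": "A01-PP01:01", "from": "Cabinet A01, panel 01, port 01",
     "to": "Cabinet C07, panel 02, port 01", "colour": "blue", "installed": 2014},
    {"id": "LEGACY-017", "from": "unknown",
     "to": "unknown", "colour": "grey", "installed": 2003},
]

# After upgrading with new (blue) trunks, anything in the old colour or with
# unknown endpoints is a candidate for removal once the new system is live.
abatement_candidates = [c["id"] for c in channels
                        if c["colour"] != "blue"
                        or "unknown" in (c["from"], c["to"])]
print(abatement_candidates)   # ['LEGACY-017']
```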

It is also important to ensure there is proper airflow within the cabinets. One issue with 600 mm wide cabinets is that there is not much room for the cable plant, especially when they are loaded with servers or switches. One way to get the cabling away from equipment is to use wider cabinets. New cabinets have zero-U cabling zones (cabling that consumes no horizontal rack space) that leverage vertical space between cabinets and help address congestion problems. Zero-U patch panels put patching ports right beside equipment ports - reducing the need for more expensive, longer cords. Shorter cords with less slack improve airflow and aesthetics, and simplify channel tracing.

The other cable management solution, developed by server manufacturers, is the swing arm. Swing arms route cable horizontally across the equipment, blocking exhaust fans and spaces critical to proper hot aisle/cold aisle airflow. They also don’t always stay intact after moves, additions and changes, leaving the back of cabinets cluttered with loose swing arms and hanging cables. Wider cabinets and better cable management (both vertical and horizontal) will improve airflow management.

Cables placed above the cabinets can also cause problems. Overhead systems must not be run over hot aisles, as they will act as a ceiling that traps hot air. The solution, in this case, is to route the overhead cabling over the cold aisles instead.

Blanking panels and brush guard panels also help improve thermal efficiency by preventing airflow through vacant rackmount spaces within enclosures. By sealing the front of the cabinet, these panels keep the cold air directed at the equipment where it is needed and help prevent recirculation of hot air, improving a facility’s cooling effectiveness. They also fill empty rackmount space to conceal openings or reserve positions for future use. Brush guard panels provide the added benefit of allowing cables to pass through the front and rear of a rack or cabinet while still maintaining thermal isolation.

Cabling is typically run to the rear of server cabinets, and that is not where you want the cold air to go. It is necessary to control the static pressure under the floor to ensure that cold air enters the room only where it is needed, through tile perforations and/or grilles.
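As a first-order illustration of that relationship, the sketch below approximates the airflow delivered by a single perforated tile using a simple orifice-flow model. The discharge coefficient, tile open area and underfloor pressure are assumed typical values for the example only; real tile performance should come from manufacturer data.

```python
import math

def tile_airflow_m3s(open_area_m2, static_pressure_pa,
                     discharge_coeff=0.65, air_density_kg_m3=1.2):
    """Approximate volumetric flow (m^3/s) through a perforated floor tile,
    using the orifice-flow relation Q = Cd * A * sqrt(2 * dP / rho)."""
    velocity = math.sqrt(2 * static_pressure_pa / air_density_kg_m3)
    return discharge_coeff * open_area_m2 * velocity

# 600 mm x 600 mm tile with 25% open area at ~12 Pa underfloor static
# pressure (all assumed values): roughly 940 m^3/h of cold air per tile.
q = tile_airflow_m3s(0.6 * 0.6 * 0.25, 12.0)
print(round(q * 3600))   # ~942
```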

The key to fixing airflow problems is to manage your pathways and spaces wherever they are. System and change management after installation also helps: removing unwanted cables avoids potential problems. It is important to adhere to structured cabling standards within the data centre so that each channel is properly labelled with its from and to points and staff know what can be removed or re-used.

Whenever a new server or switch is placed into use or decommissioned, someone should look at the adjoining cable ports and determine whether each port can be re-used or removed. This is particularly true of point-to-point connections - they are nothing but long patch cords. When you see bundles of cables and a messy data centre, point-to-point cables are typically the culprit, along with patch cords or jumpers that have outlived their usefulness. Some companies choose to buy custom patch cord and jumper lengths to eliminate slack in their systems.
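As an illustrative follow-on (the device names and record format are invented for this sketch), a simple sweep over a connection inventory can flag cords whose far end no longer terminates on active equipment, so they can be reclaimed at decommissioning time rather than left in the cabinet.

```python
# Hypothetical decommissioning check: flag any recorded connection whose
# far end is no longer an active device, so the cord can be reclaimed
# rather than left hanging at the back of the cabinet.
active_devices = {"sw-core-01", "srv-web-03"}

connections = [
    {"cable": "P-0012", "a_end": "sw-core-01", "b_end": "srv-web-03"},
    {"cable": "P-0040", "a_end": "sw-core-01", "b_end": "srv-db-09"},  # server retired
]

orphaned = [c["cable"] for c in connections
            if c["a_end"] not in active_devices
            or c["b_end"] not in active_devices]
print(orphaned)   # ['P-0040']
```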

In conclusion, an overall study - covering total equipment cost, port utilisation, maintenance and power cost over time, across both facilities and networking - should be undertaken to make the best overall decision.
