Your data centre: keep it cool, keep it safe, and keep it running

Posted: 9 October 2017 | By Nick Claxson

A little over a century ago, the small matter of an archduke being shot led to a chain of events that rapidly escalated into four years of the largest and most gruesome war the world had ever seen. This precipitated the technological development of tanks, air traffic control, flamethrowers, mobile X-ray machines and the modern sanitary towel.

Today, small causes still have large consequences. Even minor advancements in information technology can have an enormously disruptive impact on one thing in particular: the data centre.

Data centres aren’t what they used to be. In fact, they are completely unrecognisable from 20 years ago, especially at the sharp end of supercomputing where vast, complex AI systems demand extraordinarily high-spec physical infrastructure to perform.

Even within an everyday enterprise environment, the demand for digital innovation is driving increasingly virtualised and software-defined IT environments that are rapidly adaptable, extensible and scalable. Everywhere, the result is a far more dynamic data centre environment; denser and more sensitive to change than ever.

Sadly, the laws of physics don’t change, which means that the data centre still has the same fundamental challenge to contend with: how do you keep it running, whatever happens? That question naturally breaks down into three more: how do you keep it powered, how do you keep it cool and how do you keep it safe? Each of these gets harder to do at a sensible cost as the scale increases and the stakes get higher.

Keeping the lights on

Intelligent systems are very CPU-intensive; the more computational grunt you need, the more electrical power you are going to draw. You might be running many of these systems within your own private cloud, or you could be offloading some of that compute power to a public cloud provider.

If it’s the latter, you are less dependent on producing and safeguarding your own power, and more dependent on connectivity uptime with those interconnected systems. Whichever it is, the upshot is the same: if the local power fails, you’re up the creek without a paddle.

Power protection typically takes the form of uninterruptible power supply (UPS) systems; hulking great battery-based arrays of standby power waiting to take up the slack in the event of a partial or full system outage. Scale isn’t the issue with UPS systems, but rapid scalability can be. Look for modularity of UPS design so you can grow easily and cost-effectively with a solution that is permanently right-sized to your prevailing need.
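
To make ‘right-sized’ concrete, here is a minimal sketch of the arithmetic behind modular UPS sizing. The module rating, load figures and N+1 redundancy level are illustrative assumptions, not vendor specifications.

import math

def modules_required(it_load_kw: float, module_kw: float, redundancy: int = 1) -> int:
    """Modules needed to carry the load, plus 'redundancy' spare modules (N+1 by default)."""
    base = math.ceil(it_load_kw / module_kw)
    return base + redundancy

print(modules_required(120, 50))   # 120 kW load on 50 kW modules: 3 for the load + 1 spare = 4
print(modules_required(240, 50))   # load doubles next year: add modules, not a new frame: 5 + 1 = 6

The point of the modular approach is visible in the second line: growth is handled by slotting in more modules rather than replacing an oversized monolithic unit bought years in advance.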

Chilling out

Greater processing loads, driven on by AI and other digital initiatives, directly increase heat output from IT equipment. Left to its own devices, without the proper safeguards in place, anything with a chipset will cook itself from the inside out. Couple this with the fact that IT equipment continues to miniaturise and grow denser, packing more memory, IOPS and CPU power into 1U of rack space than many data centres used to hold in an entire cabinet.

Hence precision cooling is as critical to maintaining an effective high-performance data centre environment as power protection. But, perversely enough, cooling is itself a major consumer of electrical power. Not only do you need to factor cooling systems into your UPS calculations; you also need to consider how much heat they themselves produce!
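
As a rough illustration of why cooling belongs in the power budget: broadly, every kilowatt drawn by IT equipment becomes a kilowatt of heat to remove, and the plant removing it draws power in proportion to its coefficient of performance (COP). The figures below are assumptions for the sketch only, not measured values.

def total_power_budget(it_load_kw: float, cooling_cop: float = 3.0) -> float:
    """IT load plus the electrical power the cooling plant needs to reject that heat."""
    heat_to_remove_kw = it_load_kw                        # ~1 kW of IT power becomes ~1 kW of heat
    cooling_power_kw = heat_to_remove_kw / cooling_cop    # assumed COP of 3, purely illustrative
    return it_load_kw + cooling_power_kw

print(round(total_power_budget(240)))   # 240 kW of IT load needs roughly 320 kW overall

That larger figure, not the raw IT load, is the starting point for the UPS sizing discussed above.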

Cooling is rich with innovation at the moment, both in novel ways of getting cooling physically inside racks and cabinets rather than into the general data centre space, and in the field of ‘free cooling’, which seeks to capture and reuse Mother Nature’s cooling gifts (cold air and cold water) so you can save potentially millions of pounds off your electricity bill.
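
A simple way to gauge what free cooling might be worth is to price the chiller energy it displaces. The sketch below uses entirely hypothetical figures for load, COP, free-cooling hours and tariff; at hyperscale loads the same arithmetic is where the millions of pounds come from.

def annual_free_cooling_saving(it_load_kw: float, cooling_cop: float,
                               free_cooling_hours: float, tariff_per_kwh: float) -> float:
    """Electricity cost avoided while outside air or water does the chillers' job."""
    chiller_power_kw = it_load_kw / cooling_cop
    return chiller_power_kw * free_cooling_hours * tariff_per_kwh

# 1 MW of IT load, chillers at COP 3, 4,000 free-cooling hours a year, £0.12 per kWh:
print(f"£{annual_free_cooling_saving(1000, 3.0, 4000, 0.12):,.0f}")   # £160,000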

Locking down

With so much focus upon cyber scourges like ransomware and malware, it’s easy to overlook the rudimentary physical security risks that threaten data centre environments. Organisations are usually able to identify these risks and invest in applicable door entry, CCTV and fire suppression solutions.

But some fail to monitor these systems effectively, lulling themselves into a false sense of security. Evidence also repeatedly shows that physical risks are just as likely to come from accidents as from malicious events. You need to know what risks you are looking for.

To address this challenge, we are seeing DCIM (Data Centre Infrastructure Management) solutions gaining more traction of late. DCIM is all about delivering visibility and control over the entire data centre ecosystem, safeguarding the environment as a whole and giving early indications of emerging threats to specific elements of infrastructure.

DCIM is also a proactive tool that is very handy for establishing governance processes and for planning your way out of trouble to ensure maximum safety and efficiency, reducing costs as well as risks.
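
At its simplest, the early-warning role described above boils down to watching environmental and electrical readings and flagging drift before it becomes an outage. The snippet below is a deliberately minimal sketch of that idea, with hypothetical sensor names and thresholds; real DCIM platforms do this across thousands of monitored points.

# Hypothetical thresholds for a handful of monitored points
THRESHOLDS = {"rack_inlet_temp_c": 27.0, "humidity_pct": 60.0, "ups_load_pct": 80.0}

def check_readings(readings):
    """Return a warning for every reading that has crossed its threshold."""
    return [
        f"WARNING: {name} at {value} exceeds {THRESHOLDS[name]}"
        for name, value in readings.items()
        if name in THRESHOLDS and value > THRESHOLDS[name]
    ]

for alert in check_readings({"rack_inlet_temp_c": 29.5, "humidity_pct": 44.0, "ups_load_pct": 83.0}):
    print(alert)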

Nick Claxson is Managing Director at Comtec Enterprises, which provides IT infrastructure, data centre and communications solutions.
