Algorithm reduces data centre power use
An algorithm that could significantly reduce the power consumption of data centres around the world has been developed by Danish researchers.
One of the major downsides to heightened internet usage around the world is its impact on the climate, largely because of the massive amount of electricity consumed by computer servers.
In fact, studies have demonstrated that global data centres consume more than 400 terawatt-hours of electricity annually. This accounts for approximately 2% of the world’s total greenhouse gas emissions and currently equals all emissions from global air traffic. Data centre electricity consumption is also expected to double by 2025.
Reducing this consumption has therefore become a pressing concern, and the researchers from the University of Copenhagen expect that major IT companies will deploy the algorithm immediately.
Several years ago, Professor Mikkel Thorup from the University of Copenhagen’s Department of Computer Science was among a group of researchers behind an algorithm that addressed part of the problem, by producing a groundbreaking method to streamline computer server workflows. This particular body of work ultimately saved both energy and resources. Tech giants including Vimeo and Google implemented the algorithm in their systems, with online video platform Vimeo reporting that the algorithm had reduced their bandwidth usage by a factor of eight.
Now, Thorup and two fellow UCPH researchers have developed another algorithm that makes it possible to address a fundamental problem in computer systems — the fact that some servers become overloaded while other servers have capacity left.
“We have found an algorithm that removes one of the major causes of overloaded servers once and for all. Our initial algorithm was a huge improvement over the way industry had been doing things, but this version is many times better and reduces resource usage to the greatest extent possible. Furthermore, it is free to use for all,” Thorup said.
Dramatic rise in internet traffic
The algorithm addresses the problem of servers becoming overloaded as they receive more requests from clients than they have the capacity to handle. This happens as users pile in to watch a certain Vimeo video or Netflix film. As a result, systems often need to shift clients around many times to achieve a balanced distribution among servers.
The mathematical calculation required to achieve this balancing act is extraordinarily difficult, as up to a billion servers can be involved, and the system is ever-volatile as clients and servers join and leave. This leads to congestion and server breakdowns, as well as resource consumption that adds to the overall climate impact.
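The article does not spell out the underlying technique, but the family of methods it describes is consistent hashing: servers and clients are hashed onto a ring, and each client is served by the first server clockwise from its position. The sketch below (hypothetical names, not the researchers' actual code) shows why this limits reshuffling: when one server joins or leaves, only the clients in that server's arc of the ring need to move.

```python
import hashlib
from bisect import bisect_right

def ring_hash(key: str) -> int:
    # Map any key to a point on a fixed-size hash ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

class HashRing:
    """Minimal consistent-hashing sketch (illustrative only): each client
    is served by the first server clockwise from its ring position, so
    adding or removing one server only moves clients in that arc."""

    def __init__(self, servers):
        self.points = sorted((ring_hash(s), s) for s in servers)
        self.keys = [p for p, _ in self.points]

    def server_for(self, client: str) -> str:
        # First server at or after the client's position, wrapping around.
        i = bisect_right(self.keys, ring_hash(client)) % len(self.points)
        return self.points[i][1]

ring = HashRing([f"server-{i}" for i in range(8)])
assignment = {c: ring.server_for(c) for c in (f"client-{i}" for i in range(1000))}
```

Plain consistent hashing, however, sets no cap on how many clients one server can attract, which is exactly the overload problem described above.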
“As internet traffic soars explosively, the problem will continue to grow. Therefore, we need a scalable solution that doesn’t depend on the number of servers involved. Our algorithm provides exactly such a solution,” Thorup said.
According to the American IT firm Cisco, internet traffic is projected to triple between 2017 and 2022. Next year, online videos will make up 82% of all internet traffic.
From 100 steps to 10
The new algorithm ensures that clients are distributed as evenly as possible among servers, by moving them around as little as possible, and by retrieving content as locally as possible.
For example, to ensure that client distribution among servers is balanced so that no server is more than 10% more burdened than others, the old algorithm could deal with an update by moving a client one hundred times. The new algorithm reduces this to 10 moves, even when there are billions of clients and servers in the system. Mathematically stated: if the balance is to be kept within a factor of 1+1/X, the number of moves per update drops from X² to X, and this is generally impossible to improve upon.
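One way to picture the balance guarantee (a sketch under assumed details, not the published algorithm) is consistent hashing with a load cap: each server accepts at most (1+1/X) times the average load, and a client that lands on a full server probes clockwise to the next server with spare capacity. With X = 10 (a 10% imbalance bound), no server ever exceeds the cap:

```python
import hashlib
from bisect import bisect_right
from math import ceil

def ring_hash(key: str) -> int:
    # Map any key to a point on a fixed-size hash ring.
    return int(hashlib.md5(key.encode()).hexdigest(), 16) % (2**32)

def assign_bounded(servers, clients, eps=0.1):
    """Bounded-load sketch: cap every server at ceil((1+eps) * average
    load), where eps = 1/X; a client goes to the first server clockwise
    from its hash that still has spare capacity."""
    points = sorted((ring_hash(s), s) for s in servers)
    keys = [p for p, _ in points]
    cap = ceil((1 + eps) * len(clients) / len(servers))
    load = {s: 0 for s in servers}
    placement = {}
    for c in clients:
        i = bisect_right(keys, ring_hash(c)) % len(points)
        while load[points[i][1]] >= cap:
            i = (i + 1) % len(points)  # probe clockwise past full servers
        s = points[i][1]
        load[s] += 1
        placement[c] = s
    return placement, cap

servers = [f"server-{i}" for i in range(10)]
clients = [f"client-{i}" for i in range(1000)]
placement, cap = assign_bounded(servers, clients, eps=0.1)
```

The cap enforces the "no server more than 10% over average" property; the researchers' contribution concerns how few clients must be moved to maintain it as the system churns.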
As many large IT firms have already implemented Thorup’s original algorithm, he believes that industry will adopt the new one immediately — and that it may already be in use.